Sramana Mitra: How are the companies in the Valley that you’re investing in? Are they using offshore development centers where there is AI talent available?
Ankit Jain: Even in the last year, it has changed a lot. There is a lot of interesting AI development being done out of Ukraine. Both Pakistan and India have teams that are starting to specialize in AI. At least one of our startups has a team out of Pakistan that is doing a lot of their AI development.
Sramana Mitra: Interesting. I have never heard that before. Let’s talk about your portfolio companies. Talk about some of the highlights. Especially talk about what stage you get them in. How did you encounter them? What did >>>
Sramana Mitra: I was talking to a friend at a party last weekend. He’s a very experienced and successful serial entrepreneur. He has invested in an AI company that is doing very well. But this question of hiring AI talent is very serious right now. Let me ask you the geography question. What is your footprint? Where do you like to invest?
Ankit Jain: Before we get to that, let’s go back to the recruiting point because I think it’s a very important one. Valley folklore, and the reality, has been that the best companies are built with the best people. If it’s okay with you, we can spend a couple of minutes on the recruiting aspect. >>>
Sramana Mitra: Double-click down for me on your definition of early stage. You said your check size is from $1 million to $10 million. What is your definition of early stage? What does an AI startup need to show to be able to convince you that it has enough validation that there is something there?
Ankit Jain: That’s a very interesting question. I wish I had a clear answer like, “These are the things you need to convince any investor that you are fundable.” Every investor has his own view on this. We have a few things that we look for, and they change with the stage of the company. At the seed stage, we’re looking for a strong core team that we think can execute in a given market, what people would refer to as >>>
Responding to a popular request, we are now sharing transcripts of our investor podcast interviews in this new series. The following interview with Ankit Jain was recorded in May 2018.
Ankit Jain is Founding Partner at Gradient Ventures, Google’s AI venture fund.
Sramana Mitra: Let’s introduce you to our audience. Tell us about yourself a bit and introduce us to Gradient Ventures. What is the focus of the fund? How big is the fund?
Ankit Jain: Gradient Ventures is Google’s AI-focused early-stage venture fund. We invest $1 million to $10 million in companies that >>>
Sramana Mitra: Do you have any thoughts on this problem that is being discussed nowadays? AI is a bit of a black box, and all these biases that are creeping into AI are going to drive society over the next several decades. We don’t really have a good understanding of what the AI is actually doing in a lot of domains and a lot of AI applications. How does the world deal with that?
Steve Scott: That is a super good question, and it’s very real. I mentioned before that the marriage of AI and traditional simulations can help address that. People are very leery of taking what was done in simulations and just replacing it wholesale with a deep neural net, because you don’t really know if you can trust what’s going on and you lose some insight into what’s happening. You can’t peer inside and understand it. >>>
Steve Scott: With the advent of GPU computing, deep neural nets became practical to the point where you could get good enough performance to really do useful things with them. GPU computing is the application of processors that were designed for the highly parallel task of painting triangles on the screen to render graphics in real time.

It turns out you can use all those parallel functional units for ordinary computation. GPUs were the first processors that, at the single-processor level, were powerful enough to do meaningful deep neural net work. We could previously do it on very large systems, but that limits you to the small number of people who have those large systems. GPUs were the first ones that could do it on a single desktop at a useful scale. >>>
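[To make the GPU point concrete, here is a minimal sketch of running the same dense computation on the CPU and on a single GPU. PyTorch and the matrix size are purely illustrative assumptions; Scott does not name any particular framework.]

```python
# Illustrative sketch (assumed PyTorch): the same dense arithmetic that
# graphics rendering reduces to also powers neural-net layers, and a single
# GPU can run it across thousands of parallel functional units.
import torch

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

# On the CPU: correct, but the work is spread over a handful of cores.
c_cpu = a @ b

# On one GPU: the same operation, massively parallel.
if torch.cuda.is_available():
    c_gpu = (a.cuda() @ b.cuda()).cpu()
    # Results agree up to floating-point rounding.
    print((c_cpu - c_gpu).abs().max().item())
```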
Sramana Mitra: Can you give an example?
Steve Scott: If you think about deep neural networks in particular, there’s training and there’s inference. Training is the learning part, where you take a bunch of data and, based on that, train a model to be able to provide some function. Inference, of course, is using that model, which has already done the learning, to make decisions on new data. The inference problem is sort of a throughput problem.

Once you’ve got the model designed, you can run lots of data through it and make decisions very quickly. The training problem itself takes a lot of compute and a lot of communication. This is the sort of thing that a Cray system >>>
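[As an illustration of the training-versus-inference split described above, here is a hedged sketch, again assuming PyTorch; the model, data, and hyperparameters are placeholders rather than anything Scott specifies.]

```python
# Training vs. inference, as a minimal (assumed-PyTorch) sketch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training: repeated forward and backward passes over batches of data.
# This is the compute- and communication-heavy phase.
for step in range(100):
    x = torch.randn(256, 32)            # synthetic training batch
    y = torch.randint(0, 2, (256,))     # synthetic labels
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()                     # gradients flow back through the net
    opt.step()

# Inference: the trained model is run forward only on new data;
# it is essentially a throughput problem.
model.eval()
with torch.no_grad():
    decisions = model(torch.randn(1024, 32)).argmax(dim=1)
```

[When training is spread across many processors, the backward pass and the exchange of gradients are what make it a communication problem as well as a compute problem, which is the context for the large systems mentioned above.]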