Sramana Mitra: Let me give you what I’m thinking. It’s slightly different from what you’re thinking.
Where I agree with you completely is that it's not here yet. I don't actually think you're saying that AGI is here; AGI is not here yet.
Sandeep Sardana: I’m just assuming that it’s here.
Sramana Mitra: I don’t think it’s here yet. At the moment, the state of the union is human in the loop, and it's probably going to stay human in the loop for a little while longer.
Every AI workflow being created in every vertical and domain that has a human in the loop has the potential for that human to be removed from the loop as more domain knowledge and fluidity develop.
I believe that is going to happen at scale.
The Economist recently published a very good analysis: companies aren't hiring for junior roles. It's not that people are being fired; they're simply not being hired. Entry-level hiring is stalling at companies adopting AI.
That means if people cannot enter industries and get trained, where are the senior people going to come from? Someone has to train them so they can become senior employees.
That raises a real question. Right now it's non-hiring, but it's only a matter of time before the firing starts.
When we say there will be no jobs, it’s not that there won’t be any jobs at all, but a dramatic reduction in labor's share of the economy is going to be incredibly disruptive and unsustainable without universal basic income. So, I believe universal basic income has to come; otherwise, there will be social unrest.
On the point of the data moat: if you understand the structure of data, there's a lot of synthetic data being created to train AI models. So that data moat is a question mark, and it will continue to be one as we go along.
Then comes the question: can AI manage AI? Can technology manage technology? We are not there yet. Agentic technology exists, but it hasn’t been fully woven into enterprises yet. You gave an example of an industry, automotive dealerships, where agentic AI is being adopted successfully because the people bringing in the technology understand the domain.
There’s still a human in the loop, but those humans will start to get out of the loop as enterprises become more comfortable adopting AI. So is it a year, two years, five years, ten years? We don’t know yet. The timeline is the question mark; the trajectory is not.
We are going to move toward robotics. China is already building factories with no humans—fully automated, dark factories that conserve energy and operate like clockwork. That’s the future of manufacturing. Self-driving cars and trucks will also eliminate enormous numbers of jobs.
At the moment, I believe small language models are great because they don’t hallucinate as much. For domain-specific problems, they’re perfect and can extract enormous value.
But AGI doesn’t need small language models. AGI can operate with superintelligence, and that will make many of the solutions we’re developing now obsolete as more powerful technology enters the system.
That will bring another layer of disruption. We don’t know whether that’s three years, ten years, or twenty years away.
That would be my synthesis. The end of capitalism will come. We just don’t know when.
Sandeep Sardana: I don’t disagree with the majority of your thesis. You’re right—it’s headed in that direction. The question is timing. In the meantime, the world still needs to operate. Things need to become more efficient, and productivity gains still need to happen.
We’re investing in companies solving today’s problems for the foreseeable future. Where those companies go from there—time will tell. The domain expertise we look for right now to build out industry operating systems—that problem still sustains.
Sramana Mitra: At the moment, domain expertise is still very valuable and is a defensible moat.
Sandeep Sardana: We’re still focused on that. We still see value and a lot of demand for that kind of work, and we continue to plug away. If AGI takes over the world, maybe my agent will be talking to your agent—your MCP server talking to mine—and we won’t have this podcast anymore. Maybe there’ll be agents building their own podcasts.
But who will they be serving? Probably other agents. I imagine a world where agents support what we want to do, make us more productive, useful, and happier. That’s the world I’d like to see.
Sramana Mitra: Let’s hope for that world. What worries me is what’s happened with social media. The algorithms have taken over and are creating real damage.
Sandeep Sardana: I agree with you.
Sramana Mitra: If that trend continues and becomes more pervasive—well, I read this morning that in Brazil there are podcasts or streaming programs calling people to prayer early in the morning. That’s good. It’s much better than the hate being spread by social media and algorithms that incite people to bad actions. I’d rather people pray in the morning, calm down, and feel better about the world. That’s a good thing.
Sandeep Sardana: That’s a good thing.
Sramana Mitra: On that note, Sandeep, pleasure talking to you. We’ll be in touch and bring you back again—there’s so much going on all the time.
Sandeep Sardana: Thank you for staying on top of it and keeping us all aware of what’s happening in the world.
Sramana Mitra: See you soon. Bye.
This segment is part 5 in the series : 1Mby1M Virtual Accelerator AI Investor Forum: Sandeep Sardana, BluePointe Ventures