
1Mby1M Virtual Accelerator AI Investor Forum: Marina and Nick Davidov, DVC (Part 2)

Posted on Saturday, Sep 13th 2025

Sramana Mitra: You’re not the only ones who invest only in repeat founders. Most of the venture capital industry does the same. It’s very hard for first-time founders to raise money because of that. We help a lot of first-time founders, and it’s a real challenge.

Let’s go deeper into your AI investment thesis with a more specific question.

One thing that’s been coming up a lot—especially on LinkedIn, which has become the water cooler of the world—is this criticism of “wrappers” around ChatGPT, Gemini, or other LLMs. Many investors say they don’t like to invest in wrappers. The main concerns are defensibility, differentiation, and long-term competitive advantage. After the first wave of customer adoption, it becomes easy for others to enter the market.

What’s your perspective on that?

I’ll frame it with a recent success story: Lovable. It’s a wrapper around ChatGPT and has grown incredibly fast, reportedly reaching $100 million in annual recurring revenue in about eight months. They offer no-code web development tools and face many competitors. Another similar startup was acquired by Wix for $80 million within six months of founding. So it’s clearly an active space, and these are useful examples to anchor the discussion.

Let me hear your thoughts.

Nick Davidov: First, about LinkedIn—it’s a global water cooler, yes, but sometimes it feels more like the procurement or accounting department’s water cooler than the C-suite. Often the real action is happening on Twitter.

Sramana Mitra: I don’t agree with that. The entrepreneurs are very active on LinkedIn.

Nick Davidov: Fair. But to the point: “Wrapper” is often used as a derogatory term for application-layer startups. But if you think about it, even ChatGPT is a wrapper over NVIDIA, which itself is a wrapper over TSMC, which is a wrapper over silicon. Similarly, Dropbox is a wrapper over Amazon S3, which is a wrapper over cloud storage.

The issue isn’t being a wrapper. It’s whether you’re adding real value.

If someone just exposes ChatGPT’s API through a Telegram bot with a couple of extra features and no UI or unique experience, then yes, that’s just a wrapper.

But if you build a compelling experience on top of it—like an application in an operating system—that’s different.

Think of LLMs as operating systems where the command line interface is the default. If you build a useful, well-designed application on top, you’re not a wrapper anymore. You’re creating user value.
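To make that distinction concrete, here is a minimal sketch of the kind of thin wrapper Nick describes, where the entire "product" simply relays the user's message to the model. It assumes the OpenAI Python SDK; the model name is illustrative, and no specific product's code is implied.

```python
# Minimal "thin wrapper": relay a user's message to the LLM API and
# return the reply verbatim. No context, no UI, no proprietary data.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(user_message: str) -> str:
    """Forward the message to the model and return its reply unchanged."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content

# A Telegram bot (or any chat frontend) would simply call answer() on
# each incoming message and post the result back; that is the whole app.
```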

One of our most successful investments—Perplexity AI—gets called a wrapper all the time. But look at what they’ve built: their own search, their own models for search and information presentation. They do rely on large external models for reasoning, but their strength is in their ability to choose and integrate the best tools available—Claude, GPT-5, or others—into a seamless user experience.

That’s not a wrapper. That’s a platform.

Sramana Mitra: That’s a great example to double-click on. What do you think is the secret of Perplexity’s success? Who are the users?

Nick Davidov: Mostly knowledge workers—people doing information retrieval and analysis. From there, it spreads to other parts of their life.

For me, the browser and Perplexity itself have become irreplaceable. A good test is: how much would your users pay to avoid losing your product? When Perplexity first released their search, I would’ve paid $2,000 a month not to lose access.

The team, led by Aravind, focuses on things that don’t change, even in an environment where everything changes every week. In the last two weeks we’ve seen more progress than in the previous two months, more in two months than in the previous two years, and so on. To persevere through that kind of change, you have to focus on fundamentals: users will never go looking for a slower, less accurate, or less enjoyable product. So the team focuses on execution quality and speed of shipping.

Strategically, they understood early that it’s not just about prompt engineering—it’s about context engineering. The context you present to the model vastly affects the output. Better context = better results.

Perplexity builds better context through search. They’ve essentially built a better search engine for LLMs than Google. Also, by controlling the browser and app, they know what the user has seen, what accounts they’re logged into, what they’re researching—and can tailor the responses accordingly. Not for ads, but for true utility.

So when I say, “Get me another tuna bowl,” it knows which restaurant, which delivery service, how I pay for it—it just handles it.
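As a rough illustration of the context-engineering idea, and not Perplexity's actual pipeline, here is a sketch in which retrieved snippets are assembled into the prompt before the question is asked. The search() function is a hypothetical stand-in for a product's own index; the OpenAI SDK, model name, and prompt wording are assumptions.

```python
# Hedged sketch of context engineering: retrieve relevant material
# first, then present it to the model alongside the question.
# search() is a hypothetical stand-in for a proprietary search index;
# the OpenAI SDK, model name, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

def search(query: str, k: int = 3) -> list[str]:
    """Hypothetical: return the k most relevant snippets for the query."""
    # A real system would query a crawled, ranked index here.
    return [f"(snippet {i + 1} about {query!r})" for i in range(k)]

def answer_with_context(question: str) -> str:
    """Assemble retrieved snippets into the prompt, then ask the model."""
    snippets = search(question)
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the sources below, citing them "
                        "by [number].\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The design point is that the quality of the assembled context dominates the quality of the answer, which is why controlling the search index, and, as Nick notes, the browser, matters so much.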

Sramana Mitra: Let’s change gears a little. For those listening who may not be as far along in following these points: Perplexity isn’t the only LLM product that offers this, but the feature you’re describing, related questions, is very useful. It ties into what you’re saying about prompt and context engineering.

When you start using Perplexity and asking questions, it gives you lots of related questions. That’s helpful—you’re prompted for prompts, which is productive.
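One simple way such a feature can be prototyped, purely as an illustration and surely far cruder than the real thing, is to ask the model itself for follow-ups. The prompt wording, model name, and SDK here are assumptions.

```python
# Hedged sketch of a "related questions" feature: ask the model itself
# to suggest follow-ups to the user's last query. The prompt wording,
# model name, and SDK are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def related_questions(question: str, n: int = 4) -> list[str]:
    """Return n short follow-up questions suggested by the model."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (f"Suggest {n} short follow-up questions to: "
                        f"{question}\nReturn one per line, no numbering."),
        }],
    )
    # Drop any blank lines the model may emit.
    return [line for line in response.choices[0].message.content.splitlines()
            if line.strip()]
```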

Are you saying others can’t do this? Or is it just hard to do well?

Nick Davidov: It’s not that others can’t—it’s just very hard to do it right. And very hard to do it with their speed and quality.

Every time a new model gets released—say, an open-weights model—Perplexity integrates it almost instantly. The paper can be three hours old, and it’s already built in. Their time to first token is incredible.

The way they host models and the speed at which they execute are simply out of reach for most companies.

They focus on two things: creating the most enjoyable possible user experience, and making the tech the fastest and most accurate. That’s what makes them different.

This segment is part 2 in the series: 1Mby1M Virtual Accelerator AI Investor Forum: Marina and Nick Davidov, DVC
