
Build AI, push the limits
Open Source Meets AI | Dmitry Grankin, CEO of Vexa.ai
In conversation with Dmitry Grankin, CEO of Vexa.ai
Amit: Welcome to another episode of the Build AI podcast, where we talk to people who are building in the AI space. Today I have Dmitry Grankin. Thanks, Dmitry, for joining us from Lisbon, Portugal.
Dmitry: Thanks Amit for having me.
Amit: Yeah, absolutely. I just wanted to know right off the bat—how did you get into the AI space and specifically what you're building in the meeting notes and transcription space?
Dmitry: Yeah, I've been working as a data scientist since 2018. I had a decent career before that, but since 2018 I've been playing around with machine learning, with neural nets. A couple of years ago, I decided it was time to build something that people want and went into venture building. I went to Unicorn AI, which is a startup hub here in Lisbon, and that's where the idea of an AI assistant appeared, back in late 2023. So I built a prototype, got a few customers there, then raised some pre-seed funding and built the product. A year has passed since then, and I thought to myself: why not actually open source what I've been doing?
Why? Because I've been a software developer, a machine learning practitioner, and I've been using open source all the time. Everything I build is on top of another open source project or product. I started to look at commercial open source products that are on the market and actually found that it's a great commercial idea. It's a business move.
Why is that? Because it builds trust. It builds reach. It gives you a community. It makes marketing much easier, and it brings super users, not just people who try your product once and may or may not stick with it. With proprietary products, the major issue is how you get feedback from users. With open source, you get very genuine and very deep feedback.
Amit: This is a great introduction. I was reading about your company, Vexa.ai. It says "open-source API for real-time meeting transcription." Sometimes the sense of what a founder's vision is and what they're really trying to solve doesn't come out. So I wanted to ask you: what was that moment which made you decide to solve this problem and that this is a big enough problem?
Dmitry: The problem we were just starting with—it was a very early time for making transcripts, for making assistants. It was clear to me that people were spending a lot of time online in online meetings, and they continue to do so. The market is growing for meeting assistants ever since.
But actually, the infrastructure part of it is what's really interesting to me right now, because the number of products, the competition, is growing really fast in this market. It's actually very tough to deliver something in the B2C space because it requires a lot of user experience work. But I myself don't believe in the UI in the long run. Why? Because agents are taking over the internet. My vision is that people will interact more and more with personal agents, and agents hate user interfaces. Agents want APIs.
So the implication here is that we want to deliver infrastructure to those who want to build products, and we want to deliver APIs for agents.
Amit: That's absolutely right. There's so much traction in the whole sort of AI meeting assistants space. There were a lot of traditional players like Fireflies and Otter that already existed in the market. Now they're trying to go niche into verticals as well. For example, Fireflies recently announced something for wealth advisors or financial advisors. And then every other field—clinics, medical field—has their own set of meeting tools. Underlying all this, either people are building on their own or they're using one of the infrastructure API providers like you or Recall or what have you. So that's how I understand the landscape. Is that correct?
Dmitry: Yes, it looks like this. My vision is that your agent will probably be able to solve your specific problem much better than any niche product. If you have an Anthropic model, an OpenAI model, any of these with direct access to your meeting transcripts, that will work much better than whatever niche product you're delivering. Because AI is general, in a sense, right now.
Amit: That's very interesting. Let's say somebody like Fireflies, Otter, or one of these mass-market players started and are now going vertical. They'd be building it as a proprietary vertical stack—everything from infrastructure to final meeting transcription is one set of tools or APIs they've picked. But what you're saying is maybe let's disassociate it into two parts. One is a bot that joins the meeting, records it, and then you can use Claude or OpenAI or any of the models to do the actual transcription, and then maybe there's another agent that does what you specify. So you're saying it's better to dismantle this and build it that way?
Dmitry: Yes, absolutely. There's a protocol—MCP, Model Context Protocol—open source and developed by Anthropic, and it's gaining traction. Why? Because for now, you can connect Claude Desktop, you can connect Cursor, Windsurf, to any source of tools, to any source of information. My vision is that every assistant will have access to these MCP servers. It's another great topic to discuss—how this will actually work, because MCP doesn't solve all the problems here. The search tool problem, like a Google of tools, is something very interesting to discuss. But basically, there will be an agent. There will be a user interface—my own interface, your own interface—which will do the best job for you. And there will be the internet of tools, of APIs, of MCP servers. There will be something like a search in between—some Google of tools—between the internet of APIs and the agent. Those APIs will probably develop themselves because APIs are a much simpler thing than the contemporary consumer product, which is all about human behavior and psychology. You just get rid of all that stuff. You build the API, which is a really simple thing that AI essentially is building for you right now and will do more and more. You get your personal system and an ecosystem of tools.
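To make Dmitry's picture concrete: MCP messages are JSON-RPC 2.0, and an agent invokes a server's tool with a `tools/call` request. The sketch below shows only the message shape; the tool name and arguments are hypothetical stand-ins for a meeting-transcript tool, not Vexa's actual API.

```python
import json

# Illustrative only: the shape of an MCP (Model Context Protocol) tool call.
# MCP is built on JSON-RPC 2.0. The tool name and arguments here are
# hypothetical, standing in for a meeting-transcript tool an agent might use.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_meeting_transcript",  # hypothetical tool name
        "arguments": {"meeting_id": "abc-123", "language": "en"},
    },
}

print(json.dumps(tool_call, indent=2))
```

An MCP server advertises its tools via a `tools/list` request in the same format, which is what would let the "Google of tools" layer Dmitry describes discover and route between APIs.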
Amit: Super interesting. I think we'll come back to this point of how the future looks. But those initial inputs are very interesting. I want to switch gears and go back to what you were talking about earlier—that your product is open source. That's a bold move. My question is just to understand it a little bit more: why go open source when most companies in the space lock things up?
Dmitry: There are a few very serious implications here. You're trading off something—any decision is a tradeoff. What you get is traction, because it's much easier to market an open source product. It's much easier to reach influencers who have a bigger audience with an open source product. It's about trust. If your product serves companies, enterprises, there are a lot of companies that wouldn't even talk to you if you're not open source. These two reasons are already enough.
In terms of funding, an open source product becomes a transparent thing. It's transparent not only because the codebase is open, but because the community is open too. So an open source company is as transparent as, or maybe even more transparent than, public companies like Google or Microsoft that you can trade on the stock exchange. And that makes it much easier to get funding.
What are the downsides? You will probably compete with your own product. You'll compete with your code. You'll compete with those who deploy your code as competitors. But let's be honest: competition is rarely the real problem in the startup world; it was never the major thing. The major problem is the lack of feedback and connection to the market, the narrow bandwidth of information flow between you and the market.
Amit: Very interesting. When you talk about open source, there are a few things that come to mind. For example, the users, as you were also talking about earlier, are fairly technical. They hack their way into different types of use cases and they're power users. Is there a use case or thing that your users have built which totally surprised you—something you weren't expecting somebody to do?
Dmitry: Let's be clear—the open source version of Vexa is very early stage right now. I open sourced the proprietary product about a month ago. I got some feedback and understood that people just didn't know what to do with it. They basically told me: we want an API, we want this simple API. Then I realized there's Recall, a proprietary product that is basically doing what people are asking me for here, and I know Recall is doing really, really well. So why not open source it and make an open source API for a meeting bot with real-time transcription? I've built this product, and today I plan to release version 0.3, which will be a public beta. So we'll see how it goes.
Amit: From a product and technology perspective, can you share what's under the hood that makes your product special or superior to what exists in the market?
Dmitry: It's low-latency, real-time transcription built into the pipeline. Recall makes these bots and streams the audio out, piping it into other providers; we do everything in one place. We get rid of video for now and concentrate on Google Meet as an input, with Microsoft Teams and Zoom next. We take this bot input and get instantaneous output as transcription or translation, which is a cool implication: you can set any language and it will do real-time translation for you, from one language to another, in real time.
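The pipeline Dmitry describes (bot audio in, transcript or translation segments out as they arrive) can be sketched as a streaming generator. This is a minimal illustration, not Vexa's actual code: the segment format and function names are assumptions, and the ASR step is a placeholder so the sketch runs.

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Segment:
    """One chunk of real-time output: what was said, and in which language."""
    text: str
    language: str

def fake_asr(chunk: bytes) -> str:
    # Placeholder so the sketch runs; a real system would call an ASR
    # (and optionally translation) model here.
    return chunk.decode("utf-8", errors="ignore")

def transcribe_stream(audio_chunks: Iterator[bytes],
                      target_language: str = "en") -> Iterator[Segment]:
    """Hypothetical pipeline: each audio chunk is transcribed as it arrives,
    so consumers see output with low latency instead of waiting for the
    whole meeting to finish."""
    for chunk in audio_chunks:
        yield Segment(text=fake_asr(chunk), language=target_language)

# Two tiny "audio" chunks stand in for the bot's live stream.
segments = list(transcribe_stream(iter([b"hello", b"world"])))
```

The point of the generator shape is that transcription and translation happen inside the same loop that consumes the audio, which is the "everything in one place" contrast with piping audio out to a separate provider.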
Amit: Even at Guy Ventures, one of the products we have is transcription of meetings. On the surface, it sounds very easy until you have to do it at scale. There are so many accents and jargon and privacy concerns. I wanted to ask you, since you're focused on this space: what is the hardest technical challenge that you have cracked?
Dmitry: The hardest thing is that you need to scale, and you need to build it right to make it scalable. I've been through this paradigm shift several times, and now I think I'm doing things right because it's built with Kubernetes in mind. For now, it's a product in development on a single server, but it's been done right so that it can just be thrown into the cloud and auto-scale itself. So to answer your question, the hardest thing is knowing how to do things right. AI makes it easier now because you start concentrating on system design from the very early stage of the product—on the system design, not on the code, not on how to build a prototype, because prototypes should be built really, really easily. You start thinking about scale with a product of thousands and millions of users in mind.
Amit: I keep thinking about the open source nature of your business. As an entrepreneur myself, building a business and trying to find sustainable, profitable growth—how do you enlighten us and the audience? When you think about value capture using open source, how do you think about it? How does it become a financially sustainable business?
Dmitry: I think it's pretty straightforward. Any open source product you have, you have to spend some resources to deploy and manage the deployment. If you deliver it ready to go and it's simpler and cheaper to use your SaaS product instead of deploying your code, people will go and use it. And you're the best person to know how to do it at scale.
Amit: How does your company generate revenue streams as you grow?
Dmitry: For now, we don't have any revenue streams. But as soon as I deliver a production API, people are ready to pay for it because there's no free transcription available on the market, and it's really expensive. I can make it three times less expensive. If you use Recall, you'll pay about $1 per hour and another 30 cents for transcription. I'm pretty sure I'll be able to deliver at least a third of that price.
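The pricing math here is simple enough to write down. This is back-of-the-envelope arithmetic using only the figures Dmitry quotes in the conversation; actual prices for either product may differ.

```python
# Figures quoted above: Recall at roughly $1/hour for the bot plus
# $0.30/hour for transcription; Dmitry aims for about a third of that total.
recall_bot_per_hour = 1.00
recall_transcription_per_hour = 0.30
recall_total = recall_bot_per_hour + recall_transcription_per_hour  # $1.30/hour

target_total = recall_total / 3  # roughly $0.43/hour

print(f"Recall: ${recall_total:.2f}/hr, target: ${target_total:.2f}/hr")
```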
Amit: So let me understand—it's open source in the sense that you don't charge any monthly recurring fees, and you're going to charge for transcription. Is that what you're saying?
Dmitry: Yes. I'm saying you have a choice. You can deploy it yourself and run it for free, but actually you'll need infrastructure, you'll need GPUs, you'll need someone to deploy and maintain it. Alternatively, you can just grab an API key and start using it right away, pay as you go, and it will scale with you. But you'll always know that as soon as you're big enough, you'll be able to deploy it yourself. You'll feel very flexible. You'll feel free to deploy it yourself when you're ready, if you ever want to switch to your own deployment.
Amit: That's the nature of open source: you're making that tradeoff. Typically, open source businesses give away the core functionality, the API, whatever is there, and people pay for implementation, maintenance, support, and so on. So that's the same model you're following?
Dmitry: Yes, maybe the first idea for a revenue source is the public API, and then we'll see how it goes.
Amit: Someone asked me this question: with AI and with all the vibe coding, people being able to build products easily, business folks testing things out in the market, people talking about vibe marketing and vibe sales—do you think the open source software movement will actually grow for some reason? I want to understand if there's a reason why we will see more open source software because of AI. Are there reasons like that, or do you think otherwise?
Dmitry: I think we will have just more coding, more products, more open source, more proprietary—more of everything. You have a fixed number of software developers, but you have a lot of newcomers, and you have more people and more code because a single person can produce 10, 20, 100 times more code. That's actually one of the reasons I decided to go open source: there's no moat, no technological moat anymore. Whatever you're building right now, in months anyone could vibe-code it. In six months, someone will probably be able to vibe-code Vexa with just one prompt. I don't know—it might be the case. But I'll establish a community now, so instead of having to beat Vexa, people will just clone it, enhance it, and contribute back.
Amit: Super interesting. I also like the framework you're using. On one side, there's an argument about optimization—people say because of AI, a lot of people are going to lose jobs, and so on. But then there's a second school of thought, which I subscribe to, which is the maximization framework. It says exactly what you said: there will be more product, more code, there will be more open source, more proprietary, because we will try to automate everything which was not even possible before to automate.
As an example, I've been looking at speech-to-text for a very long time, since the Nuance days. Then Google's machine learning models came along and reached about 95-96% accuracy. Until then, commercial applications weren't possible. Now with LLMs, as we reach 99.99% or maybe even 100% very soon, you can think about commercial applications like medical transcription or wealth advisors using this. Now imagine there are thousands of such unlocks and new use cases that will get figured out over the next 5-10 years.
Dmitry: I think we've been through many technological shifts as human beings—in the 19th century and earlier, like industrialization. We adapt like an ecosystem. We adapt to whatever comes next, and we'll have more product, which is a good thing. Obviously some people will lose their jobs, but hopefully they'll find new ones with better productivity.
Amit: Very interesting. Since you're an early stage startup, I think there's a lot of lessons people can learn regarding team structure, culture—just the lessons all of us learn as founders. Everybody goes through their own journey and has a bunch of learnings that we can all share. One thing we've seen—I was asking my CTO co-founder, who also grew up just like you in the machine learning era, did a lot of AI research, published papers, and now is doing commercial applications. He says that now the team looks very different. His technology team is very different from the previous startup he did 5 years back. I wanted to ask you: how has your team changed in the way it looks and the way it works after the advent of AI?
Dmitry: Good question. When I raised pre-seed funding, I hired a team of professionals—seven people. I had a couple of frontend engineers, a backend engineer, QA, and someone taking care of agile things, scrum. We'd been building this proprietary product with daily meetings, scrum meetings, all that stuff. But that was less than a year ago. Now I don't have any permanent team at all. The things we were doing last year with all these people, I can easily do myself now. Take any frontend task: I basically don't know how to do frontend coding at all. I have no idea how it works; I can barely read the code. But I just vibe-code the frontend easily now.
Amit: What tool do you use for frontend?
Dmitry: I start with V0 to get an initial version on the latest stack, then continue in Cursor as usual. It works perfectly well. Right now, with the latest Gemini 2.5 model, it works amazingly well.
Amit: So you had a full team, and now you're able to do everything—frontend, backend, DevOps, infrastructure, everything yourself, and of course the AI components as well.
Dmitry: Yeah, I have a few people who contribute to the open source work, but about 90% of it I'm doing myself.
Amit: One question though: vibe coding is great—we don't have to have big teams to build great stuff anymore. But recently, one of our engineers said he was working for 5 days, shipped something, and it started working, but then lots of issues suddenly came up—not very critical issues—and we went back and checked and basically found that Cursor had made a lot of logical issues in the code. Does that happen? Is that common?
Dmitry: Absolutely. It's very common. The danger is that you become lazy. You just go ahead and fix this, fix that. This shouldn't happen. It's still tough work. I'd say it's like riding a crazy horse. I've switched from normal walking to riding a crazy horse. You can get faster to whatever place, but it's tough to keep it on track. You need to be very dedicated to know what you're doing, where you're going, how you want things to be exactly, what's the design you're implementing, what's the system design, what's the algorithm—not the code, but the algorithm that you're implementing. That's something very, very important. Still, things happen—duplicate code appears here and there, stupid things—because the AI struggles to keep on track, and you struggle to keep it on track. But basically, that's your only job here: to keep this thing doing what you need.
Amit: Interesting. I like that wild horse analogy. So basically, you have to learn to ride it in every respect. It's risky, but it can get you where you're going faster. Just a couple of very quick last questions. What is the biggest mistake or most painful lesson that you have learned as a founder?
Dmitry: The biggest mistake was that I never considered open sourcing the things I've been doing for all those years. Now I think that the default mode should be that you open source what you are doing. Why? Because I had a few amazing products that I just didn't have a clear idea how to put on the market. Now they're just outdated. But if those were open source, they would have gotten traction.
For example, I was playing with the CLIP model, which matches images to text, in late 2021, beginning of 2022. We had the question of how to store a large number of vectors and search them not in memory but in some kind of database. We understood that we needed a vector database, and we found those very early stage vector databases that everybody is now using. Back then, these things were all just starting to come into existence. If I had open sourced that work in late 2021, beginning of 2022, it would have been a success—I'm pretty sure about it. So open source everything you are doing as a default state.
Amit: This is great. I think it was very interesting talking to you, especially given that you're building in the infrastructure API space. As a background, as an angel investor, I've invested in about 30 companies—22 of them were financial infrastructure API companies. One thing I've seen is that there will be a lot of consumer apps and consumer companies that will come and fail, but there are these infrastructure companies which are powering the entire market, and these are great B2B businesses. So congratulations on both picking this space as well as open source, and I wish you all the best in your journey. Thank you so much for your time today, Dmitry. It was lovely talking to you.
Dmitry: Thank you very much, Amit. It was a deep, incredible conversation. It was a great pleasure talking to you.



