Build AI, push the limits

She's Building Voice AI To Reinvent Hiring | Roli Gupta, babblebots.ai

Build AI Podcast

Tune in to our podcast.


Tune in to gAI Ventures podcast


A podcast by gAI Ventures, exploring the latest in AI, startups, and innovation. Join us for expert insights, founder/investor stories, and deep dives into how AI is shaping the future.


In conversation with Andreas (Roli Gupta, babblebots.ai)

"Vibe coders, they get you to the tip of the iceberg. But 90% of the problem is the old-style engineering issues: stability, backup, storage, auto-scaling."

"Why did we build this technology? Of course, everybody is like, 'Guys, why are you doing hiring?' Actually, candidates like talking to AI recruiters. If you are, let's say, in your 40s and you feel like somebody needs to give you white-glove treatment before you take a job, that is a very different thing. It's actually a sale."

"In your specific case, you are actually designing a product that goes a little bit further, right? You are almost designing judgment. How do you build that nuance without introducing biases and other issues? I think anybody who has actually spent time in HR (and we have heard this on record from many of the companies we have worked with) will tell you AI is reducing bias, as counterintuitive as that is to a lot of people."

"Since we are also building in the AI space, I feel like in every use case there is much more than what we see on the websites, and there is so much complexity hidden. And especially in voice AI, that was the biggest risk: would candidates like it? And we have completely debunked that. If you wanted to assess, say, a thousand candidates with a coding round, you would get a 15–20% response rate, because people don't want to do those coding-round exercises anymore, right? Babblebots interviews get at least 3x that."

"You are your customers—explainers, ROI. What metrics do you use? One of our products is closer to an outcome, right? And the other is more around capability. The whole idea was: can you remove people at the top of the funnel? Give my team the top five, ten people who should be spoken to, who are the most highly qualified. And it will be very, very easy for them. It's a less stressful thing. Everybody enjoys it. And if there is one thing I want to leave this audience with today, it is that..."

[Podcast Starts]

Amit:
Hello and welcome to yet another episode of [Podcast Name]. Today I have Roli Gupta, who is the Founder and CEO of Babblebots AI. Hi, guys!

Roli: Ya, hey! Interesting to talk to you again. We recently spoke about what's happening in the voice AI recruitment space, and you know, I have been following your company, and it was always on my agenda to do this podcast.

Amit: So for the audience, Babblebots AI works for companies that are hiring, and their AI agents basically conduct interviews, right? And Roli is obviously going to expand on that, but just in terms of background: she is an IIT Bombay grad, a Tuck School of Business alum, and has also studied at the University of Michigan. Before Babblebots, she spent about 20 years in startups across the US and India, in roles around product and growth, on multiple zero-to-one journeys. In her last job, she was the SVP at Toppr. So, you know, a very relevant background.

The company launched in 2022, and then the product in 2023, and they have already secured pre-seed funding from SINE IIT Bombay, Z21 Ventures, and other angels. I am so excited—you know, this is the first time I am actually doing this podcast with somebody who's doing an AI-first company. So welcome again.

Roli: Wow! Okay, that's awesome.

Amit: So, maybe, you know, it always helps to give some context to the audience. What is the problem that you are solving? I could probably define it in a very simplistic way, but if you could actually sort of expand on it: What is the problem you are solving? When did you come across it actually, and how did you build the conviction to solve it as well? You know, that initial story?

Roli: Ya, absolutely. So thanks for having me here, Amit. And I am your first one in AI-first. I am thinking I have almost forgotten what people are building if it's not AI now.

Amit: Yeah.

Roli: So, ya, I know founders are building a lot of things, but ya, it's magical. I mean, I think we got on this journey a little bit earlier than I think a few other people, but I honestly have felt... so this is my 10th year as an entrepreneur. Actually, I have finished my 10 years as an entrepreneur, and this is my second company. I just think that the conversation changes every week.

One of the reasons I moved on from my previous startup was that it was in the renewable energy sector. It's a very good impact sector. I was super passionate about, like, you know, covering the whole world with solar energy panels and whatnot, at this company called Ushva. The company is doing really well now, but what I realized was that it was a relatively slower-moving sector at that time, and the conversations don't evolve. So what happens in such cases is that if you go to conferences, if you meet other people from the industry, everybody is kind of talking about the same stuff, year after year.

So, to me, it felt like, you know, the conversation has to evolve, right? And from there, suddenly, it came to AI, where the conversations are evolving every day. Every day, literally! Another problem is that you can't keep up with all the advancements in AI. But it's accelerating. I am loving every moment of building Babblebots. How did I come across this?

See, as an entrepreneur, I think all of us, at some point, realize in a very internal way that we cannot build whatever we want to build if we don't have the right team. That realization comes to every entrepreneur eventually. The question is: do you do anything about it? Do you solve it for yourself? Do you solve it for others?

So when I had moved out of my first startup, I was trying to figure out what is the next big problem I want to look at and do something about. Edtech was doing really well at this time; this is 2019. And I ended up speaking to the CEO of Toppr, Ishan. He said that they had raised a round, I think Series B and Series C, and they needed to build this whole distribution with salespeople. And effectively, the mandate that I picked up was to hire hundreds of salespeople and train them, and re-train them every month.

So that seemed like an impossible task, because every process around recruitment was so people-oriented, so slow and inefficient, that it was actually an eye-opener to me: how broken hiring was at scale. And I am an engineer; I think a lot like an engineer and a product person. And to me, recruitment ended up looking like an engine that was 1% efficient, right? You hire one out of the 100 people that you are, in some way, engaged with. And I was like, there is no other machine in the world that runs at such low efficiency. So I felt like there is something that needs to be done about that.

And then I think some dots connected, because I have spent about 10 years in the Bay Area as well, working in a couple of startups. One of them was actually building natural language capabilities for a search engine. So I had some idea about using computational linguistics and computers to build products. So I think that's where it connected: I felt that the bottleneck in hiring was around our ability to speak to a lot of people with a very fresh mind every time, which is very, very difficult if you want to hire hundreds of people, and then be able to be a little bit structured about who you are hiring.

So that was really the seed of Babblebots: to remove that bottleneck where, at the top of the funnel in hiring, we need people. So the whole idea was: can you remove people at the top of the funnel? Give my team the top five, ten people who should be spoken to, who are the most highly qualified. And it will be very, very easy for them. It's a less stressful thing. Everybody enjoys doing it, and you are able to get to the right talent really fast.

So I think in the absence of removing that bottleneck, I could not find a way to actually get to that outcome. So, that's really how I think the story of Babblebots started. And the inspiration came, but it's evolved quite a bit. I mean, we are still solving the same problem, but I think it's even bigger than I thought. So I think in some way, it's a happy problem to have, like, "Oh my god." Like, you know, Toppr was a very narrow window, but I think this problem is really big, and there are some really fundamental things that you can do with AI, and we can talk about.

Amit: Yeah. No, absolutely. I can relate to it so much. I am a third-time founder in India, and mostly I built companies doing very high-value work that required very high analytical talent. That's like finding a needle in a haystack in India. But I think the biggest problem we were facing, to your point, is the amount of management time that goes into talking to a whole lot of people, which could have easily been taken care of. It's just the amount of time it takes for everybody to do those interviews. So it makes a lot of sense, actually.

And, you know, since we are also building in the AI space, I feel like in every use case there is much more than what people see on our websites, and there is so much complexity hidden. And especially, I feel, in voice AI. My favorite example is: if people are calling up a restaurant to book a table, they have a lot of motivation to get that table booked, because they need to take the family or friends. So they will be okay listening to an AI talking to them and taking the reservation. But an outbound sales call just doesn't work, because the other guy has no motivation, right?

So basically what I was getting to is, given that this is an AI product, we know there is a lot more than meets the eye on the website and Twitter and LinkedIn and all. And this is especially like, somewhat, you know, something where if an AI recruiter is calling, it has to be sounding very natural, and the person should be, like, you know, motivated enough to go through the interview. What are the kind of complexities that you have solved in building the solution?

Roli: Yeah. Okay. Great question. And I am glad that we are going a little bit deep on the technology side, because I think what has happened is, a lot of people have started assuming, "Yeh toh ban jayega" ("oh, this will get built easily"), right? You kind of almost write off whatever has been built. So, just a little background: we started building in January of 2022. And at that time, ChatGPT had not even been released, so none of this GenAI madness was actually on the horizon. ChatGPT was released in November of 2022, and in 2023 we were basically getting something new from one of these big labs every day, pretty much. Then in 2024, people started talking about agents. And in 2025, I think they are talking about at least some amount of enterprise adoption. But 2025 is turning out to be more a year of these web browser agents, and vibe coding is, I think, the flavor of this year so far, right?

So, we started building. The reason I was bringing this up is that there were no easy stacks that we could just connect with, right? So you had to do STT (Speech-to-Text), Text-to-Speech, the whole processing, figuring out what should be the next follow-on question, figuring out all the exceptions that happen in a real-life conversation. If I ask you to say something again, if I ask you to wait a minute while I am thinking, if you want to say, "Hey, I want to just take a pause." There are so many little things that we humans handle while expending zero energy on those decisions, right? But AI, at that time, had to be trained to actually understand all of these complications. Not even complications, honestly; they are just how a natural conversation looks, right?
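The loop described here (speech-to-text, deciding the next conversational move, text-to-speech, plus handling interjections like "say that again" or "give me a moment") can be sketched roughly as below. All function names and phrase lists are illustrative stand-ins, not Babblebots' actual stack:

```python
# Rough sketch of one dialogue turn in an STT -> decision -> TTS voice agent.
# The phrase lists and follow-up logic are illustrative, not a real product's.

REPEAT_PHRASES = ("say that again", "repeat", "pardon")
PAUSE_PHRASES = ("wait a minute", "hold on", "give me a moment")

def classify_utterance(text: str) -> str:
    """Detect conversational 'exceptions' before treating text as an answer."""
    lowered = text.lower()
    if any(p in lowered for p in REPEAT_PHRASES):
        return "repeat"
    if any(p in lowered for p in PAUSE_PHRASES):
        return "pause"
    return "answer"

def next_turn(last_question: str, utterance: str) -> str:
    """One turn of the interview loop: route exceptions, else follow up."""
    kind = classify_utterance(utterance)
    if kind == "repeat":
        return last_question          # re-ask the same question verbatim
    if kind == "pause":
        return "Sure, take your time."
    # In a real system this branch would call an LLM for a probing follow-up.
    return f"Thanks. Can you go deeper on: {utterance[:40]}?"
```

The point of the sketch is that the exception routing has to happen before any "intelligent" response is generated, which is exactly the kind of work that was invisible pre-GenAI.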

So we spent a lot of time trying to mimic that natural conversation through the conversational AI frameworks that were available at that time. And we built a bunch on our own. We even ended up building a lot of models... not too many, though, because those would have become a little bit redundant after GenAI, I think. So we spent a lot of time fixing the voice. And there are a few main things with voice, especially when you are having this real-life conversation, right?

So, one is actually pause detection: when do I know you are done talking as a candidate, right? So with human beings, we don't think about that anymore. I can pretty much gauge whether you are done with your answer or not. So that is one.

The second thing is actually latency. So, you and I are talking, and I am taking like five seconds to respond every time. This podcast will get over in 10 minutes, because it's a very unpleasant conversation then, because it's too slow, right?

And the third thing, which used to be a very key design principle, and still is: our first agent is called Tina, and I have a picture of a board with the Tina design principles on it. One of them was that Tina could never sound stupid. Right? Again, if you are talking to a bot and you realize that this thing is not really understanding you, it's a very big turn-off. Even if you are a job seeker, you want to have an intelligent conversation. So: intelligence at the speed of human conversation, while not speaking over you when you are speaking, being respectful about your part in the conversation. Those are the three things that, even now, are not fully solved, but definitely more solved than before. Those are the things you don't see behind the product, because the conversation just feels like, "Oh, it feels like I am talking to somebody on Google Meet," but you are actually not talking to a human on Google Meet. And to simulate that was a big endeavor that we started off with.
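The pause-detection problem Roli describes can be sketched as a silence-threshold endpointer. The threshold value and the filler-word heuristic below are purely illustrative; production systems combine acoustic and semantic signals:

```python
# Minimal sketch of silence-based endpointing ("is the candidate done talking?").
# Thresholds are illustrative; real systems also use acoustic and semantic cues.

def is_end_of_turn(silence_ms: int, partial_transcript: str,
                   threshold_ms: int = 700) -> bool:
    """Treat a pause as end-of-turn unless the transcript looks unfinished."""
    trailing_fillers = {"um", "uh", "so", "and", "but", "because"}
    words = partial_transcript.rstrip(".?! ").split()
    last_word = words[-1].lower() if words else ""
    if last_word in trailing_fillers:
        # Candidate is likely mid-thought: wait much longer before replying.
        return silence_ms >= threshold_ms * 3
    return silence_ms >= threshold_ms
```

This also illustrates the latency tension she mentions: the shorter the threshold, the snappier the agent, but the more often it talks over a candidate who was merely pausing.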

Amit: Yeah. Many things that you said, actually, I can relate to. On the first one itself, we tried out outbound calling agents a couple of times, I mean, in a couple of industries. And one of the things was, the minute there is a voicemail, all our agents fail; they can't figure out what to do.

Roli: So I will tell you one thing from there, Amit.

So when we built this technology, of course, everybody was like, "Guys, why are you doing hiring? This is a generally applicable stack, right? So why are you only using it for hiring?" And we actually spent some cycles brainstorming: is there some other use case that would be a good one for us to enter the market with? We couldn't find one, because sales is the last place this works. You don't like to buy from a bot. It's a human problem. It has nothing to do with technology. That is the thing, right?

And so now you have one side: you are somebody from whom you are trying to take a check, right? Of course, that person has all the power in this conversation. But on the other side, there is somebody who wants a check from you. So even if the technology is not as clean and sophisticated, this person is the job seeker, is going to go through hoops to be able to get a chance to actually represent themselves and have at least a chance to talk to somebody at the company, which otherwise they would not have gotten, because a resume would have been rejected, or many, many other things would have happened, right?

So the power dynamics are completely reversed when it comes to sales versus when it comes to hiring. And that is why hiring is going to be one of the first use cases that will see mass adoption, before phone bots. I mean, I mean, you can spam people on the phones, but I just don't think that that's actually going to... yeah, I think it's not a good human UX.

Amit: I think that's the problem, yeah. Motivation is required. Absolutely. And so what I gather is that when you look at these number one, two, three, and the other issues you solve in that complexity, there is a lot of engineering that would have gone into it. So tell us a little bit about that. Apart from you, who else is there on the team? What part of product engineering do you take care of? Is there a CTO or an engineering head?

Roli: Yeah, very happy, always happy to talk about my team. So we've been around three and a half years. The team in the beginning is not the full team now, but we do have some people from that time as well. I am a product person: I am the interface between what a customer wants, what a user wants, and the experience we can provide. But I am not the one writing code. So we have never had, in some sense, a full-time CTO. But we have had very sharp engineers. One of our lead engineers is from one of the IITs; he did his BTech in CS with artificial intelligence and data science. Very, very relatable. He's been with us for almost three years now and built some of the earliest versions of what we had built. We had a couple of people spend more than two and a half years, many of them from IITs. So we had a very sharp team, especially because this is a very ambiguous area. You want people who are comfortable in ambiguity and figuring out a path, because there was nobody to guide us on what to do. Nobody knew. It's not even a function of seniority or juniority. It is just that nobody knew how to really do this, right?

So, I think we have always been a team of fairly fast-moving and smart engineers who are comfortable with ambiguity and can build something out. We all also have an in-built desire to solve this problem. I think everyone in the team has had some frustration with either being hired or hiring, so all of us can imagine what a better experience looks like. In that sense, there is a lot of cohesiveness in how we want to approach this.

So, yeah, we are about 14 people now. Half the team is in engineering, then myself, and the rest are on the customer side and marketing. That's the split right now. And I think we have always had a majority of the team in engineering, because we are building a lot of things from scratch.

Amit: Yeah. And it's not easy to build an engineering team. So it was tough for you.

Roli: Yes. Yeah, yeah. And especially with my product, I will tell you: traditional SaaS, compared to GenAI SaaS, feels like a piece of cake right now. Because SaaS is a deterministic workflow, right? If you can get the database stable enough and you know the workflow (and I know it takes time to get the workflow right), once you get it right, you have very high, like 99.5%, availability of everything working as it was designed. It's very deterministic. The biggest problem with GenAI, and it's not only us, it's anybody who is wrangling with GenAI, is the non-deterministic nature. You yourself would have seen: if you ask the same question to ChatGPT five times, you get a variation in the responses. It's not the same. If you multiply that over the multiple turns in a conversation, every conversation starts to have a very bespoke flavor, with the issues that come around that in terms of accuracy, stability, and other things. So, yeah, that is something we have had to learn as we go along. We are still learning, yeah.
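One common way to cope with the non-determinism Roli describes is to validate every model response against an expected shape and retry rather than trusting any single completion. A minimal sketch, with a stubbed, deliberately flaky `call_model` standing in for a real client:

```python
# Non-determinism mitigation sketch: validate each response against a required
# shape and retry. `call_model` is a stub simulating a flaky model, not a real
# API; swap in an actual client in practice.

import json
import random

def call_model(prompt: str) -> str:
    # Stub: sometimes returns malformed output, like a real LLM occasionally does.
    if random.random() < 0.3:
        return "Sorry, here is your JSON: {oops"
    return json.dumps({"score": 4, "verdict": "advance"})

def ask_with_validation(prompt: str, retries: int = 5) -> dict:
    """Retry until the output parses as JSON and has the fields we require."""
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue
        if {"score", "verdict"} <= data.keys():
            return data
    raise RuntimeError("model never produced valid output")
```

This doesn't make the model deterministic, but it bounds the damage: downstream code only ever sees responses that match the contract.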

Amit: I actually did an episode with my CPO on various approaches to solving some of these problems. Absolutely real problems. Yeah. In fact, on that, since you also mentioned vibe coding: there are a lot of influencers on Twitter and LinkedIn who talk about these five-minute, bubble-wrapped apps and all that. And people who are actually building enterprise applications, or even something for SMBs and B2B business applications, realize there is so much that goes before and after, solving all this complexity. My latest analogy for wrangling with GenAI is that it's an iceberg. Vibe coding gets you to the tip of the iceberg. But 90% of the problem is the old-style engineering issues: stability, backup, storage, making sure you have enough instances, how you auto-scale these things. I don't think vibe coding is getting you there. You can create a personal fitness app, maybe, but yeah.

Amit: I think there is a systems engineering topic to pick.

Roli: A systems engineering. That's right. That is right.

Amit: And in your specific case, you are actually designing a product that goes a little bit further, right? You are almost designing judgment. And there are multiple things that come along with that: there could be biases, there could be other nuances too. Since you take care of the product, tell us a little bit about how you build that nuance without introducing biases and other issues.

Roli: Yeah. So, I think anybody who has actually spent time in HR (and we have heard this on record from many of the companies we have worked with) will tell you AI is reducing bias. As counterintuitive as that is to a lot of people, and this is not only India, Amit, it's anywhere in the world. Human hiring has a lot of bias: there is a tendency for us to want to work with people who look like us, who sound like us, who maybe are our own gender, and all the other parameters by which you can sort people. Skill is not always the number one criterion, right? And I am not even saying that people are doing it consciously. Many times you just unconsciously like people who are like yourself. There is enough research around this to say that's what happens. That's what salespeople do: they mirror your personality, because that's the fastest way to build trust, right?

So, if you talk to any HR leader who has actually looked at GenAI solutions with an open mind, they will tell you the solutions are in fact reducing bias, because they are giving a uniform platform to every candidate. I am pretty sure, definitely for Babblebots, but even for other ecosystem players, I am confident nobody is actually evaluating gender or religion or which area of the world you are coming from, right, from the resume. Because there is no upside in doing that.

I think earlier it used to happen, because there was a lot of human judgment involved. At the screening level, at the first level, second level, I would say it's only skill-based now. It's skill-based hiring, right? You evaluate somebody on how good they are at the job they have applied for, and whether you fit the parameters within which the job is available: salary, availability, office preferences, and other things like that. So in that sense, you are evaluating people in a much more standardized way.

So, first of all, I would just like to say that bias is less. Just by default, with these systems, bias is less. There are places where I can see bias can come in, and that is where we have to be mindful. Let me give you an example, and I will show how we remove it. There is one model that we created, which is a very popular model, to understand the confidence with which a candidate is speaking.

Amit: Just before you go ahead, for the sake of the audience, can you clarify: are you building on top of the foundational models? Is it what they call a wrapper? I mean, there is a lot of complexity in... yeah. Or have you built your own, fine-tuned models?

Roli: So, that's what I was about to say. We actually have a bunch of our own models. But we are not using our own models for everything, because it's not necessary, right? If you see the back end of Babblebots, it's a combination of things we have built ourselves, things where we have taken an LLM and fine-tuned it, and, in some cases, the LLM with no fine-tuning at all. So there is a whole spectrum: build it in-house, buy it and then refine it, versus just literally pinging APIs in real time and going on with it. So it's definitely not a traditional wrapper, the way you would think of a wrapper where there is no fine-tuning happening. There is a lot of stuff we have built.
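The spectrum she describes (build in-house, fine-tune an LLM, or call an API directly) amounts to a per-task routing decision. A toy sketch of that idea, with entirely hypothetical task names and backends:

```python
# Illustrative per-task routing across the build/fine-tune/API spectrum.
# Task names and backend labels are hypothetical, not Babblebots' actual ones.

ROUTES = {
    "confidence_scoring": "in_house",    # custom model: cheap, task-specific
    "english_rating":     "in_house",
    "follow_up_question": "fine_tuned",  # LLM fine-tuned on interview data
    "summarize_answer":   "raw_api",     # plain API call, no tuning needed
}

def backend_for(task: str) -> str:
    """Pick the cheapest backend that still meets the task's quality bar."""
    return ROUTES.get(task, "raw_api")   # default: just ping the API
```

The design choice this encodes is the one she names next: route each task to whatever meets the experience bar at the lowest cost, rather than standardizing on one backend.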

But actually, I don't necessarily want to speak about this beyond what serves a good user experience and how you manage cost. Those are the only two things, right? Can you provide a great experience at an affordable price? All our decisions are based on that: "Is my candidate liking the experience?" The candidates don't give us the money, but they are sort of the true north: they need to be liking the experience. And the second thing, of course, is how I can be more cost-efficient about it. Because although LLMs have become cheap, voice is not necessarily as cheap right now. And because we are such a voice-heavy user experience, we have to be careful about what we are doing.

So I was giving an example of one of the models we created in-house, where biases could have crept in had we not been mindful. This is around confidence and... actually, it was around English language, sorry. I will give you an example: English language. The model is determining whether you are good in English or not, right? Now, if we were measuring for mother-tongue influence, we would have given different ratings to different people. But in our case, we don't. If you even asked us to do an analysis on mother-tongue influence, we have no way to do that right now. And in fact, I know there are large companies who want that, because it is important for their work. But in those cases, you have to ask for it in writing, confirming that you actually want it. For our baseline models, what we did is record hundreds of these interviews and find annotators across the country. We gave the same set of data to annotators across the country to rate on low, medium, high. What is the quality of the English? Three buckets only: low, medium, high. And the reason we did it that way was that the rating should not be biased by how a person speaks English in Mumbai versus how they speak in Calcutta versus how they speak in the North versus the South, especially in Tier 2 cities in these parts of the country, right? They vary; strong accents. If we had not taken care to have annotators spread across regions, some regional judgment may have crept in. So this is an example of how you have to be mindful of some of these things to make sure you are keeping all these kinds of biases away.
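The annotation scheme described here (the same recordings rated low/medium/high by annotators spread across regions) is typically aggregated with a majority vote, flagging samples where regions disagree badly. A sketch with illustrative data shapes:

```python
# Sketch of aggregating low/medium/high ratings from annotators in different
# regions, so no single region's accent judgment drives the final label.
# The region keys and data shapes are illustrative only.

from collections import Counter

def majority_label(ratings: dict[str, str]) -> str:
    """ratings maps annotator region -> 'low' | 'medium' | 'high'."""
    counts = Counter(ratings.values())
    label, _ = counts.most_common(1)[0]
    return label

def regionally_consistent(ratings: dict[str, str]) -> bool:
    """Flag samples where regions disagree badly; these need manual review."""
    return len(set(ratings.values())) <= 2
```

Samples that fail the consistency check are exactly the ones where a "regional judgment" would otherwise have leaked into the training labels.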

Amit: Super interesting. I had not thought about so many of these aspects of bias. So thank you.

Actually, now that we have talked a lot about the product, let's talk a little bit about sales. It all comes down to that in startups: the first few customers. How do you convince them, in general, but especially when you have to change behavior from a traditional recruiter to what you are doing? It would be great if you could throw some light on how you acquired customers, what the objections were, and how you overcame them.

Roli: Yeah, absolutely. So, for any of these startups, I think the first customers come from people that you know, right? Because at least they will give you a chance; at least they listen to you. So it did come from people, though not necessarily very close ones. When I was doing the research for Babblebots, I spoke to a lot of people before we launched, before we even wrote the first line of code. I spent maybe three, four months talking to different kinds of stakeholders. Funnily, just as backstory: in the beginning, we thought of launching Babblebots as an upskilling platform to help people interview better. But then I spoke to a lot of people, and it just happened that the kind of people I was speaking to were in senior positions, and they were like, "Roli, why are you trying to take money from people who don't have a job? It's not a great business model, versus companies that have this as a problem and will pay for it." So we said, "OK, never mind," and we pivoted to B2B. And that is how we actually raised funding. I don't think we would have been able to raise funding for a B2C solution, because for the Indian investors, first of all, edtech was dead by then. Edtech was dead, and nobody believed that voice could be done. When I was proposing what I described today, it seemed unbelievable. These things seem so simple in hindsight, but when you are living through it, there is huge disbelief that this could be possible to build, right? That is what I saw at the time. But once we moved to B2B, we were able to raise a little bit of funds to get started.

And sorry, yeah. So it took us about a year, actually a little more than a year, to launch the product in the market. We had some design partners who came from people we knew. But then we did a launch, and after that, a lot more customers we didn't previously know started to come.

The key objection, if we go back to what the key objection actually was from the early customers, is that most people were not confident that candidates would like it. And if there is one thing I want to leave this audience with today, it is that candidates actually like talking to AI recruiters. If you are, let's say, in your 40s and you feel like somebody needs to give you white-glove treatment before you take a job, that is a very different thing. It's actually a sale. It's not an interview. It's not an assessment, right? But 95% of the country is not that. They are people who are earlier in their careers, or even mid-career, and they want the right opportunities to come to them. So they are willing. They don't have the same kind of thinking.

Amit: Is it because the AI interview being done by Babblebots is slightly better, or much better, than the quality of the average recruiter out there?

Roli: Absolutely. Absolutely. Because there are a couple of things that happen. And I say this with all due respect to all the recruiters out there. First of all, you can talk to the AI recruiters anytime you want. So half-hour interviews are happening at night and on weekends. What can the poor recruiter do, right? They are not working at that time.

Second thing: you can speak in your mixed language. For example, I will give you an example of a company, a large, listed company working in the infrastructure sector. All the recruiters were based in South Mumbai, and all the candidates were in UP and Bihar, working on a road project. That mismatch is so high. Just being able to carry on a conversation... and I am from UP, so I totally empathized with the candidates they were talking about, who used to come for these interviews. You see that they feel more confident, that they are able to represent themselves, because they don't feel judged. Candidates have actually told us that they don't feel judged in these interviews. Accent can be managed too. The accents of some of these bots are not very polished in the normal sense; they are, I would say, more mainstream Hindi, mainstream English. So there is a relatability: you are speaking to someone who speaks like you. That's the second thing.

The third thing is that there is no time limit, right? If the candidate wants to speak for half an hour, we are not stopping that interview. You are excited to speak to Tina (Tina is the name of our first agent) for half an hour? Keep going. So you feel like you have a full chance to represent yourself. You can talk to them anytime, and they sound more like you. Those are the more subtle things, I would say. Then there is the conversation itself, which is where the LLMs come in, by the way. What I told you so far was more around voice and access. Where LLMs come in is that if this is a highly technical candidate, the LLM will come back with a very nice, deep, probing question that goes deeper into whatever they spoke about, whether it is sales or technical or some other affiliated field. The LLM acts like a human brain in some sense, right? And it can go deeper. That level of knowledge is not there with a lot of recruiters, because it's a very specific area and you are going in deep. So even in terms of the quality of the probing you can do to evaluate somebody's skills, these AI recruiters are very, very good.

So that is the thing I want to leave people with. This was the biggest non-technology, adoption-wise risk for people: will candidates like it? And we have completely debunked it. To compare: if you wanted to assess, say, a thousand candidates with a coding round, you will have maybe a 15–20% response rate, because people don't want to do those coding-round exercises anymore, right? Babblebots interviews get at least 3x that.
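As a rough sketch of the funnel math behind those numbers (the 15–20% and "at least 3x" figures come from the conversation; the counts and the midpoint rate are purely illustrative):

```python
# Rough funnel comparison using the response rates quoted above.
# Rates and counts are illustrative, not actual Babblebots metrics.

def completed_screens(invited: int, response_rate: float) -> int:
    """Number of invited candidates who actually complete the screening step."""
    return round(invited * response_rate)

invited = 1000
coding_round = completed_screens(invited, 0.175)         # midpoint of 15-20%
voice_interview = completed_screens(invited, 0.175 * 3)  # "at least 3x" that

print(f"coding round:    {coding_round} of {invited} respond")
print(f"voice interview: {voice_interview} of {invited} respond")
```

On the same thousand invitations, that is the difference between screening roughly 175 candidates and screening 500+, which is what "connecting with a lot more pipeline" means in practice.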

Amit: Yeah.

Roli: And because it's easy, you know, it's a 15-minute conversation. You can do it anytime. You don't feel judged. It's not like an assessment. So it's very... yeah, you are able to just connect with a lot more pipeline.

Amit: Fascinating insight. And so, your first few customers basically came from the network, people you had already spoken to?

Roli: Yeah, some of them we had spoken to, and that is how they got interested in us. It was very warm, so there was no sales involved. They were curious, we were able to deliver, and that's how it came.

Amit: Right. A quick question about how you and your customers actually measure this. ROI? What metrics do you use?

Roli: Yeah. So, see, we have a couple of products; our product is quite big now. So there are a couple of different ways in which people measure the outcome. In this whole conversation over the last couple of years, you would see that a lot of GenAI solutions have moved towards outcomes. One of our products is closer to an outcome, and the other is more around capability.

So, with smaller companies, like startups, we work in a much more outcome-based way, where it is about how many well-qualified people they are able to speak to, right? This whole idea of "I am able to talk to my top five candidates": how close is it to reality? They want to see how quickly they can fill the role, and whether they are able to get these people in front of the hiring manager sooner than through the normal process. Basically, time to hire is the metric that rules when it comes to hiring. So that is one thing.
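Time to hire, the metric Roli says "rules" here, is simple to compute once you track when a role opens and when its offer is accepted. A minimal sketch, where the dates and the field layout are invented for illustration:

```python
# Minimal time-to-hire calculation; all dates here are made up.
from datetime import date
from statistics import median

def time_to_hire_days(opened: date, offer_accepted: date) -> int:
    """Days from the role being opened to the offer being accepted."""
    return (offer_accepted - opened).days

# (role opened, offer accepted) pairs for three hypothetical roles
roles = [
    (date(2024, 1, 3), date(2024, 2, 14)),
    (date(2024, 1, 10), date(2024, 1, 31)),
    (date(2024, 2, 1), date(2024, 3, 22)),
]
durations = [time_to_hire_days(opened, accepted) for opened, accepted in roles]
print("median time to hire:", median(durations), "days")
```

Median rather than mean is a common choice for this metric, since one stuck role can distort an average badly.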

If you are working with larger enterprises, they actually care a lot more about productivity. Organizations now do not want to add a lot more headcount before they have evaluated the AI options, because they know that by using these AI assistants, their existing recruiters will be 2x, 3x more productive with a little bit of training and a little bit of time. So for example, one of our customers is Ajmera Realty. When we spoke to them about the impact Babblebots had on their recruitment process, it was around improvement of productivity: they felt that everything was 20 to 30% faster, that their internal metrics were improving, and that their existing team was able to do more with less.

Amit: So just to understand: basically they will do a bunch of things before and after, but they will have the AI actually do the first screening?

Roli: Yes, that's right. That is right. In fact, we are only focused on the top of the funnel. Once you have identified, through these AI scores and other things, that these are my top candidates, then negotiating with them and, in some sense, closing that offer, sending the offer letter out, those kinds of things are still done by humans. Even the hiring manager interviews, right? So it is only the first round of interviews that is getting done by AI, at least so far. We are now going deeper also, because the goal is still: can I find the top three candidates for any role very, very quickly? That is really the one singular thing we are going for with this agent.

Amit: A quick question, because for any startup this is an existential question: what if this gets copied? What if 10, 20 more competitors come in? What is the moat? The product? The context? The understanding? Sales, marketing? Because the AI part of it is getting very commoditized.

Roli: Yeah, absolutely. So, there are a few competitors now. They may be saying similar stuff to us, I don't know, but I have noticed that a lot more people are talking about AI recruiters than when we started speaking about it for the first time. So definitely, competition catches on.

When it comes to these products, I don't even think it's such a new thing, by the way. Given enough time, any product can be copied. Recently, Sridhar Vembu said that Zoho was never the first product to get adopted, but they were there at the right price point, with the right motion, and they were able to get their adoption. It wasn't like their products took off like a Cursor or something, right? So in that sense, it's easier to build now. Building has almost always been a little bit easy, unless you're doing fundamental tech, right? But now it's even easier than what we were used to.

But where we differentiate, by the way, is that, first of all, we have dealt with around 100,000 different candidates, so we have a lot of data. A lot of our conversations have become smoother because we have worked with so many of them. So if you just take our code base and replicate it today, you will still not be able to tap into the models that we have created, into the intelligence that we have created by dealing with so many candidates over a period of time. You could say, "OK, if I am around for a couple of years, I will be able to get there." But first of all, we will also be there in a couple of years, so we are moving ahead as well. And definitely, the data layer is not something you can just wake up and create, especially for interview data. Actually, I want to share one other insight. When we started to build our models, we thought, "Man, why do we have to get real data? Can we not just take recordings from YouTube and work on them?" But if you look, actual real interviews rarely get published, because of privacy concerns and other things like that, right? You have all kinds of content out there, but you will find hardly any content around actual interviews that have happened. And even when people record them on Google Meet and other tools, they are not stored in a way that makes them an asset so far. So there is no way for anybody to just suddenly come across reams of interview data, wherever that might be.

The second thing, which is even more interesting, and I think how this podcast came about in the first place: we took the platform and have modularized it in such a way that it's a LEGO brick. Any existing HR tech ecosystem player can plug into these components and, in fact, provide these GenAI solutions for interviews and assessment overnight. That is the new Babblebots API platform that we have launched, right? This is actually giving us a very good avenue to be deeply embedded in these ecosystems. A few of the large ecosystem players already are, and we are building more and more.

Amit: So you mean to say that all these HR SaaS platforms and other platforms out there can plug it in?

Roli: Yeah. They are plugging us in at a code level, right? That is very difficult to displace. Even if, with some Harry Potter magic, you could create Babblebots overnight, you would still not be embedded in all these different players, who are much bigger than Babblebots. We are hoping that we can give them enough hiring intelligence via an API: basically, we get the distribution layer from them, and they get the intelligence layer from us. And we want to keep innovating on the intelligence layer: psychometric tests, DASS, scoping challenges, everything can be done with GenAI. But if every team goes out to build these themselves, it is going to take them two or three years each, like it has taken us. That has been quite an eye-opener. We've signed multiple customers like that already, and it seems like something that is going to connect very well.
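The LEGO-brick pattern being described, an HR platform keeping its own distribution layer while delegating interview intelligence to an external provider, could be sketched roughly like this. Every class, field, and score here is hypothetical; the actual Babblebots API is not documented in this conversation.

```python
# Hypothetical sketch of the distribution-layer / intelligence-layer split.
# None of these names come from the real Babblebots API.
from dataclasses import dataclass

@dataclass
class InterviewResult:
    candidate_id: str
    score: float          # illustrative 0-100 fit score from the AI interview
    transcript_url: str

class IntelligenceProvider:
    """Stand-in for an external AI-interview API."""
    def run_interview(self, candidate_id: str, role: str) -> InterviewResult:
        # A real integration would call the provider over HTTP and wait for
        # the interview to finish; here we return a canned result to show
        # the shape of the contract.
        return InterviewResult(candidate_id, score=82.5,
                               transcript_url=f"https://provider.example/{candidate_id}")

class HRPlatform:
    """The distribution layer: owns the candidates, delegates screening."""
    def __init__(self, provider: IntelligenceProvider):
        self.provider = provider

    def screen(self, candidate_ids: list[str], role: str) -> list[InterviewResult]:
        results = [self.provider.run_interview(c, role) for c in candidate_ids]
        # The platform keeps its own ranking on top of the provider's scores.
        return sorted(results, key=lambda r: r.score, reverse=True)

platform = HRPlatform(IntelligenceProvider())
top = platform.screen(["c1", "c2"], role="backend-engineer")
print(top[0].candidate_id)
```

The point of the split is that the platform never has to build interviewing itself: swapping `IntelligenceProvider` for a real API client is the only change it would need.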

Amit: Right. No, absolutely. There are proprietary feedback loops, there are data sets, there are customers and partners; there's a lot more than just the product. On the product side, it's data. But on the outside, it's how you are embedded, how quickly customers could get out of your system. I think that is the thing.

Actually, before this podcast, I was thinking about this. Given that we are also AI builders, I thought, "If Roli has a recruitment AI, what if we build an interviewee-side AI agent? Because I don't want to take the call myself. I will send my consumer-side AI agent to take the interview call." Do you see a future in which this could happen?

Roli: You know, it's already happening. We have a very strong proctoring method. Proctoring, and trying to find people who are cheating during interviews, has been a cat-and-mouse game going on forever, right? Even before screens, there were other ways in which people used to cheat. So it's not something that we haven't thought about. We see it happening a lot more in technical cases. For non-technical roles, it's low single digits: 2, 3, 4% of people will be found or suspected of cheating. For technical roles, we have seen it around 12–14%. So there is definitely a higher propensity to do it there.

But you know what I think? Maybe the issue is, again, that the candidate is actually looking for a job. And if you do all this stuff and keep getting caught, it works against you; you want to put your best foot forward, basically. But I'll tell you...

Amit: My question was less to do with the cheating part of it, and more with gaming the system: a much more intelligent AI doing the interview. What I meant was, let's say somebody thinks about what would be a better Naukri today, somebody who can actually build a great database of multi-dimensional attributes on a candidate. So it's not just a two-dimensional resume; the AI has actually spoken to the candidate and understood everything. And now imagine Babblebots' AI talking to this AI agent and learning about the candidate. Is there a future in that? I feel like, if bots will be making payments to other bots, why not agents talking to other agents?

Roli: I think it's difficult. Because what you are saying is that your whole human context will be hosted on whatever is the next version of Naukri, let's say, right? Your whole person, your whole experience, everything you could say, "Amit," you are somehow able to port into a digital being, and that digital being is talking to an AI, and you can give, say, 50 interviews every month.

Amit: More. Yeah. Yeah, yeah, right.

Roli: Actually, I think we are quite far from that. I don't think it's something I even see in the next five years: being able to port your whole personality. You can have small conversations, but you can't have a 30-minute conversation where I feel confident that I know Amit at the end of it. You will be sounding like ChatGPT with a few dashes, basically, in that case. So there is no personality. You will be accurate, but there will be no personality. I doubt it, actually.

I think there will be some other format of that. Broadly, what I think is going to happen is that candidates will also have a little bit more agency. This whole hiring ecosystem has become a fascinating, almost academic problem for me at this point: what is the long-term workforce going to look like? If you see, tech roles are going down, right? In the last two, three years they have come down, and there was something published saying the number is actually lower than in 2019. Even compared to before COVID, tech roles are shrinking.

So it's almost like a K-shaped graph that I am imagining, where the very good candidates may, in fact, want to interview with 10, 15, 20 companies together, so that within two, three days they can decide which are the best options they have. The good ones will always be in demand. It's just that now they can move much faster, because they are not physically limited by their own presence. And there may be other ways in which they can demonstrate that they are actually very, very good.

The problem is going to be with the people who are not that good, right? That ratio is going to become more skewed. If it is one out of 100 today, it is going to be one out of 150, one out of 200, very quickly. In that case, when you have so little power in the equation, you will want to do whatever is humanly possible to land that opportunity. So I think there will be some people who will have an abundance because of AI, and some who will, in fact, need to work a little harder and move faster.

I think broadly, one thing that cuts across everything I am saying is that time-to-hire cycle times will just go down, because so much bottlenecking is getting removed with AI.

Roli: I hope that answers the question?

Amit: No, no, absolutely. I think that's a proper one-hour discussion in itself: what will happen in the future. But in the interest of time, there is just one last question I have. We could talk for hours, and I am sure there are so many other topics. But given both your understanding of recruitment and this industry, and the fact that you are building in this space, is there a contrarian view you have on the future of hiring itself that most of the industry will not agree with?

Roli: Which industry are we agreeing with? I think both, right? The traditional recruitment industry, and, let's say, some of our AI peers, hold some standard beliefs. The traditional recruitment industry will believe that, let's say, AI will never be able to do interviews. OK, maybe. But the AI peers, all of them, believe that it will be possible.

I will give you a little history of Babblebots, because we have been contrarian from day one. When we went back to the office, in February of '22, the whole world was still working from home. We said there is no way to build a high-tech, fast-moving business if you are working from home. At least, I didn't know how to build one. And I built a team that was equally passionate about working together in the same office. So we have always had very independent views on things.

The second contrarian view at that time was that candidates were not going to like this experience. That was the biggest risk I thought about, right? That also got debunked; it's actually not true. A lot of these fears are in our minds and never actually transpire, right? Not everybody wants to work from home. And some candidates are just happy to get a chance to speak.

The one I feel right now, and I don't know if it's contrarian or not, is something I have started to see. Recruitment is a $650 billion market. It's a really big market globally, right? That means that amount of money is changing hands every year for the services of making people show up at work and deliver their skill set towards economic goals. If you look at the top of the funnel, there is a lot of tech there: you have job boards, you have career pages, and they are tech-enabled. But as you go towards the bottom of the funnel, it becomes more and more process, with more and more people involved. At the absolute end, the situation is that somebody from HR is calling the candidate every two, three days during their two months' notice period: "Are you showing up or not? Are you showing up or not?" That is how human-intensive it is towards the bottom of the funnel.

And what I feel is going to happen is that this whole stack is going to get displaced. I don't think the ends will change, OK? But the stack is going to get displaced. For example, a lot of the work that was being done by human beings will still be done, but it is now getting done by AI agents. And we have seen, Amit, many times, that about 70–80% of the top-of-funnel work that recruiters used to do is getting done by AI right now. So now you have these databases of candidates, then you have AI agents, and then you have humans. That is going to be the new stack. And if you continue to believe that there is going to be no AI in the middle, I think that is a big mistake. People are in for a rude shock there.

Another interesting thing I will tell you: even that interface between AI agents and the job boards, the Naukris and the LinkedIns of the world, I think will be disrupted in non-standard ways. There won't be a next LinkedIn or a next Naukri, because there has never been a "next" of anything; it's always something new, right? As a company, and as, let's say, a founder, your access to candidates is going to get more democratized, and you will not be so dependent on the Naukris and LinkedIns of the world to find your own talent, because of social media and our ability to actually use social media at scale thanks to AI. If you have, let's say, a really powerful GenAI-powered career page, you have full control over the message you want to put in front of the candidate, so you don't need to pay money to LinkedIn; all you need is a good presence. So I think they are going to get disrupted in unexpected ways.

But the process side is going to get disrupted in the expected way, which is that a bunch of the work being done by human beings is now going to get done by AI agents. I don't know how many people truly believe it, but I 100% believe that is going to be the world. And I think it's going to be good. For the candidates, it's going to be better than what it is today.

Amit: Amazing. Contrarian views are what startups are built on. It has been wonderful talking to you. Thank you so much for your time. And for the audience: if you are building in the AI space and want to jam on real-world use cases, reach out to us. Thanks for listening, and thanks again, Roli.

Roli: Yeah, this was really, really fun, Amit. Great questions. Absolutely nice chatting.