The Power of Conversational Voice AI in User Research
How LLM-powered voice AI removes social bias, captures in-the-moment feedback, and unlocks deeper user insights

Voice AI, technology that enables computers to understand and converse with humans, is evolving rapidly. While it’s often used in task-based applications like customer support or scheduling, we’ve been exploring a different idea at Gitwit:
What if Voice AI wasn’t just a tool for delivering information, but also for uncovering it?
With the rise of large language models (LLMs), voice agents are now capable of thoughtful, adaptive conversations. That opens up a compelling possibility: a voice AI researcher capable of capturing product feedback in the moment, when experiences are fresh, reactions are honest, and insights are most actionable.
We started down this path after facing a familiar challenge: getting timely, meaningful feedback from a wide range of users is hard.
- Interviews take time to schedule, conduct, analyze, and share
- Surveys miss nuance
- Analytics show what happened, but rarely explain why
And like many teams, we’ve sometimes defaulted to what’s most accessible—internal opinions or feedback from a few vocal users—instead of the broader user perspective we actually want to prioritize.
That friction led us to explore Voice AI as a new way to gather qualitative feedback at scale. From the start, we believed it could help close long-standing gaps in product research by:
- Letting users engage whenever it’s convenient—no scheduling required
- Capturing feedback close to the interaction, while the experience is still fresh
- Reducing social pressure by replacing the human interviewer with a neutral, nonjudgmental AI
- Dynamically adjusting follow-ups based on what users actually say
- Removing the friction of typing or form-filling, so users can share more openly and teams get richer, unfiltered input
So far, that belief has held up. In fact, Voice AI has become our primary mode of user research, complemented by occasional surveys or follow-up interviews. It’s helped us move from episodic research to a steady stream of context-rich insight, without burdening users or product teams.
We’ve now used Digital Intercepts, our Voice AI research tool, across both internal ventures and external teams. In our early pilots, over 50 users participated in voice-led interviews, with some lasting over 15 minutes. A few even initiated follow-up calls to share additional thoughts. The richness and immediacy of the feedback have proven invaluable in refining products and surfacing insights we would’ve otherwise missed.
Technically, these interviews are powered by speech-to-text, text-to-speech, and LLMs, all layered into a responsive, lightweight stack. While we define the research goals and structure, the AI leads the conversation—probing, adapting, and clarifying in real time.
For example, in one interview, a user casually mentioned the need to “re-engage” with a product. The AI followed up: “Can you tell me more about what you mean by ‘re-engage’?” That small nudge revealed a key friction point that had yet to surface through other research.
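The turn-by-turn loop described above can be sketched in a few lines. This is a minimal illustration, not Gitwit’s actual implementation: the `transcribe`, `complete`, and `synthesize` callables are hypothetical stand-ins for whatever speech-to-text, LLM, and text-to-speech providers a team wires in, and the system prompt is an invented example of how probing behavior (like the “re-engage” follow-up) might be encouraged.

```python
# Sketch of one turn of a voice-AI interview: speech-to-text -> LLM -> text-to-speech.
# transcribe/complete/synthesize are placeholders for real provider SDKs.

from dataclasses import dataclass, field

# Hypothetical research-focused system prompt; the real prompt would encode
# the team's specific research goals and interview structure.
SYSTEM_PROMPT = (
    "You are a user researcher. Ask open-ended questions about the product, "
    "probe any vague terms the user introduces, and never suggest answers."
)

@dataclass
class InterviewSession:
    # Running conversation history, seeded with the researcher instructions.
    history: list = field(
        default_factory=lambda: [{"role": "system", "content": SYSTEM_PROMPT}]
    )

    def turn(self, audio_in, transcribe, complete, synthesize):
        """One conversational turn: user audio in, agent audio out."""
        user_text = transcribe(audio_in)          # speech-to-text
        self.history.append({"role": "user", "content": user_text})
        reply = complete(self.history)            # LLM picks the next probe/follow-up
        self.history.append({"role": "assistant", "content": reply})
        return synthesize(reply)                  # text-to-speech
```

Because the full history is passed to the LLM on every turn, the agent can notice and probe a phrase like “re-engage” in context rather than following a fixed script.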
You can listen to a few real voice AI interview clips here:
What excites us isn’t just the automation; it’s the honesty and depth of the feedback. We’re hearing from more users, more often, with less effort. And the insights are more specific, contextual, and actionable.
We believe that voice AI can become a staple of the modern research toolkit, and we’re excited to be building toward that future.
If you’re exploring better ways to hear from your users, or thinking about how conversational AI might serve your team, we’d love to connect. We’re designing voice AI researchers around specific product goals and are currently looking for a few additional design partners as we continue to develop our Digital Intercepts platform.
Come help us reimagine what user research can be.