At nearly every product conference I’ve spoken at recently, someone has suggested that AI-generated synthetic users can replace customer interviews. The idea is appealing. Instead of scheduling interviews, navigating awkward silences, and asking for a customer’s time, you just query an AI persona to get clean, articulate answers. Here’s why that’s a mistake.
TL;DR
- Synthetic users can only tell you what’s already known. But product opportunities live in what isn’t.
- If you don’t have time for real user research, acknowledge that openly rather than mistake synthetic research for the real thing.
- Use AI to make user research more productive, not to avoid discomfort.
The core problem with synthetic users
The biggest breakthroughs from user research come from figuring out the “unknown unknowns” — insights you get when users surprise you, and they almost always do. When you ask questions out of genuine curiosity, you discover something about how they think that you didn’t even know to ask about.
Synthetic users, by definition, can only reflect what’s already in their training data. They can tell you the known knowns. You’ll get confident, articulate, well-structured answers that reflect the world as it was already documented. But they cannot surprise you with their idiosyncratic behaviors.
What research shows
Research by the Nielsen Norman Group testing the efficacy of synthetic users illustrates this more concretely:
- Synthetic users were somewhat useful for broad attitudinal questions to understand general feelings about topics. But even here, responses felt “one-dimensional” because they were a flat approximation of thousands of people.
- Critical nuances vanished, and responses felt too shallow to be useful. For example, when describing interactivity as one of seven factors that made an online course interesting, the synthetic user suggested that adding quizzes, exercises, and practical projects would be helpful. But these are very different forms of interactivity — someone who likes projects might not like quizzes. The lack of nuance made the responses less actionable.
- Synthetic users were terrible predictors of human behavior. They were often sycophantic (wanting to please) and praised concepts that real users questioned. They also predicted idealized behavior from academic literature rather than reflecting actual usage patterns. For example, the synthetic user said it completed all its online courses and participated in forums “to deepen understanding and as an opportunity to connect with fellow learners from diverse backgrounds”. In contrast, real users said they often didn’t complete their online courses and avoided forums, calling the interactions “contrived”.
At this point, the skeptics among us might reasonably point out that Nielsen Norman Group’s bread and butter is user research — of course they’d find that synthetic users don’t work. Fair enough. So even though their findings mirror my own experience, I went looking for a different perspective. I reached out to an AI researcher who was among the earliest employees at one of the companies that shaped the field. Given the sensitivity of publicly acknowledging AI’s limitations while working at a company that’s often mentioned in the media, they agreed to speak candidly but requested anonymity.
What an AI expert thinks
“Interviewing synthetic users is like asking an LLM about the contents of a long document you don’t have time to read. It can usefully answer questions about the document, citing page and line number. It could also answer some limited hypotheticals about the author’s intent given the document’s context, again citing the original document for why it believes what it believes. So one use case for interviewing synthetic users fine-tuned on user logs is to save the interviewer the labor of poring through the training data to distill a mental model of the user themselves.”
But could you go beyond this use case and ask synthetic users about their workday and what they’re trying to accomplish to uncover unknown unknowns? Could you test a solution with synthetic users to see if it actually works? Here’s what our AI expert had to say:
“If you’re asking questions that cover nuances of the problem, and later if you’re engaging in interactive discussions to figure out whether the solution you are testing is helping solve their problem, I can easily see such conversations with an LLM go off the rails via sycophancy.
If you treat the training data as a massive textbook, and your questions are mostly factual recall-type questions in the context of that book, you’re in safer territory. If you’re probing users’ mental models, I’d be more skeptical about the LLM’s ability to understand something that users themselves mostly do not: what they need from a novel product.”
What you miss without a human in the room
But even when you stick to factual, recall-type questions, you miss the critical insights that come from silence and body language.
During a user interview, I asked a senior leader how he evaluated the brokers in his firm — what made someone good at their job or not. He was silent for a while, visibly struggling to answer. Eventually someone prompted him: “Is accuracy important for the broker?” and he latched onto that, agreeing it was probably the most important factor.
An AI reading this transcript could easily have concluded that accuracy was key, and recommended building accuracy features. But his silence and discomfort told me that he himself didn’t have a clear sense of what made a broker good or mediocre, beyond bringing in revenues. If the CEO couldn’t articulate what good looked like, the brokers had little incentive to go out of their way to adopt a new tool. Top-down adoption was never going to work — we needed to win over the brokers themselves.
How I use AI in user research
I seem to be painting a pretty damning picture. But does that mean AI has no place in user research at all? On the contrary, here are ways in which I’ve found AI to be really useful:
- Use AI for user interview prep: Use synthetic users to get familiar with the market and the vocabulary your customers use. Every user interview is precious. This is especially true at B2B companies, where the sales team may be hesitant to organize user interviews for fear that you’ll ask “basic questions that you should already know the answers to”. You want to ask such basic questions about workflow anyway, but you don’t want to sound like an amateur who knows little about the business. Use synthetic users to absorb as much public data as possible about your market, build familiarity with industry-specific vocabulary, and walk into every interview sounding like you belong there.
- Don’t use AI to generate hypotheses too early: Don’t use synthetic users to generate hypotheses before your human interviews. This goes against advice you might read elsewhere, but it is crucial. When you run user interviews, your mission is to be genuinely curious and listen with openness. If you’ve already formed a hypothesis, you’ll unconsciously listen for cues that confirm it, and you’re likely to miss something critical. Generate hypotheses after you’ve understood the problem space with openness, when you’re ready to start exploring solutions. Your hypotheses should be possible solutions to a problem you’ve uncovered, and you’ll then test those hypotheses through prototypes.
- Use AI for rapid prototyping: Technically, user research is supposed to focus on exploratory questions rather than testing solutions. But because every interview is precious, once you’ve developed an understanding of the problem space and a hypothesis starts to take shape, I’ve found it’s worth using AI to quickly generate prototypes you can test with users in the same session. Beyond the speed benefit, showing customers a sneak preview of how you’re thinking about their problem gets them genuinely excited about participating — they’ll be far more likely to say yes the next time you ask for an hour of their time. Using AI for rapid prototyping is a way of squeezing tangible results out of a process that is traditionally only exploratory.
Conclusion
Synthetic users are alluring because they let you avoid the messiness of human interaction — the scheduling, the awkward silences, the worry that you’re asking too much of someone’s time. But they’re a flattened average, and the insights that will help you build products that truly make a difference can only come from the messy, nuanced details of how people actually think and work. No training data captures those moments, because they haven’t been written down yet.
Use AI to prepare better and prototype faster. But when it comes to discovering what you don’t know yet, there’s no shortcut — you have to be curious, ask questions, and sit with the discomfort of human interaction.
FAQs
What are AI-generated synthetic users?
AI-generated synthetic users are AI personas trained on existing data that you can “interview” instead of real customers. They generate responses to your questions without requiring you to schedule or conduct real customer interviews.
Can synthetic users replace customer interviews?
No. Synthetic users can only reflect what’s already in their training data. They cannot surface the unknown unknowns — the unexpected insights you get when real users surprise you — which is where the biggest product opportunities typically lie.
When are synthetic users useful?
Synthetic users are useful for interview preparation — familiarizing yourself with industry vocabulary and absorbing public data about your market before speaking to real customers. They are not reliable for uncovering user needs, testing solutions, or predicting real user behavior.
Why are synthetic users unreliable for product decisions?
Research by the Nielsen Norman Group found that synthetic users give one-dimensional responses, miss critical nuances, and are often sycophantic — telling you what you want to hear rather than reflecting how people actually behave.
What should I use instead of synthetic users?
Real customer interviews, combined with AI for preparation and rapid prototyping. Use AI to walk into interviews better prepared, and to quickly generate prototypes you can test with real users once you’ve developed a hypothesis.