Synthetic Research: The Future of Predicting Human Behavior

One of the most fascinating applications of large language models (LLMs) is synthetic research. Traditionally, market research, election polling, and policy surveys rely on interviewing human participants, a process that takes months, costs hundreds of thousands of dollars, and often yields noisy results. But what if we could produce more accurate results 100x faster and cheaper using AI?
With synthetic research, you can survey a population of AI agents modeled to reflect human demographics. These agents are calibrated using data from sources like census records, credit card transactions, browser cookies, Reddit activity, and other biographical datasets. Essentially, we’re creating simulations of people—at scale.
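To make that concrete, here's a minimal sketch of what a synthetic survey loop can look like. It assumes the OpenAI Python SDK; the demographic categories, sampling scheme, model name, and survey question are all illustrative placeholders, not any particular vendor's method. Real systems also calibrate personas against joint distributions from sources like census data, rather than sampling each attribute independently as this toy does.

```python
import random
from collections import Counter

from openai import OpenAI  # assumed: the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy demographic marginals. Real calibration targets joint distributions
# from census-style data, not independent draws like these.
AGES = ["18-29", "30-44", "45-64", "65+"]
REGIONS = ["Northeast", "Midwest", "South", "West"]
INCOMES = ["<$40k", "$40k-$100k", ">$100k"]

def sample_persona(rng: random.Random) -> dict:
    """Draw one synthetic respondent from the toy marginals."""
    return {
        "age": rng.choice(AGES),
        "region": rng.choice(REGIONS),
        "income": rng.choice(INCOMES),
    }

def ask(persona: dict, question: str) -> str:
    """Pose the survey question to an LLM role-playing the persona."""
    system = (
        f"You are a survey respondent: age {persona['age']}, living in the "
        f"{persona['region']}, household income {persona['income']}. "
        "Answer with a single word: Yes or No."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

rng = random.Random(42)
question = "Would you pay $15/month for an ad-free version of your favorite app?"
tally = Counter(ask(sample_persona(rng), question) for _ in range(200))
print(tally)  # Yes/No counts across the synthetic panel
```

The appeal is that the loop above costs pennies and runs in minutes, so you can rerun it every time the product, the price, or the population changes.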
What's remarkable is that these agents are beginning to converge on human behavior, in some cases matching real-world outcomes more closely than traditional surveys do. Where corporations and governments wait months and spend heavily on human panels, synthetic research offers a faster, cheaper alternative, with the potential for a continuous stream of insight rather than discrete snapshots in time.
Imagine being able to assess how much people would pay for your product, anticipate the impact of a new tax, or even predict election outcomes—in real time. This could fundamentally reshape how decisions are made across industries.
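To illustrate the pricing case, here's a toy sketch of how synthetic-panel answers might be turned into a willingness-to-pay estimate. The acceptance rates are made-up placeholders standing in for tallies like the one above; this is a sketch of the aggregation step, not a production pricing model.

```python
# Toy willingness-to-pay analysis over synthetic-panel results.
# The acceptance rates below are made-up placeholders; in practice each
# respondent would be asked at a randomized price point, as in the loop above.
price_points = [5, 10, 15, 20, 25]            # monthly price, dollars
accept_rate = [0.80, 0.62, 0.41, 0.24, 0.11]  # share of panel answering "Yes"

# Expected revenue per respondent at each price.
revenue = [p * r for p, r in zip(price_points, accept_rate)]
best_price, best_rev = max(zip(price_points, revenue), key=lambda pr: pr[1])
print(f"Revenue-maximizing price: ${best_price} (~${best_rev:.2f}/respondent)")

# Arc elasticity between adjacent price points: % change in quantity
# divided by % change in price, using midpoint averages.
pairs = list(zip(price_points, accept_rate))
for (p0, q0), (p1, q1) in zip(pairs, pairs[1:]):
    elasticity = ((q1 - q0) / ((q0 + q1) / 2)) / ((p1 - p0) / ((p0 + p1) / 2))
    print(f"${p0} -> ${p1}: elasticity {elasticity:.2f}")
```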
Beyond the practical implications, this space raises deep philosophical questions around free will and determinism. We're building high-fidelity simulations of ourselves—and the results are surprisingly convincing. AI agents are now demonstrating human-like rationality, preferences, and economic elasticity. So what does it mean when AI knows us as well as, or even better than, we know ourselves?
Traditionally, the survey has been the gold standard of user and market research. But surveys aren’t ground truth—they’re self-reporting. Companies like World Labs are developing world models: dynamic, AI-powered simulations that allow us to observe behavior rather than just ask about it. If these models become accurate enough, we may no longer need to ask people what they would do—we’ll already know.
Are we inching closer to proving the simulation hypothesis? Maybe. But as the saying goes, if we’re in a simulation, it’s probably impossible to tell anyway.
Of course, this space is still nascent, and skepticism is understandable. Can AI really respond like humans? Is it responsible to make high-stakes decisions based on synthetic responses? While these concerns are valid, I believe skepticism will fade as we see more compelling results: results that are faster, cheaper, and often more accurate than traditional methods.
Or maybe Nate Silver is right, and this is one of the worst ideas ever. (In which case, someone should teach him how to short it.)

We've met quite a few companies in the space, including Aaru, Evidenza, Synthetic Users, Keplar, Brox.Ai, Quno AI, Semilattice, Artificial Societies, Viewpoints AI, and others currently in stealth.
If you want to learn more about some of the original research in this field, check out some seminal papers (here and here) from Joon Sung Park at Stanford.
It’s a space we’re very interested in—and we’d love to connect with others who are building or researching at the intersection of AI, simulation, and behavioral insight.