The honeymoon phase with AI is over. Americans want accountability, and they’re ready to take legal action. A new study by Pearl.com, an AI-powered search platform, reveals that 57% of U.S. adults believe AI platforms should be legally responsible for inaccuracies, while a striking 39% say they’d consider suing if an AI’s mistake caused harm.
With AI now answering people’s most sensitive questions—often in place of human experts—trust is on the line. Two-thirds (66%) of Americans see AI as generally trustworthy, and over half (53%) admit they’d rather ask AI embarrassing questions than another person. But that trust is fragile: 47% say they’d feel more confident if AI responses were verified by real humans.
There’s even money on the table. Forty-two percent of respondents say they’d pay for AI services that deliver better accuracy. But boosting reliability isn’t cheap: even a 10% improvement in accuracy could cost the industry a jaw-dropping $1 trillion.
As AI companies race to perfect their models, Americans are making one thing clear: if AI is going to play the expert, it had better be ready to defend itself in court.
Andy Kurtzig, CEO of Pearl.com, said: “AI companies face a pivotal moment. Consumers crave convenience, but they also demand accuracy—and they’re ready to take legal action to get it. Our data shows we’re 22% more helpful than other GPTs, especially when it comes to important questions like those you’d ask a doctor, a lawyer or another professional. Instead of pouring billions into incremental accuracy improvements, businesses can embrace human-validated AI right now to drive trust, reduce legal risk, and deliver real value.”