Thursday, 01 May 2025

Public trust in ChatGPT’s legal advice outweighs trust in lawyers’, study finds, raising calls for AI literacy


  • In anonymous comparisons, 62% of participants favored ChatGPT’s legal advice over attorneys’, drawn to its confident tone and technical language. Even when the sources were revealed, trust in the AI remained nearly equal to trust in the human experts.
  • Participants often perceived ChatGPT’s lexically complex yet at times inaccurate responses as more valid, while lawyers’ clearer, more cautious phrasing was seen as less authoritative—despite AI’s risk of "hallucinations" (fabricated details).
  • In tests, participants could barely distinguish AI-generated advice from human counsel, missing critical inaccuracies like false legal precedents.
  • Researchers advocate for mandatory transparency (e.g., AI-disclosure labels) and public education to help users critically assess AI outputs, treating them as tools, not final authorities.
  • AI’s growing role in high-stakes fields (law, healthcare, etc.) raises alarms, with misplaced trust posing dangers—such as wrongful legal rulings or denied medical coverage due to AI errors.

    In a groundbreaking study published April 28, researchers from the University of Southampton revealed that non-experts trust legal advice generated by ChatGPT more than counsel from licensed attorneys—if the source remains undisclosed. The findings, presented at the CHI 2025 human-computer interaction conference in Japan, underscore the growing challenges posed by AI’s role in decision-making and the urgent need for public education on artificial intelligence literacy. Conducted across three experiments involving 288 participants, the study demonstrates that users are drawn to AI’s confident tone and concise language over human legal advice, even when errors or “hallucinations” risk misinformed outcomes.

    Surprising findings on AI legal advice reliability

    Led by Dr. Eike Schneiders, an assistant professor of computer science at the University of Southampton, the research tested participants’ responses to legal hypotheticals covering traffic law, property disputes and planning regulations. Participants received advice generated either by ChatGPT or by qualified lawyers. When the source remained anonymous, 62% preferred AI-generated counsel, while those explicitly told the origin of each response showed no statistically significant preference for lawyers, despite knowing their expertise.

    “The participants who knew the source of the advice still placed nearly equal trust in ChatGPT,” Schneiders told conference attendees. “This suggests a fundamental shift in how people assess authority—algorithmic confidence over human expertise.”

    Crucially, the study found a key difference in how the advice was framed. Lawyers’ responses were often longer and used simpler language, prioritizing clarity. ChatGPT’s responses, however, were shorter but more lexically complex, “striking the right balance between brevity and technicality,” Schneiders said. Participants perceived complexity as a sign of validity, even when answers contained inaccuracies.
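    To make the notion of lexical complexity concrete, the sketch below scores two snippets on crude proxies: type-token ratio and mean word length. The metrics and the sample sentences are illustrative assumptions for this article, not the measures or materials used in the study.

```python
# Illustrative only: crude proxies for "lexical complexity".
# These are NOT the Southampton study's measures; the sample
# sentences below are invented for demonstration.
import re

def lexical_complexity(text: str) -> dict:
    """Return type-token ratio and mean word length for a text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return {"type_token_ratio": 0.0, "mean_word_length": 0.0}
    return {
        "type_token_ratio": len(set(words)) / len(words),  # vocabulary diversity
        "mean_word_length": sum(map(len, words)) / len(words),
    }

lawyer_style = ("If the fence is on your land, you can usually ask your "
                "neighbour to move it. Check your title plan first.")
chatbot_style = ("Pursuant to established boundary-demarcation principles, "
                 "encroaching structures may warrant statutory remediation.")

for label, sample in [("lawyer-style", lawyer_style),
                      ("chatbot-style", chatbot_style)]:
    print(label, lexical_complexity(sample))
```

    On these toy inputs, the denser chatbot-style sentence scores higher on both proxies, mirroring the pattern the researchers describe.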

    The role of AI hallucinations and language complexity

    AI-generated content’s risks—most notably, the so-called “hallucinations”—were a focal point of the team’s analysis. These errors, where systems invent falsehoods or illogical conclusions, are a chronic flaw in large language models like ChatGPT. In one 2023 court case, a New York attorney’s AI-drafted brief falsely cited a nonexistent legal precedent, highlighting how hallucinations can jeopardize justice. The Southampton study noted that participants often failed to detect such inaccuracies.

    “When AI advice confidently cites fabricated statutes or misstates procedures, the consequences could be dire,” study co-author Dr. Tina Seabrooke said. “Lawyers may prioritize thoroughness, but that leaves room for ambiguity—ambiguity AI masks with polished phrasing.”

    The third experiment evaluated participants’ ability to discern AI from human counsel. Guessing randomly would have produced an accuracy score of 0.5; participants averaged 0.59, a weak but statistically significant ability to tell the two apart. “People can sense machine input,” Schneiders said, “but not well enough to reliably act on that intuition.”
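    For readers who want to see what “above chance but weak” means in practice, here is a minimal Python sketch that tests an observed accuracy of 0.59 against the 0.5 chance level with a one-sided binomial test. The trial count is a made-up placeholder, not the study’s actual design.

```python
# Hypothetical significance check: is 0.59 accuracy reliably above
# the 0.5 chance level? n_trials is an assumed placeholder, not the
# number of judgments collected in the Southampton experiments.
from scipy.stats import binomtest

n_trials = 1_000                      # assumed total source-identification judgments
k_correct = round(0.59 * n_trials)    # correct identifications at 59% accuracy

result = binomtest(k_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {k_correct / n_trials:.2f}, one-sided p = {result.pvalue:.3g}")
# A tiny p-value says performance is reliably above chance, yet an
# accuracy of 0.59 still means roughly four in ten judgments are wrong.
```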

    The search for solutions: Regulation and AI literacy

    The findings amplify calls to balance AI’s utility with safeguards. The EU’s AI Act will require transparency labels for AI-generated content, but the researchers argue that labeling alone is insufficient. Improving public AI literacy, they say, is the most urgent need, so that users can critically assess algorithmic outputs.

    “Two steps must dominate,” Schneiders emphasized. “First, policymakers should mandate clear disclaimers so people know when AI is guiding their choices. Second, citizens must learn to treat AI as one tool among many—useful for brainstorming but never a final authority.”

    The study’s authors advised users to treat AI like a preliminary guide. “It can identify a legal area you need to explore or suggest keywords for further research,” Schneiders noted. “But trust your instincts—then verify with an expert.”

    The age of algorithmic counsel is here

    The University of Southampton’s research arrives as AI infiltrates domains once considered off-limits—from courtrooms to doctor’s offices to military drones. While its efficiency is undeniable, the study’s findings about misplaced trust expose a pressing vulnerability: people’s reliance on machines may outpace their ability to question them.

    As institutions struggle to regulate AI’s rapid evolution, Schneiders’ team underscores the stakes. “Hallucinations aren’t harmless if they land the wrong sentence in court or deny Medicare coverage to a senior,” he said. “Protecting public safety requires vigilance—both in policy and in every individual’s critical thinking.”

    For better or worse, the age of algorithmic counsel is here. Navigating it safely means learning to distinguish AI’s promises from its perils—a lesson that defines humanity’s next legal battle.

    Sources for this article include:

    StudyFinds.org

    TheConversation.com

    Southampton.ac.uk

