Get Your News From AI? Watch Out - It's Wrong Almost Half The Time

Image: Iana Kunitsa/Moment via Getty



ZDNET's key takeaways

  • New research shows that AI chatbots often distort news stories.
  • 45% of the AI responses analyzed were found to be problematic.
  • The authors warn of serious political and societal consequences.

A new study conducted by the European Broadcasting Union (EBU) and the BBC has found that leading AI chatbots routinely distort and misrepresent news stories. The result could be a large-scale erosion of public trust in news organizations and in the stability of democracy itself, the organizations warn.

Spanning 18 countries and 14 languages, the study involved professional journalists evaluating thousands of responses from ChatGPT, Copilot, Gemini, and Perplexity about recent news stories, based on criteria such as accuracy, sourcing, and the differentiation of fact from opinion.


The researchers found that close to half (45%) of all the responses generated by the four AI systems "had at least one significant issue," according to the BBC, while many (20%) "contained major accuracy issues," such as hallucination -- i.e., fabricating information and presenting it as fact -- or providing outdated information. Google's Gemini had the worst performance of all, with 76% of its responses containing significant issues, particularly regarding sourcing.

Implications

The study arrives at a time when generative AI tools are encroaching upon traditional search engines as many people's primary gateway to the internet -- including, in some cases, the way they search for and engage with the news.

According to the Reuters Institute's Digital News Report 2025, 7% of people surveyed globally said they now use AI tools to stay updated on the news; that number rose to 15% for respondents under the age of 25. A Pew Research poll of US adults conducted in August, however, found that three-quarters of respondents never get their news from an AI chatbot.

Other recent data has shown that even though few people have full trust in the information they receive from Google's AI Overviews feature (which uses Gemini), many of them rarely or never try to verify the accuracy of a response by clicking on its accompanying source links.

The use of AI tools to engage with the news, coupled with the unreliability of the tools themselves, could have serious social and political consequences, the EBU and BBC warn.

The new study "conclusively shows that these failings are not isolated incidents," said EBU Media Director and Deputy Director General Jean Philip De Tender in a statement. "They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation."

The video factor

That erosion of public trust -- of the ability of the average person to conclusively distinguish fact from fabrication -- is compounded further by the rise of video-generating AI tools like OpenAI's Sora, which was released as a free app in September and was downloaded one million times in just five days.

Though OpenAI's terms of use prohibit the depiction of any living person without their consent, users were quick to show that Sora can be prompted to depict deceased persons and to produce other problematic AI-generated clips, such as scenes of war that never happened. (Videos generated by Sora come with a watermark that moves across the frame, but some clever users have discovered ways to edit it out.)


Video has long been regarded in both social and legal circles as the ultimate form of irrefutable proof that an event really occurred, but tools like Sora are quickly making that old model obsolete.

Even before the advent of AI-generated video or chatbots like ChatGPT and Gemini, the information ecosystem was already being balkanized and echo-chambered by social media algorithms designed to maximize user engagement, not to ensure users receive an accurate picture of reality. Generative AI is thus adding fuel to a fire that has been burning for decades.

Then and now

Historically, staying up to date with current events required a commitment of both time and money. People subscribed to newspapers or magazines and sat with them for minutes or hours at a time to get news from human journalists they trusted.


The burgeoning news-via-AI model has bypassed both of those traditional hurdles. Anyone with an internet connection can now receive free, quickly digestible summaries of news stories -- even if, as the new EBU-BBC research shows, those summaries are riddled with inaccuracies and other serious problems.
