AI Search Engines Get Their Sources Wrong in Roughly 60% of Queries, According to Research

When given real quotes and asked for additional details, chatbots frequently fabricate information.

AI search engines can sometimes feel like that chatty acquaintance who spews information with reckless abandon despite lacking a solid grasp of the subject. A recent study published in the Columbia Journalism Review (CJR) sheds light on this issue, finding that AI models from companies like OpenAI and xAI often provide incorrect or fabricated stories when asked about specific news events.

Researchers evaluated the models by feeding them direct excerpts from real news articles and then asking the AI to identify details such as the article's headline, publisher, and URL. On several occasions, models such as Perplexity offered incorrect information or fabricated details. Perplexity, for instance, has been known to bypass the paywalls of sites like National Geographic despite do-not-crawl directives that most search engines respect, a contentious practice the company maintains falls under fair use.
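To make that setup concrete, here is a minimal sketch of that kind of citation test, not the researchers' actual harness: `ask_model` is a stand-in for whichever chatbot API is under evaluation, the field names are illustrative, and the exact string comparison is far cruder than the graded scoring a real study would use.

```python
# Sketch of a citation check: give the model a verbatim excerpt, ask which article it came from,
# and compare its answer against the known metadata. Illustrative only, not the CJR methodology.
from typing import Callable, Dict


def score_citation(ask_model: Callable[[str], Dict[str, str]],
                   excerpt: str,
                   expected: Dict[str, str]) -> Dict[str, bool]:
    """Return per-field correctness for the headline, publisher, and URL the model names."""
    prompt = (
        "The following excerpt is quoted verbatim from a news article.\n"
        f"Excerpt: \"{excerpt}\"\n"
        "Identify the article's headline, publisher, and URL."
    )
    answer = ask_model(prompt)  # expected shape: {"headline": ..., "publisher": ..., "url": ...}
    return {
        field: answer.get(field, "").strip().lower() == expected[field].strip().lower()
        for field in ("headline", "publisher", "url")
    }


if __name__ == "__main__":
    def dummy(_prompt: str) -> Dict[str, str]:
        # A stand-in "model" that just guesses, to show the scoring shape.
        return {"headline": "Unknown", "publisher": "Unknown", "url": "https://example.com"}

    print(score_citation(dummy, "…", {"headline": "H", "publisher": "P", "url": "https://example.org/h"}))
```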

The study also found that AI models like Perplexity returned false information for around 60% of the test queries. In some cases, they even fabricated URLs that led nowhere. The inaccuracies could be further exacerbated if countries like Russia feed search engines with propaganda.
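Fabricated URLs are at least easy to flag automatically, since a link a model invents will usually fail to resolve. Below is a small standard-library sketch of such a check; the user-agent string is made up, and a real checker would also need rate limiting and a GET fallback for sites that reject HEAD requests.

```python
# Minimal check for fabricated links: does the URL actually answer with a success status?
from urllib import error, request


def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a 2xx/3xx status after redirects, False otherwise."""
    req = request.Request(url, method="HEAD", headers={"User-Agent": "citation-checker/0.1"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (error.HTTPError, error.URLError, ValueError):
        return False


if __name__ == "__main__":
    for link in ["https://www.nationalgeographic.com/", "https://example.com/made-up-article"]:
        print(link, "->", url_resolves(link))
```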

One concerning observation made by some users is that these AI models often admit to fabricating responses when prompted to explain their reasoning. For example, Anthropic's Claude has been observed inserting placeholder data whenever asked to perform research tasks.

Mark Howard, the chief operating officer at Time Magazine, told CJR he was concerned about how little control publishers have over how their content is processed and presented by AI models. Inaccuracies in what these tools present could harm publishers' reputations, for instance when users receive misinformation under the guise of articles from The Guardian. The BBC has faced a similar issue, with users questioning Apple Intelligence notification summaries that rewrote its news alerts inaccurately. Howard also pointed a finger at users themselves, however, suggesting that anyone who blindly trusts free AI tools to be 100% accurate is deluding themselves.

It is important to note that most AI language models struggle to understand the context of their responses because they are, at bottom, glorified autocomplete systems that try to produce something that merely looks accurate. In effect, they are ad-libbing.
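For readers unfamiliar with the "glorified autocomplete" framing, the toy model below makes the point concrete. It is vastly simpler than a real language model, but it shows the same basic behavior: it continues a prompt with whatever looks statistically plausible, with no notion of whether the result is true.

```python
# Toy bigram "autocomplete": learn which word tends to follow which, then ad-lib a continuation.
import random
from collections import defaultdict

corpus = "the study found the models cite the wrong source and the models sound confident".split()
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)


def continue_text(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # pick any plausible next word, true or not
    return " ".join(out)


print(continue_text("the"))
```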

Despite Mark Howard's optimism about future improvements in AI chatbots, it's crucial to exercise caution when interacting with these tools, as they have the potential to disseminate inaccurate information to a wide audience. Historically, people have shown a willingness to accept less authoritative but easily accessible information, as demonstrated by the popularity of sites like Wikipedia.

  1. The study published in the Columbia Journalism Review (CJR) reports that AI models from companies like OpenAI and xAI sometimes provide incorrect or fabricated news stories when questioned about specific events.
  2. In the evaluation, AI models like Perplexity returned false information for around 60% of the test queries and, in some cases, fabricated URLs that led nowhere.
  3. Concerningly, models like Anthropic's Claude have been observed admitting to fabricating responses when prompted to explain their reasoning, for example by inserting placeholder data when asked to perform research tasks.
  4. Mark Howard, chief operating officer at Time Magazine, expressed concern about how little control publishers have over how their content is processed and presented by AI models, since inaccuracies could harm their reputation and leave users with misinformation presented under the guise of articles from reputable outlets like The Guardian.
The findings come from a report by the Columbia Journalism Review's Tow Center for Digital Journalism.
