AI in Journalism: How to Maintain Reader Trust

AI has revolutionized the media industry. By integrating rapidly into newsroom and editorial workflows, artificial intelligence has enabled editorial teams to achieve higher productivity.
However, mishandling AI tools can lead to serious issues for journalists: inaccuracies in reporting, fake images and videos, ethical dilemmas, and a potential loss of reader trust. 

How AI Can Streamline Editorial Processes

Journalists can use AI tools to streamline their workflows and enhance efficiency. 

Some of the key applications of AI in newsrooms include: 

1. Automation: AI is often used in the media to automate the creation of short news stories, particularly in areas like sports, finance, and weather. For instance, The Associated Press uses an AI model to automatically generate concise summaries of thousands of quarterly financial reports from U.S. companies. In sports journalism, AI can swiftly produce match recaps by analyzing game statistics. By handling these routine tasks, AI enables journalists to focus more on in-depth analysis and investigative reporting.  
Input data for AP (left) and an AI-generated article (right). Source: emerj.com
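
To make the idea concrete, here is a minimal sketch of template-based story generation from structured match data. The field names and wording rules are invented for illustration; production systems such as AP's are far more elaborate and rely on commercial natural-language-generation platforms.

```python
# Minimal sketch of template-based story generation from structured data.
# All field names and the template are hypothetical.

def generate_recap(game: dict) -> str:
    """Turn a dictionary of match statistics into a short news recap."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home_team"], game["away_team"])
        if game["home_score"] > game["away_score"]
        else (game["away_team"], game["home_team"])
    )
    # Pick a verb that matches the margin of victory.
    verb = "edged" if margin <= 3 else "beat" if margin <= 10 else "routed"
    return (
        f"{winner} {verb} {loser} "
        f"{max(game['home_score'], game['away_score'])}-"
        f"{min(game['home_score'], game['away_score'])} "
        f"on {game['date']}. {game['top_scorer']} led the scoring "
        f"with {game['top_points']} points."
    )

print(generate_recap({
    "home_team": "Hawks", "away_team": "Wolves",
    "home_score": 101, "away_score": 98,
    "date": "Saturday", "top_scorer": "J. Smith", "top_points": 27,
}))
```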

2. Data Analysis: Journalists can use AI to efficiently analyze massive volumes of government reports, financial records, social media content, and other data sources. AI excels at detecting trends, patterns, connections, and anomalies that might be difficult to identify manually. For instance, during the Panama Papers investigation, AI helped journalists sift through enormous amounts of financial data, revealing links between offshore accounts and well-known figures.  
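
As a simplified illustration of the kind of screening involved, the sketch below flags statistical outliers in a column of transactions using a robust median-based test. The data and column names are invented; real investigations combine many such techniques with graph analysis and, above all, manual verification.

```python
# A minimal sketch of flagging anomalous transactions in a data set.
# The figures and column names are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "entity": ["A Corp", "B Ltd", "C LLC", "D Inc", "E SA"],
    "transfer_usd": [12_000, 9_500, 14_800_000, 11_200, 13_400],
})

# Use the median absolute deviation (MAD), which is robust to the very
# outliers we are hunting, unlike a mean/standard-deviation test.
median = records["transfer_usd"].median()
mad = (records["transfer_usd"] - median).abs().median()
records["suspicious"] = (records["transfer_usd"] - median).abs() > 10 * mad

print(records[records["suspicious"]])  # surfaces the $14.8M transfer
```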

3. Data Visualization: AI-powered data visualization tools simplify the explanation of complex topics by enhancing the storytelling with clear visuals. For instance, AI-generated charts, graphs, and interactive maps turn dry, numerical data into content that’s easier to understand and more engaging for readers.  
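
For example, a data journalist can turn a small numeric table into a publishable chart with a few lines of matplotlib. The figures below are invented purely for illustration.

```python
# A minimal sketch: turning a numeric table into a reader-friendly chart.
# The budget figures are hypothetical.
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023, 2024]
budget_musd = [4.1, 4.4, 5.0, 5.8, 6.9]  # hypothetical city budget, $M

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.bar([str(y) for y in years], budget_musd, color="#3b6ea5")
ax.set_title("City budget growth, 2020-2024 (hypothetical data)")
ax.set_ylabel("Budget, $ million")
fig.tight_layout()
fig.savefig("budget_chart.png")  # image to embed in the article
```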

4. Personalized Content: Machine learning algorithms analyze readers' habits, preferences, and interaction patterns on the platform, and then suggest articles that are likely to capture their interest. This approach is employed by platforms like Google News, Flipboard, and others to deliver tailored content to users.
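
A minimal sketch of the underlying idea, content-based filtering, is shown below: articles are compared by their TF-IDF text vectors, and the one most similar to what the reader already opened is recommended. Production recommenders at platforms like these are far more sophisticated.

```python
# A minimal sketch of content-based article recommendation.
# The headlines are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates to curb inflation",
    "Local team wins championship after dramatic final",
    "New inflation figures put pressure on central bank",
    "Film festival opens with record attendance",
]
read_by_user = 0  # the reader opened the interest-rate story

# Represent each article as a TF-IDF vector and compare by cosine similarity.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles)
scores = cosine_similarity(tfidf[read_by_user], tfidf).ravel()
scores[read_by_user] = -1  # don't recommend what was already read

print("Recommended:", articles[scores.argmax()])  # the other inflation story
```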

5. AI-Powered Research: AI can assist journalists in research by quickly summarizing large volumes of text, such as scientific papers, legislative documents, or legal records. For example, reporters covering court cases can use AI to scan and categorize hundreds of legal documents, extracting the key points in just seconds. Additionally, AI can help detect inconsistencies in public statements or fact-check information related to individuals' backgrounds and actions. AI fact-checkers help reduce the spread of misinformation. However, it's important to remember that even the most advanced AI models can still produce inaccuracies and illogical conclusions, known as AI hallucinations. Therefore, it’s premature to fully rely on AI for summarizing extensive data sets.
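
As an illustration of AI-assisted summarization, the sketch below sends a document to a large language model via the OpenAI Python SDK; any comparable LLM API would work similarly. It assumes an OPENAI_API_KEY environment variable is set, the model name and file name are illustrative, and, as noted above, every AI-produced summary still needs human verification.

```python
# A minimal sketch of LLM-assisted document summarization, using the
# OpenAI Python SDK as one example. Assumes OPENAI_API_KEY is set;
# the model name and input file are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the document in 3 bullet points. "
                        "Quote exact figures; do not infer missing facts."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

with open("court_filing.txt", encoding="utf-8") as f:
    print(summarize(f.read()))  # a journalist must still verify this output
```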

6. Comment Moderation: Managing user-generated content, especially in the comment sections of news websites, is a time-consuming task. AI-based moderation tools can automatically flag inappropriate messages or spam, allowing human moderators to focus on more nuanced issues. For example, Reddit uses an AI-powered safety filter to screen out offensive comments and messages.
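
A minimal sketch of such triage is shown below, using OpenAI's moderation endpoint as one example (Reddit's in-house filter works differently and is not public). Flagged comments are routed to a human rather than removed automatically.

```python
# A minimal sketch of automated comment triage using OpenAI's moderation
# endpoint as one example. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def triage(comment: str) -> str:
    result = client.moderations.create(input=comment).results[0]
    if result.flagged:
        return "hold for human review"  # a person makes the final call
    return "publish"

print(triage("Great article, thanks for the deep dive!"))
```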

7. Multimedia Content Creation: By automating the production of multimedia content, editorial teams can broaden their audience engagement, appealing not only to readers but also to those who prefer video and audio formats. New AI tools enable the creation of video news segments with AI avatars, and AI-powered audio technology can convert text into podcasts or voiceovers, among other formats.
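
As a small illustration of the audio side, the sketch below converts article text to speech with the free gTTS library, used here purely as a stand-in; newsroom-grade AI voiceover tools produce far more natural-sounding audio.

```python
# A minimal sketch of converting an article into audio. gTTS is a simple
# free text-to-speech library used as a stand-in for commercial AI voices.
from gtts import gTTS

article_text = (
    "Good evening. Here are today's top stories. "
    "The city council approved the new transit budget..."
)
gTTS(article_text, lang="en").save("daily_briefing.mp3")
```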

How Readers Perceive AI in Journalism

According to the 2024 Digital News Report by the University of Oxford’s Reuters Institute, readers have widely differing opinions on the use of AI in media. 
“Our data shows audiences are still deeply ambivalent about the use of these technologies, which means publishers need to be extremely cautious about where and how they deploy them,” the researchers note.
The study surveyed people across 47 countries and conducted focus groups in the UK, the USA, and Mexico. It revealed that nearly half of the respondents were not aware of AI technologies. However, among those who are familiar with AI, there is considerable concern about the accuracy and reliability of AI-generated news.

Many respondents worry that the use of AI in creating articles, images, and videos could make it difficult for them to distinguish between fact and fiction. Additionally, readers expect complete transparency from news organizations when AI is involved in any aspect of news production.

Overall, audiences tend to be less accepting of AI-generated content, particularly when it comes to important public-interest reporting. Sensitive topics such as politics, elections, crime, and finance remain areas where readers are particularly wary. However, the public is generally more open to the use of AI in producing entertainment content or sports coverage. People are also comfortable with AI assisting newsrooms with technical tasks, such as transcribing or summarizing large documents. Still, there is strong opposition to the idea of AI fully replacing journalists in covering sensitive topics.
“If it was disclosed to me that this was produced by an AI, [I] will probably go, ‘Okay, well, then I’ll just not read that,’” said one focus group participant (male, 40, UK).
Moreover, trust in specific news outlets—and in the media as a whole—significantly influences how people perceive AI’s role in news production. Those with higher trust in the media tend to be more accepting of AI, particularly when its use is carefully monitored by journalists. On the other hand, for readers already skeptical of the media (six out of ten, according to the study), AI’s involvement could exacerbate existing distrust. In short, careless and irresponsible use of AI could widen the gap between news outlets and an already wary audience.

How to Avoid Google Penalties on AI-Generated Content

Google doesn’t explicitly ban AI-generated content. However, the company stresses that content must be valuable, helpful, and consistent with its E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) standards, the framework Google uses to evaluate content quality.

Google’s algorithms are designed to prioritize high-quality, original content in search results. If a news outlet relies too heavily on AI-generated content without proper fact-checking and expert oversight, the risk of producing low-quality content increases. Such content is unlikely to rank well on Google. This caution applies to all low-quality content, not just that produced by AI.

Important: Google considers automatically generated content aimed at manipulating search engines as spam. AI should not be used just to stuff a site with keywords or to churn out large quantities of "raw" articles. Google pays close attention to user experience metrics, such as engagement and the time users spend on a site, when determining rankings. If AI-generated content lacks the depth, context, or creativity that engages readers, it won’t perform well—and Google will take note. 

In short, to “satisfy” Google, AI-generated content must genuinely provide value to readers by offering useful information. If your site publishes AI-created content that meets these criteria, you are unlikely to face penalties.

That said, the integration of AI technologies into search engines is poised to change how people search for information. For example, in 2022, the startup Perplexity launched one of the first solutions that successfully combined a traditional search engine with AI tools. Within a year, Perplexity had processed around half a billion queries. Nvidia CEO Jensen Huang says he uses Perplexity AI at every opportunity.
“And so, notice how often we search these days, and notice how often we ask questions. Any random question—I’ll be asking Perplexity. I love using it. And even if I know the answer, I’ll just ask it anyway, just to see what it comes up with,” he says.
The familiar search results page that lists websites in response to a query is evolving. With the introduction of AI Overviews (currently available in select countries), Google is shifting towards a more assistant-like role, providing quick, direct answers to user queries. The feature uses generative AI to compose new answers rather than simply listing links. However, this rapidly developing technology comes with challenges: the system can occasionally produce inaccurate or even offensive answers. At this stage, AI Overviews are not entirely reliable, so caution is warranted.


Perplexity AI user interface (screenshot). Source: perplexity.ai

How to Maintain Reader Trust When Using AI

The increasing use of AI in editorial processes has sparked concerns about the potential erosion of media credibility and reader trust. To effectively incorporate AI tools while maintaining reader confidence, it’s crucial to follow these principles:

1. Transparency:
Readers need to know how AI contributes to the creation of content. It’s also helpful to explain the specific AI tools being used, offering readers a better understanding of how content is generated. This can be achieved through explanatory articles, videos, or dedicated sections on your website that detail AI's role in content production.
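
One practical way to implement this is to attach machine-readable disclosure metadata to every article, which the site can then render as a visible “How AI was used” note. The schema below is hypothetical, a sketch of what such a record might contain.

```python
# A minimal sketch of machine-readable AI disclosure attached to an
# article. The schema and field names are hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIDisclosure:
    ai_used: bool
    tasks: list[str] = field(default_factory=list)   # e.g. "first-draft summary"
    tools: list[str] = field(default_factory=list)   # e.g. tool names
    human_reviewed: bool = True

disclosure = AIDisclosure(
    ai_used=True,
    tasks=["first-draft summary", "headline suggestions"],
    tools=["in-house LLM assistant"],
    human_reviewed=True,
)
print(json.dumps(asdict(disclosure), indent=2))  # render on the article page
```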

2. Human Oversight:
Journalists should remain responsible for fact-checking, adding context, and ensuring the content aligns with editorial standards. AI-generated content should be viewed as a draft or a starting point for further development. For writers facing the blank page, AI can provide a quick starting boost. In journalism, when reviewing AI-generated news pieces, journalists should focus on verifying facts and refining the style. More complex content, such as editorials, reports, or interviews, will require significant editing and fine-tuning of AI-generated material.

3. Ethical Considerations: 
Media houses must ensure that AI-generated content meets the same ethical standards as traditional journalism. Ethical concerns arise when AI-created stories are published without adequate review, or when algorithms prioritize sensationalism over accuracy. It’s vital to avoid using AI to create misleading or clickbait headlines that sacrifice truthfulness. AI should not be used to manipulate or deceive readers. These guidelines should be clearly defined in the editorial policy. Moreover, there should be a clear understanding within the newsroom of who is accountable for AI-generated content. This approach ensures that any mistakes are promptly addressed.

Overall, AI can optimize content production, improve newsroom efficiency, and provide new opportunities for journalists. However, to maintain reader trust, media outlets must use AI responsibly. 

By focusing on transparency, human oversight, accuracy, ethics, and accountability, media outlets can harness the advantages of AI without undermining the fundamental values of journalism.