AI in Journalism: Content Farms Eclipsing Traditional Media
Artificial intelligence can scan reputable sources and produce reworded versions of their content in near real time. According to researchers at NewsGuard, these rewritten articles are being disseminated across hundreds, if not thousands, of online platforms.
Founded in 2018 by media entrepreneur and journalist Steven Brill, in collaboration with former Wall Street Journal publisher Gordon Crovitz, NewsGuard is a research firm that develops tools to combat misinformation.
The recent report from NewsGuard suggests that the unchecked adoption of AI in publishing could undermine trust in media companies as a whole, posing a significant risk to the online news industry. The study by Brill and Crovitz pinpointed 37 websites hosting articles whose content, images, and quotes mirrored those of leading news outlets such as the New York Times, Reuters, and CNN.
These sites, largely content mills and news aggregators, often omitted references to the original sources or authors. Such unethical practices by certain publishers and journalists are nothing new, but artificial intelligence has amplified their scale by streamlining the content-rehashing process for these platforms.
Identifying these 37 sites was relatively simple thanks to several telling indicators. The texts often contain telltale error messages specific to AI systems: some pieces ended with acknowledgments of AI's role in repurposing the content, and others included recurring statements such as “As an AI language model, I cannot…” NewsGuard notes that these sites were uncovered primarily because of their sloppy work and mishandling of AI settings; how many better-disguised platforms remain undetected is a matter of conjecture.
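The detection approach described above can be illustrated with a minimal sketch. The only phrase below taken from the report is “As an AI language model, I cannot…”; the other patterns, and the helper function itself, are hypothetical examples of how a scanner might flag chatbot artifacts left in scraped text. This is not NewsGuard's actual methodology.

```python
import re

# Telltale chatbot artifacts. Only the first pattern is quoted in the
# NewsGuard report; the rest are hypothetical additions for illustration.
TELLTALE_PATTERNS = [
    r"as an ai language model,? i cannot",
    r"i cannot fulfill (this|that) request",
    r"my knowledge cutoff",
]

def flag_ai_artifacts(text: str) -> list[str]:
    """Return the telltale patterns found in an article's text."""
    lowered = text.lower()
    return [p for p in TELLTALE_PATTERNS if re.search(p, lowered)]
```

A page ending in, say, “As an AI language model, I cannot rewrite this article” would be flagged, while ordinary copy passes clean. Real detection would of course need far more robust signals than string matching.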
Experts anticipate a rise in the number of plagiarists unless tools for detecting AI-generated plagiarism improve markedly and relevant regulatory actions are implemented. This growth is concerning, especially given that OpenAI has explicitly banned the use of its large language models for plagiarism. Google has issued a similar prohibition for its generative AI systems, warning against using AI to produce content that might deceive or mislead readers, especially regarding the origin of the information, and specifically prohibiting the distribution of AI-generated content that pretends to be the result of human intellectual effort. But the NewsGuard research suggests these bans exist on paper more than in practice, failing to deter the rise of content farms.
“We’re now in a world where it is increasingly difficult to distinguish between human content and AI-generated content, and increasingly difficult to identify these types of potential examples of plagiarism,” remarks Amir Tayrani, a partner at the law firm Gibson Dunn.
However, the challenges aren't merely ethical. For content farms, using AI to rehash articles from diverse sources is primarily a way to amplify advertising revenue. Authentic content from established outlets requires significant time and effort, often involving collaboration among several people. Manual plagiarism of such material, prevalent before AI's ascent, was also time-consuming, so creators of original content previously had a buffer period in which to draw advertising funds for their exclusive material. As the NewsGuard report highlights, well-known companies are now unwittingly placing programmatic advertisements on content farms teeming with appropriated articles. In effect, leading brands are inadvertently underwriting the fraudulent, AI-driven replication of copyrighted content.
There's growing tension between the media sector and tech companies, as AI tools frequently infringe upon copyrighted material. This affects both authors and publishers, particularly by reducing potential advertising earnings. As we navigate the emerging landscape of AI-generated content, striking the right balance between innovation, precision, and authenticity becomes paramount. Whether we embrace it or not, the evolution of AI in media persists. Hence, stakeholders and regulatory bodies need to establish definitive ethical standards for AI usage.
Previously, GN delved into the potential impact of artificial intelligence on Hollywood's future.