Slop: What It Is and How to Combat It

Online communities constantly coin new slang, and some of it goes viral and enters everyday use.
Take the word "spam": where did it come from, and what does it mean? It is now one of the most widely used and understood internet terms, yet few remember that SPAM is actually a brand of canned meat from the American corporation Hormel Foods. At one time the company advertised SPAM so relentlessly that it became a running joke among Americans. In 1986, a user began posting repetitive advertising messages on the Usenet computer network, and other users, drawing an analogy with the inescapable canned meat ads, started calling these messages spam.

Since then, giving new problems a clear, concise, and simple name has become common practice, says British software developer Simon Willison: "It gives people a concise way to talk about the problem."
Willison is credited with coining the term "slop," which refers to unwanted content created by artificial intelligence. Since the widespread adoption of AI, virtually every internet user has encountered it.

Posts with nearly identical, melodramatic stories that appear out of nowhere in your feed, or short Reels with the same plots and unnaturally moving, clone-like characters: these are examples of slop created by neural networks.

So are AI-generated songs and music videos that become the subject of lawsuits and plagiarism accusations, endless streams of photos with identical features, paintings never created by any human, and viral AI images like a Jesus with shrimp hands: all of this is also "slop."

"Almost scientific" articles or "almost real" e-books made by AI are both slops that complicate finding genuinely interesting content and "arrows" that redirect traffic to external servers and websites that are only tangentially related to the search query.  

Getting distorted information about a tourist route in an unfamiliar city means wasting part of your travel time. This could all be written off as a minor downside of technological progress if many AI creations didn't pose direct dangers.

For instance: AI-generated books for novice mushroom hunters that classify many poisonous mushrooms as edible; a recommendation from Google's AI to add glue to cheese so it sticks better to pizza; advice to consume small stones daily to avoid digestive problems; or the suggestion to jump off a high bridge into a river to cure the blues.

Such seemingly funny and absurd trash content can lead to tragic consequences, especially for teenagers who have not yet learned to think critically.   

The problem is that neural networks still struggle with nuances of meaning: they have difficulty "reading" sarcasm, figurative language, or the moral of a story if it is even slightly camouflaged. What guides them instead is the number of clicks, reposts, and reactions a publication or comment attracts.
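To see why engagement alone is a poor guide to truth, here is a minimal, illustrative Python sketch: a toy ranker, not any platform's actual algorithm, in which the posts, weights, and field names are all invented for the example. It scores posts purely on clicks, reposts, and reactions.

```python
# Toy engagement-based ranker: an illustration only, not any platform's
# real algorithm. It scores posts purely on engagement, so a viral
# sarcastic "joke" outranks accurate advice.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    reposts: int
    reactions: int
    accurate: bool  # known to a human reader, invisible to the ranker

def engagement_score(post: Post) -> float:
    # The ranker sees only engagement; these weights are made up.
    return post.clicks + 3 * post.reposts + 2 * post.reactions

feed = [
    Post("Fly agaric is totally safe to pick! (sarcasm)", 9000, 1200, 5000, False),
    Post("Field guide: identifying poisonous mushrooms", 400, 30, 120, True),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>7.0f}  accurate={post.accurate}  {post.text}")
```

The sarcastic post takes the top slot because nothing in the score reflects truthfulness; every like on the "joke" only reinforces its apparent relevance.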

Example of a slop image. Source: X

For example, the "Shrimp Jesus" image plainly offends many Christians, yet algorithms keep reproducing it. A sarcastic comment about how easy it is to pick fly agaric can garner thousands of likes, as can posts advising people "to jump off a bridge or hit a wall to relieve boredom." By liking these posts, users unintentionally help AI draw the wrong conclusions, teaching it to treat these foolish "jokes" as truthful and relevant information.

So, when you come across a "scientific" article with extensive evidence claiming the Earth is flat, don't be surprised. It's just another example of slop.
"Before the term ‘spam’ entered general use it wasn’t necessarily clear to everyone that unwanted marketing messages were a bad way to behave. I’m hoping ‘slop’ has the same impact – it can make it clear to people that generating and publishing unreviewed AI-generated content is bad behaviour," Simon Willison noted.
Whichever term ends up ingrained in the public consciousness, the important thing is to give the problem a name as soon as possible.

In May 2024, the search giant Google integrated its Gemini AI into search results. Instead of the traditional list of links matching the query, the AI responds with its own summary ("AI Overviews"): concise, ready-made text that Gemini considers comprehensive now appears at the top of the results page, followed by the usual links. For now, this experiment is running only in the U.S.

Shortly before that, Microsoft had brought AI into the results of its Bing search engine, though not without a number of errors. It is already clear that the leading search engines have made AI integration a development priority.

Searching and browsing the web are ways to research a question that interests you. The problem, according to Kristian Hammond, director of the Center for Advancing Safety of Machine Intelligence at Northwestern University in Chicago, is that AI instead hands users a pre-formed, final answer. "What it’s becoming, in this integration with language models, is something that does not encourage you to think. It encourages you to accept. And that, I think, is dangerous," Hammond says.
Not all hope is lost, though.

In early 2024, Nick Clegg, President of Global Affairs at Meta, said that the company's systems are already being trained to recognize AI-generated content. "As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," he said.
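Neither Clegg nor the article details how that recognition works, but one simplified ingredient is provenance metadata: some generators embed an identifying tag in the files they produce. The Python sketch below is only an assumption-laden illustration of that idea; the marker list and file name are hypothetical, it is not Meta's actual system (which also relies on invisible watermarks and classifiers), and metadata like this is trivially stripped.

```python
# Simplified provenance check: looks for generator markers in image
# metadata. Illustrative only; the markers and file name are
# hypothetical, and real labeling systems go far beyond metadata.

from PIL import Image

AI_MARKERS = ("dall-e", "midjourney", "stable diffusion", "firefly")

def looks_ai_generated(path: str) -> bool:
    img = Image.open(path)
    # PNG text chunks and similar key/value pairs land in img.info.
    candidates = [str(value) for value in img.info.values()]
    # EXIF tag 0x0131 is "Software", where some tools record themselves.
    software = img.getexif().get(0x0131)
    if software:
        candidates.append(str(software))
    blob = " ".join(candidates).lower()
    return any(marker in blob for marker in AI_MARKERS)

print(looks_ai_generated("example.png"))  # hypothetical file
```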
The motivation behind this is financial: slop has begun to worry advertising agencies, the primary revenue source for social networks, as users increasingly flag even genuine ads as AI-made "trash," hurting advertisers.

It’s possible that soon, alongside your email service’s "Marked as spam" warning, you’ll see notifications about slop in search results, publications, and advertisements.