AI Terms Glossary: Your Cheatsheet to Understand AI Jargon

Many AI terms have become somewhat casual. As technology has entered our daily lives, you don't need to be a researcher or a tech expert to come across AI jargon here and there, for example, in the news or on social media. This situation can be confusing, so we've pulled together an easy-to-understand glossary to help you cut through the AI jargon.

15 AI Terms You Need to Know 

In this cheat sheet, we've included popular AI words and explained their definitions and importance. Here you'll find ordinary words and phrases that have gained new meanings in AI, alongside more technical terms regularly used in the field.

And weā€™ll begin with the most popular term:

1. Artificial Intelligence or AI 

No type of intelligence is fully understood, and artificial intelligence is no exception. Historically, AI was used to describe human-level intelligence achieved by machines. Computer scientist John McCarthy coined the term in 1955.

Now, however, AI is a broader concept that refers to computer systems exhibiting some characteristics of human intelligence. For example, systems that recognize speech and images, perform translations, and offer navigation are all AI systems, but they don't yet possess common sense knowledge and reasoning.

The limits and full reasoning capabilities of these systems are still unknown.

2. Artificial General Intelligence or AGI 

With the growth of Artificial Intelligence (AI), the term AGI has taken over news headlines and has become a central part of AI discussions. Unlike narrow AI, which has limited capabilities, AGI is supposed to have self-awareness, emotional intelligence, and other attributes of human-level intelligence.

Imagine an AI robot from sci-fi movies that is able to make its own decisions and act without needing human instructions. This scenario remains far from reality, since current AI can't understand context or learn on its own the way people do.

But things can change.

Tech companies are actively engaged in AGI research. OpenAI, the startup behind ChatGPT, announced plans to build AGI in 2023, with the company's CEO Sam Altman stating the technology could be developed in the close-ish future.

That being said, not all AGIs will have equal capabilities for understanding and performing tasks. 
Sonny, a central character in the movie I, Robot, can feel emotions, learn, and make decisions. Source: imdb.com

3. Machine Learning or ML 

Machine learning is a subcategory of artificial intelligence focused on developing algorithms and models that enable machines to learn from data and make predictions. Without ML, the modern advanced AI systems we have today would not exist.

Here's how machine learning works: engineers feed large amounts of data into the system, which it analyzes repeatedly to recognize patterns in the inputs. For example, scientists might provide the system with thousands of labeled images of animals, which it examines, gradually improving its ability to identify them accurately.

Another example is when a system is trained on language structures and vocabularies, enabling it to translate between languages. The more data a system learns from, the more capable it becomes of answering questions and performing tasks based on user requests.
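
To make this concrete, here's a minimal sketch using the scikit-learn library (the fruit measurements and labels below are invented purely for illustration): the model is fed labeled examples, finds patterns in them, and then labels an example it has never seen.

```python
# A toy "learn from labeled data" example using scikit-learn.
# The features and labels are made up for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each example: [weight in grams, smoothness score from 0 to 10]
features = [[150, 9], [170, 8], [130, 3], [120, 2]]
labels = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(features, labels)        # training: find patterns in labeled data

print(model.predict([[160, 7]]))   # label a new, unseen fruit
```

The same recipe scales up: real systems simply use far more data and far more capable models.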

4. Neural Networks 

Artificial intelligence takes inspiration from the structure and functionality of the human brain, which contains billions of neurons. Neural networks in AI models simulate how human neurons interact.

These networks form various connections, building a structure capable of learning from data, adjusting, and making predictions. In a neural network, data moves through several layers, with each layer further refining the data. It first enters an input layer, where raw data such as text or images is fed in. Then, it moves through hidden layers, where more complex processing takes place.

For example, in an image-recognition model, early layers might detect simple edges, while deeper layers identify more complex shapes or entire objects. Finally, the data reaches the output layer, which makes a prediction or classification, like identifying an object in a photo.

During training, the network adjusts its connections based on feedback from the data. This adjustment helps it improve at making accurate predictions over time.
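
Here's a bare-bones NumPy sketch of one forward pass through those layers. The weights are random placeholders; in a real network, training would gradually adjust them based on feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)            # input layer: 4 raw features (e.g., pixel values)

W1 = rng.random((4, 5))      # connections from input to hidden layer
hidden = np.tanh(x @ W1)     # hidden layer: 5 neurons with a nonlinear activation

W2 = rng.random((5, 3))      # connections from hidden to output layer
output = hidden @ W2         # output layer: a score for each of 3 classes

print(output.argmax())       # the network's prediction: the highest-scoring class
```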
Neural networks are interconnected and pass information through different layers. The image features Saul Goodman from Breaking Bad. Source: Instagram.

5. Deep Learning 

Deep learning is a branch of machine learning that uses neural networks with multiple layers to process and learn from large sets of data. It's a more advanced way for computers to recognize patterns, especially in complex tasks like identifying images or understanding speech.

What sets deep learning apart from machine learning is its need for a lot of data. While traditional machine learning can work with smaller datasets, deep learning thrives on vast amounts of information. This allows it to automatically detect patterns without relying on humans to select features in the data. Plus, deep learning models are usually more complex, making them capable of tackling intricate problems.

Another aspect to consider is the computational power required for deep learning. It often needs powerful hardware, like GPUs, to handle the heavy lifting involved in training these models.
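
As a rough sketch of what "deep" means in practice, here is a stack of layers written with PyTorch (assuming it is installed; the layer sizes are arbitrary):

```python
import torch.nn as nn

# "Deep" simply means many layers stacked together. Each layer refines
# the previous one's output into more abstract features.
deep_model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # e.g., raw pixels -> low-level features
    nn.Linear(256, 128), nn.ReLU(),   # low-level -> mid-level features
    nn.Linear(128, 64),  nn.ReLU(),   # mid-level -> high-level features
    nn.Linear(64, 10),                # output: scores for 10 classes
)
print(deep_model)
# deep_model.to("cuda") would move it onto a GPU, the kind of hardware
# that makes training models like this practical.
```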

6. Transformer Network 

Under the hood of modern AI systems like ChatGPT, there are layers of computational blocks called transformers. The "GPT" in ChatGPT stands for Generative Pre-trained Transformer.

These transformer networks are a special type of neural network designed to handle sequential data, like words in a sentence, by analyzing all parts of the input simultaneously.

This capability allows them to understand complex relationships within the data more effectively than traditional models.

A key feature of transformers is the attention mechanism, which helps the model focus on important words in a sentence relative to one another. The system breaks the input into smaller pieces, words and word fragments, to decide what it's going to say next. These individual pieces of text are called tokens. For example, in the sentence "The cat sat on the mat," the model can understand that "cat" and "sat" are closely related, allowing for a more nuanced understanding of context.

Transformers were first described in a 2017 paper by Google researchers titled "Attention Is All You Need."
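
As a rough illustration (this is not how ChatGPT itself is implemented), the NumPy sketch below computes scaled dot-product attention, the core operation from that paper, over random vectors standing in for the tokens of our example sentence:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
tokens = ["The", "cat", "sat", "on", "the", "mat"]
d = 8                                # size of each token vector (arbitrary here)
Q = rng.random((len(tokens), d))     # queries: what each token is looking for
K = rng.random((len(tokens), d))     # keys: what each token offers
V = rng.random((len(tokens), d))     # values: the information to pass along

# Every token scores every other token; softmax turns scores into weights.
weights = softmax(Q @ K.T / np.sqrt(d))
attended = weights @ V               # each token's new, context-aware vector

print(weights[1].round(2))           # how much "cat" attends to each word
```

In a trained model, Q, K, and V come from learned projections rather than random numbers, which is what lets "cat" genuinely attend to "sat".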
ChatGPT transforms words into token IDs depending on how they are structured within the text. Source: openai.com/tokenizer

7. Large Language Models (LLMs)

LLMs are super-smart algorithms trained to understand and generate human-like, natural language text. They power AI products and models like ChatGPT, Claude, Llama, and others.

LLMs are built by feeding massive datasets through complex neural networks, allowing them to predict and articulate responses. 

For example, if you ask an LLM to write a poem about space, it not only smashes together verses but might even whip up cosmic imagery and metaphors. Or, if you give it an incomplete email draft, it'll finish it off.

These models offer everything from content creation to language translation and even coding assistance. They're not perfect, sure, but they're rapidly learning and becoming snappier with each update and fine-tuning.
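
For a hands-on taste, here's a minimal sketch using the open-source Hugging Face transformers library (assuming it's installed; GPT-2 is a small, dated model chosen only because it's quick to download, so don't expect ChatGPT-level answers):

```python
from transformers import pipeline

# Load a small open-source language model. The predict-the-next-token
# principle is the same one modern LLMs use, just at a tiny scale.
generator = pipeline("text-generation", model="gpt2")

result = generator("Space is", max_new_tokens=20)
print(result[0]["generated_text"])
```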

8. Generative AI 

Generative AI is a type of artificial intelligence designed to create new content, such as images, music, code, and videos. Unlike traditional software, which simply follows the rules set by programmers, generative AI models use the knowledge they are trained on to produce new, original material.

Remember the neural networks and the connections between them?

Thanks to these networks, generative AI combines different elements from its database to create something new.

For example, ChatGPT can respond to questions and write essays by analyzing and reinterpreting data, rather than copying it directly.

So, if you start a sentence, it tries to predict the next logical word or phrase based on patterns it has seen before.
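
Here's a toy sketch of that predict-the-next-word idea (the sample text is invented, and real systems use neural networks rather than simple word counts, but the principle is similar):

```python
import random
from collections import defaultdict

# Count which word follows which in some sample text.
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

follows = defaultdict(list)
for current, nxt in zip(text, text[1:]):
    follows[current].append(nxt)

# Generate: repeatedly pick a plausible next word based on observed patterns.
word = "the"
sentence = [word]
for _ in range(6):
    options = follows.get(word)
    if not options:    # dead end: this word was never followed by anything
        break
    word = random.choice(options)
    sentence.append(word)
print(" ".join(sentence))
```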

Generative AI has gained mainstream popularity, with ChatGPT leading the way with around 200 million weekly active users.
9. AI Hallucinations 

When we communicate with generative AI and ask questions, it responds based on prediction techniques. No consciousness, context, or self-awareness is involved. Because of this, AI models can make incorrect assumptions or generate responses that don't make sense.

For example, Google's AI-powered search feature, AI Overviews, previously advised users to put glue on pizza to prevent cheese from slipping. This happened because it generated the answer based on a Reddit post without recognizing the sarcasm.

As a result, Google scaled back its AI search feature in June 2024, about two weeks after its launch.

Hallucinations have been one of the biggest problems in large language models. Although researchers are working on new methods to address the issue, it's still unknown whether it can be completely resolved.

10. AI Bias 

Interestingly, AI models can be biased and show signs of stereotypical thinking. The term AI bias describes the phenomenon where artificial intelligence systems treat different groups of people unfairly. 

There are several reasons for this.

First, AI learns from data, and if the data is unbalanced or contains existing societal prejudices, the AI adopts those same unfair patterns. For example, imagine you only taught someone about doctors by showing pictures of men in white coats. They might start thinking only men can be doctors. AI can make the same mistake. If it's trained mostly on data about one group, it might not work as well for others.

The tricky part is that this bias often sneaks in without anyone intending it. Bias can come from historical patterns (like old, unfair hiring decisions) or simply from not having diverse enough data to train the AI properly.
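
Here's a toy sketch of how skew in training data turns into skew in predictions (the hiring data below is entirely invented): a model trained on examples where one group is always hired learns to treat group membership as the deciding factor.

```python
from sklearn.linear_model import LogisticRegression

# Invented, deliberately unbalanced data:
# features = [years of experience, group (0 = A, 1 = B)], label = hired or not.
X = [[5, 0], [6, 0], [7, 0], [8, 0], [5, 1], [6, 1], [7, 1], [8, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 0]   # group A always hired, group B never

model = LogisticRegression().fit(X, y)

# Two candidates identical except for group get different outcomes:
print(model.predict([[7, 0], [7, 1]]))   # likely [1 0]
```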

On top of this, research has found that large language models reflect the ideology of their creators. Not very surprising, huh?
Meme inspired by the movie I, Robot. Source: markmcneilly.substack.com

11. AI Ethics 

AI ethics refers to the responsible development of AI systems guided by moral principles and values. These practices include protecting human rights to safety, security, and privacy. AI systems are also expected to promote diversity, fairness, and sustainability.

A significant ethical question in AI development is protecting copyright and intellectual property rights. AI is often trained on books, art, music, code, and other types of content, which raises concerns about whether it's a violation to use human work without permission.

There's also a question of who owns the material created by AI: the creator of the original content or the AI company.

There have been a number of lawsuits by content producers against AI companies. One case was filed by The New York Times against OpenAI and Microsoft for using its articles for training. Another was initiated by Dow Jones, publisher of The Wall Street Journal, and the New York Post, suing generative AI search engine Perplexity.

Rules and laws regulating the industry could be a solution to these ethical problems.

12. Data Poisoning 

The practice of feeding incorrect data to AI to manipulate its results and degrade its performance is called data poisoning.

In a data poisoning attack, the data or web page used to train the AI model can include a trigger phrase, pattern, or image. If such an element makes its way into the model's training data, it can corrupt the model.

Data poisoning can be used to compromise AI models or protect artists' works from unauthorized use. For example, tools like Nightshade prevent AI systems from using data from images by altering pixels.

As a result, images appear entirely different to AI systems. For example, an image of a person with subtle pixel changes might be perceived as an image of a cat by the AI.
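
Here's a toy sketch of a simpler, label-flipping style of poisoning (the data is invented): a few deliberately mislabeled examples are enough to corrupt a small model's predictions.

```python
from sklearn.tree import DecisionTreeClassifier

# Clean training data: [weight in grams, smoothness 0-10] -> fruit label.
features = [[150, 9], [170, 8], [130, 3], [120, 2]]
labels = ["apple", "apple", "orange", "orange"]

# An attacker slips in poisoned examples: apple-like inputs labeled "orange".
features += [[155, 9], [165, 8], [160, 9]]
labels += ["orange", "orange", "orange"]

model = DecisionTreeClassifier().fit(features, labels)
print(model.predict([[160, 8]]))   # a clear apple now likely comes back "orange"
```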
AI poisoning tool Nightshade. Source: reddit.com

13. Inference 

AI development generally consists of two stages: training and inference. Training is the process of feeding data into the model and optimizing it to recognize patterns.

In the inference stage, the model is deployed and becomes available for real-world use cases. This is when the system makes decisions based on new, unseen data.

Inference is typically faster and less resource-intensive than training, as it involves applying the learned knowledge from training to perform tasks in real-time or near-real-time.

It's the practical side of AI, as it allows models to actually apply what they have learned.
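
A minimal sketch of the two stages, using scikit-learn and invented data: the expensive fit step is training, while the cheap, repeatable predict calls on new data are inference.

```python
from sklearn.linear_model import LogisticRegression

# --- Training stage: done once, resource-intensive on real datasets ---
X_train = [[1], [2], [3], [4], [5], [6]]
y_train = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X_train, y_train)

# --- Inference stage: fast, applied to new, unseen data ---
for new_example in [[1.5], [5.5]]:
    print(new_example, "->", model.predict([new_example])[0])
```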

14. Prompt 

Prompts are cues or commands that guide the AI in generating a response or completing a task. You input a prompt like, "Write a short story about a heroic cat" or "Create an image of a sunny beach day," and the AI gets to work, using complex algorithms to deliver the best possible output based on your input.

People use AI prompts for a variety of purposes, ranging from creative writing and art generation to solving complex problems and automating tasks.

The clearer and more detailed your prompt is, the better the chances of receiving a quality answer from the AI model. Another related term is prompt engineering, the process of designing and optimizing prompts for better outcomes.
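
Programmatically, sending a prompt can look like the sketch below, which uses the OpenAI Python library (it assumes the library is installed, an API key is configured, and that the gpt-4o-mini model name is still available; all of these are assumptions, not requirements):

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The prompt goes in as a user message; the model's reply comes back as text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in any available model
    messages=[{"role": "user",
               "content": "Write a short story about a heroic cat."}],
)
print(response.choices[0].message.content)
```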
Finding the right prompt to generate the needed results is complex. Source: meme-arsenal.com

15. Turing Test 

The Turing Test is a concept created by British mathematician and computer scientist Alan Turing back in 1950. It's designed to see if a machine can show capabilities of thinking like a human.

Here's how the Turing Test works: a person has a text-based chat with both a human and a machine (like a chatbot) without knowing which is which. If the person can't tell the machine from the human, then the machine is said to have passed the test, meaning it can mimic human-like intelligence.

Turing originally called it the "Imitation Game," and it's been a big part of discussions around artificial intelligence. The test raises interesting questions about whether machines can really think or understand like we do.
