AI Literacy Series: The must-know terms for anyone who wants to build AI literacy

Whether you’re already using tools like ChatGPT or just getting started, one thing is clear: AI is becoming part of everyday work. But if the language around AI still feels overwhelming, you’re not alone. This glossary was made for the real world, for leaders navigating change, professionals upskilling mid-career, and curious minds wondering if they’re “behind.” We’ve stripped away jargon and distilled the must-know terms into clear, relatable language, so you can actually use them, not just read them. It is part of our AI Literacy Series, designed to help both leaders and everyday professionals build practical understanding and confidence in using AI.

We’ve broken it down into sections to help you understand the most important terms, what they mean, and why they matter. Let’s make AI literacy practical, human, and clear.

The AI basics (core concepts)

Before you explore tools or strategy, it helps to get grounded in the basics. These are the foundational ideas you’ll see pop up everywhere, from articles and tool descriptions to conversations at work.

Artificial Intelligence (AI)

AI is a broad field in computer science focused on building machines and systems that can mimic human intelligence. This includes the ability to learn from experience, understand patterns, make decisions, and solve problems. AI shows up in many forms — from simple rule-based systems to advanced models capable of generating content or holding conversations.

AI Literacy

AI literacy is a set of skills, knowledge, and ways of thinking that enable people to understand, engage with, and evaluate artificial intelligence systems — not just as users, but as informed participants in a world increasingly shaped by AI. It includes the ability to understand how AI works at a basic level, use AI tools effectively and responsibly in real-life situations, and think critically about when and how AI is applied and question its outputs.

AI literacy means knowing how to work with AI, not just around it. It’s about becoming confident, capable, and thoughtful in a world where AI is increasingly part of our tools, tasks, and decisions. It’s not about doing everything with AI — it’s about knowing how to do the right things with it.

Machine Learning (ML)
Machine Learning is a type of AI where systems learn from data and improve over time without needing to be explicitly programmed for every outcome. It is present in our everyday life and is the foundation of how many AI applications get smarter the more they’re used.

Examples:

  • Netflix recommending shows based on your past viewing habits.
  • Face ID on smartphones, which uses the TrueDepth camera and machine learning for secure authentication.
  • Smartphone photo apps recognising pets and grouping them into albums.
  • Salesforce Einstein case classification, which classifies support cases based on their details.
  • Automated content moderation on social media platforms.
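To make the idea of “learning from data rather than explicit rules” concrete, here is a toy sketch in Python. The training examples and labels are made up for illustration: instead of hand-coding spam rules, the code counts word frequencies in labelled messages and uses those counts to score new ones. Real ML systems are far more sophisticated, but the principle is the same.

```python
# Toy "learning from data" example: no hand-written rules,
# just counting how often words appear under each label.
from collections import Counter

# Made-up labelled training examples.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to friday", "not spam"),
    ("lunch on friday?", "not spam"),
]

# "Training": count word frequencies per label.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by how often it has seen the message's words.
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))            # → spam
print(classify("reschedule our friday meeting"))  # → not spam
```

The key point: add more labelled examples and the classifier improves on its own, with no change to the code. That is the sense in which ML systems “get smarter the more they’re used.”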
Generative AI (GenAI)
Generative AI creates new content like text, images, or music.

Examples:

  • When you ask ChatGPT to write a poem or Midjourney to create an image, that’s GenAI.
  • Text-to-video applications that generate short product teaser videos from simple text prompts, helping teams quickly test creative concepts without hiring a videographer (e.g. RunwayML).
  • Generating boilerplate code, unit tests, or SQL queries from prompts (an approach sometimes called vibe coding).
Large Language Model (LLM)
A Large Language Model is a type of AI trained on massive amounts of text to generate human-like responses. It powers most text-based generative AI tools.

ChatGPT, Claude, and Perplexity are examples of tools built on LLMs.

Neural Network

Algorithms loosely inspired by how the human brain works. They help AI learn patterns — like recognising faces in photos or predicting which emails are spam. Neural networks are a core part of machine learning. Most modern AI tools, including voice assistants and image generators, are built on neural networks.
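To see what “loosely inspired by the brain” means in practice, here is a minimal sketch of a single artificial neuron: it multiplies its inputs by weights, sums them, and squashes the result through an activation function. Real networks stack thousands or millions of these; the weights below are made-up numbers, not trained values.

```python
# One artificial "neuron": weighted sum of inputs + bias,
# passed through a sigmoid activation that squashes output to 0..1.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two input signals with hypothetical (untrained) weights.
output = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.535
```

Training a network means nudging those weights, across many neurons, until the outputs match the patterns in the data.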

Natural Language Processing (NLP)
NLP is how computers understand and work with human language. Voice assistants like Siri or Alexa use NLP to interpret your questions.
Prompt
A prompt is what you type into an AI tool to get a response, like asking ChatGPT, ‘Summarise this article in 3 bullet points.’
Model
In AI, a model is the brain of the system — it’s what has been trained on data to make predictions, generate content, or perform tasks. For example, the models behind ChatGPT are large language models trained on a vast amount of text. There are different types of models trained for different purposes — some understand language, some recognise images, others generate music or write code. Some models are designed to be domain-specific, meaning they specialise in a particular task or industry (like financial forecasting, customer service, or medical diagnostics). Meanwhile, general-purpose models like the ones used in ChatGPT or Claude are trained on a broad range of internet text to understand and generate human-like language. These systems often contain multiple model versions (e.g. GPT-4, Claude 3) with different strengths: some may be better at reasoning, others at creative writing or summarising. That’s why you might notice one model writing smoother responses while another excels at structuring data.
Token
Tokens are the building blocks of text that AI models read and process — usually a word or part of a word. Tools like ChatGPT break down your prompt and their response into tokens to understand and generate language. Tokens are used to calculate processing limits and pricing. For instance, if ChatGPT has a 4,000-token limit, that includes both your input and its full reply. So, the longer your conversation, the more tokens it uses.
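A rough rule of thumb for English text is about four characters per token, which is handy for back-of-envelope budgeting. The sketch below uses that heuristic; exact counts depend on each model’s tokenizer (libraries like OpenAI’s tiktoken give precise counts), so treat this as an estimate only.

```python
# Rough token estimate: ~4 characters per token for English text.
# This is a heuristic, not the model's real tokenizer.

def estimate_tokens(text):
    return max(1, len(text) // 4)

def fits_in_context(prompt, expected_reply_tokens, limit=4000):
    # Both your input and the model's reply count against the limit.
    return estimate_tokens(prompt) + expected_reply_tokens <= limit

prompt = "Summarise this article in 3 bullet points."
print(estimate_tokens(prompt))                        # → 10
print(fits_in_context(prompt, expected_reply_tokens=500))  # → True
```

This is why long chat sessions eventually hit a wall: every earlier message in the conversation is re-sent as part of the input, eating into the same shared limit.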

Practical AI in use

You’ve probably seen or used AI tools, maybe without even realising it. This section introduces terms that describe how AI shows up in your day-to-day tools and workflows. From chatbots on websites to automated insights in your docs, this is where AI gets hands-on.

Chatbot

AI that simulates conversation, often used in customer service or support roles. For example, the pop-up chat on a website that helps you track a package or reset a password is usually a chatbot.

Unlike classic chatbots, which rely on fixed, rule-based scripts, AI-powered chatbots use natural language processing to interpret user requests and generate relevant responses or next steps.

More advanced systems like Salesforce’s AgentForce take this further, combining AI with CRM integration to personalise support, escalate issues, or even complete tasks within internal systems.

Agentic AI

AI systems that can make decisions, complete multi-step tasks, and interact with other systems or tools, often with minimal human intervention.

A real-world example is Daisy Lite, our content agent at Meadow Brooke, which orchestrates tasks like writing, reviewing, linking, and preparing content for publication, all while working in sync with human feedback.

AI-Assisted / AI-Augmented

Tools that support your work without taking over. Think of Canva’s Magic Write suggesting a content draft, which you then edit. The AI does the heavy lifting, but you stay in control.

Text-to-Image / Text-to-Video

Tools like Midjourney or Runway generate visual media from a text prompt. Say “a sunset over a futuristic city” and you get a unique, AI-created image or video.

AI Embedding / Integration

AI features built directly into the software you already use, rather than offered as a separate tool. Think of an email app suggesting replies as you type, or a CRM surfacing AI-generated insights inside the record you’re viewing. The AI is woven into the workflow instead of being a destination you visit.

Copilot

AI embedded in tools to help you work smarter. Microsoft Copilot, for instance, can suggest text in Word, summarise emails in Outlook, or generate slide content in PowerPoint.

Key terms for responsible use

AI is powerful, and with that comes responsibility.

Whether you’re a team lead, an individual contributor, or just curious, knowing how to use AI responsibly is part of being digitally fluent. This section covers key terms around fairness, accuracy, and human oversight. Understanding these doesn’t just help you stay safe, it helps you ask better questions and be a more thoughtful user or buyer.

Bias

AI systems are only as good as the data they’re trained on. If that data reflects stereotypes or unequal patterns, the AI can unintentionally amplify them. For example, an image generator that always associates nurses with women or CEOs with men reveals underlying training bias. Spotting and questioning bias is essential for fair and inclusive outcomes.

Hallucination

When AI generates confident but incorrect or made-up information, it’s called a hallucination. This is especially common with language models. For example, ChatGPT might invent statistics or references if it doesn’t have real ones.

Transparency

Transparency in AI is about understanding how and why an AI system makes decisions. Transparent tools let users know what data was used, how results were produced, or whether human review is part of the process. It’s key for trust and accountability — especially in regulated industries like healthcare or finance.

Ethical AI

Ethical AI is designed and used in ways that prioritise fairness, safety, privacy, and social good. It means considering the human impact of your AI project, not just technical success. For example, building an AI to help screen candidates should include safeguards against discriminatory practices.

Human-in-the-Loop (HITL)

AI often works best when paired with human judgment. HITL is when people remain involved in key parts of the process — reviewing, correcting, or overriding AI outputs. This is especially useful in high-stakes or nuanced work, like legal research, medical advice, or content moderation.

Data Privacy

Any time AI uses personal or sensitive data, privacy matters. Responsible tools anonymise or protect this data, and users should know what information is being collected, how it’s stored, and how it’s used. Think of your voice assistant storing recordings — transparency and control are critical.

Want to keep learning?

Oni Leach

I’m passionate about building Agentic AI systems that work with people, systems that enhance human creativity, reduce busywork, and actually make teams better at what they do. I believe in starting simple, building smart, and scaling collaboratively, because sustainable change doesn’t come from massive launches, it comes from useful tools people want to keep using.
