AI vs Machine Learning vs Generative AI - The Hierarchy Explained

Nam Hoang / Jan 29, 2025

4 min read

In the current technological landscape, buzzwords like Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs) are often used interchangeably. However, these terms represent distinct concepts that exist within a specific hierarchy. To understand the modern AI boom, it is crucial to understand how these technologies relate to one another.

Think of these concepts like a set of Russian nesting dolls. AI is the largest doll, encompassing everything else. Inside AI fits Machine Learning. Inside Machine Learning fits Deep Learning. Finally, inside Deep Learning fits the most recent and explosive advancement: Generative AI. While AI research dates back to the 1950s and rule-based "Expert Systems" flourished in the 1980s, the introduction of modern Foundation Models has accelerated adoption rates drastically.

I. The Hierarchy of Artificial Intelligence

To visualize the ecosystem, one must view each technology as a subset of the previous one.

Artificial Intelligence (AI) is the broad umbrella term. It refers to any technique that enables computers to mimic human intelligence, specifically the ability to learn, infer, and reason. The goal is to create systems that match or exceed human capabilities.

Machine Learning (ML) is a subset of AI. Unlike traditional programming, where humans write explicit rules, ML algorithms learn the rules themselves by analyzing data. The central premise is optimization: if you feed an algorithm enough training data, it can learn to recognize patterns and make predictions on new, unseen data.
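The contrast with explicit rules can be made concrete with a minimal sketch: instead of hand-coding the rule y = 2x + 1, we let gradient descent recover the slope and intercept from example pairs. The data points, learning rate, and iteration count below are illustrative choices, not from the article.

```python
# Samples generated by a hidden rule, y = 2x + 1; the program never sees the rule itself.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

w, b = 0.0, 0.0   # start with no knowledge of the rule
lr = 0.01         # learning rate

for _ in range(5000):
    for x, y in data:
        err = (w * x + b) - y     # prediction error
        w -= lr * err * x         # gradient step on the slope
        b -= lr * err             # gradient step on the intercept

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```

After training, the "rule" lives in the learned parameters w and b rather than in code a human wrote.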

Deep Learning (DL) is a specialized subset of Machine Learning. It utilizes Neural Networks, multi-layered structures loosely inspired by the human brain. While highly effective, Deep Learning models are often considered "black boxes" because their complex, multi-layered decision-making processes can be difficult for humans to interpret.
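A toy forward pass shows what "multi-layered" means in practice: each layer is just a weighted sum followed by a nonlinearity, and layers are stacked. The weights below are arbitrary illustrative values, not trained ones.

```python
import math

def layer(inputs, weights, biases):
    # Each neuron: nonlinearity applied to a weighted sum of the inputs plus a bias.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -0.2]                                            # input features
hidden = layer(x, [[0.8, -0.5], [0.3, 0.9]], [0.1, -0.1])  # hidden layer (2 neurons)
output = layer(hidden, [[1.2, -0.7]], [0.0])               # output layer (1 neuron)
print(output)
```

The "black box" problem arises because a real network stacks many such layers with millions of weights, so no single weight has a human-readable meaning.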

Generative AI (GenAI) is a subset of Deep Learning. This includes technologies like chatbots, image generators, and deepfakes. Rather than just analyzing existing data, these models generate new content.

II. Machine Learning Paradigms

Regardless of whether a model is "Classic ML" or modern AI, machines generally learn through three specific paradigms.

1. Supervised Learning
This method is akin to learning with a teacher. The model is trained on labeled data, also known as "ground truth." The goal is to map inputs to the correct outputs.

  • Regression: Used to predict continuous numerical values (e.g., predicting house prices based on square footage).
  • Classification: Used to predict categories (e.g., determining if an email is "Spam" or "Not Spam").
Input:  Email Content -> "Congratulations! You won a lottery..."
Label:  Spam
Result: Model learns to associate specific keywords with the "Spam" label.
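The spam example above can be sketched from scratch: "training" counts how often each word appears under each label, and prediction scores a new email against those counts. The tiny dataset is invented for illustration, and a real classifier would use probabilities and smoothing.

```python
from collections import Counter

# Labeled training data ("ground truth"): each email paired with its label.
train = [
    ("congratulations you won a lottery", "spam"),
    ("claim your free prize now", "spam"),
    ("meeting moved to three pm", "ham"),
    ("please review the attached report", "ham"),
]

# Training: count word occurrences per label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in train:
    counts[label].update(text.split())

def predict(text):
    # Score the unseen email by how many of its words each label has seen.
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("you won a free prize"))   # -> spam
```

The model was never told that "won" or "prize" signal spam; it learned that association purely from the labeled examples.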

2. Unsupervised Learning
This method involves learning without a teacher. The machine is fed unlabeled data and must discover the hidden structure or patterns on its own.

  • Clustering: Grouping similar items together. For example, an IT department might use clustering to group support tickets into themes like "Password Resets" or "Hardware Failures" without pre-defining those categories.
  • Dimensionality Reduction: Techniques like PCA (Principal Component Analysis) reduce the complexity of data by narrowing down variables while keeping the meaningful information.
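Clustering can be sketched with a bare-bones k-means loop in one dimension: given unlabeled numbers, the algorithm alternates between assigning points to the nearest centroid and moving each centroid to the mean of its points. The data, the choice of k = 2, and the starting guesses are all illustrative.

```python
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two obvious groups, but no labels given
centroids = [0.0, 5.0]                     # arbitrary starting guesses

for _ in range(10):
    # Assignment step: each point joins the cluster of its nearest centroid.
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: each centroid moves to the mean of its cluster.
    centroids = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centroids))   # -> [1.0, 9.1]
```

The two groups emerge from the data's structure alone, which is exactly what "learning without a teacher" means.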

3. Reinforcement Learning
This method focuses on trial and error. An "agent" interacts with an environment and learns a "policy" to maximize rewards and minimize penalties.

  • Example: A self-driving car.
    • Reward: Staying in the proper lane; obeying traffic lights.
    • Penalty: Hard braking; hitting a curb.
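The reward-and-penalty loop can be sketched with toy Q-learning: an agent on a five-cell track learns, by trial and error, that moving right toward the goal earns reward. The states, rewards, and hyperparameters below are illustrative, far simpler than anything a self-driving car would use.

```python
import random

random.seed(0)
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}   # Q-table: value of each (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2                       # learning rate, discount, exploration

for _ in range(500):                                    # episodes of trial and error
    s = 0
    while s != 4:                                       # cell 4 is the goal
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        nxt = min(4, max(0, s + a))
        r = 1.0 if nxt == 4 else -0.01                  # reward at goal, small step penalty
        best_next = 0.0 if nxt == 4 else max(q[(nxt, -1)], q[(nxt, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])  # Q-learning update
        s = nxt

policy = {s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(4)}
print(policy)   # the learned policy moves right (+1) in every state
```

No one told the agent which direction was correct; the reward signal alone shaped its policy.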

III. Foundation Models and Generative AI

The recent explosion in AI adoption is largely due to the development of Foundation Models. These are massive models trained on broad datasets that can be adapted to a wide variety of tasks. A Large Language Model (LLM) is a specific type of foundation model trained on text.

A helpful analogy for Generative AI is Music Composition. Every musical note has already been invented; a composer does not invent new frequencies. Instead, they rearrange existing notes into new patterns to create a unique song. Similarly, Generative AI takes the massive amount of data it has been trained on (the "notes") and rearranges it to generate new content (the "song").

To ensure these models are helpful and polite, developers use a technique called RLHF (Reinforcement Learning from Human Feedback).

# Conceptual flow of RLHF
1. AI generates a response.
2. Human annotator rates the response (Reward/Penalty).
3. AI updates its policy to maximize future rewards.
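The three-step flow above can be sketched in a highly simplified form: here the "policy" is just a preference weight per canned response, and human ratings nudge those weights. The responses, ratings, and update rule are invented for illustration and bear only a conceptual resemblance to real RLHF.

```python
responses = ["rude reply", "helpful reply", "off-topic reply"]
weights = {r: 1.0 for r in responses}                     # initial policy: no preference
human_ratings = {"rude reply": -1, "helpful reply": +1, "off-topic reply": -1}

for _ in range(20):                                       # repeated feedback rounds
    for r in responses:                                   # step 1: generate a response
        reward = human_ratings[r]                         # step 2: human rates it
        weights[r] = max(0.01, weights[r] + 0.1 * reward) # step 3: update the policy

best = max(weights, key=weights.get)
print(best)   # -> helpful reply
```

Over many rounds, the policy drifts toward the responses humans rewarded, which is the essence of the alignment step.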

By combining the pattern recognition of classic Machine Learning with the scale of Foundation Models and the nuance of Human Feedback, we have arrived at the current era of Generative AI.