How Large Language Models Work

An interactive, step-by-step explainer of what large language models actually do under the hood, written for non-technical readers.

What This Explainer Covers

  1. The AI landscape: where LLMs sit in the broader field of artificial intelligence and machine learning.
  2. Tokenization: how text gets chopped into pieces the model can process.
  3. Embeddings: how those pieces become numbers that capture meaning.
  4. Attention: how the model decides which words in your input matter most when producing each output word.
  5. Transformer layers: the stacked architecture that makes modern LLMs possible.
  6. Training: how a model learns from massive amounts of text.
  7. Alignment: how labs try to make models helpful, harmless, and honest.
  8. Generation: how the model actually produces a response, one token at a time.
  9. Reasoning models: a newer generation of models that work through problems step by step before answering.
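
To give a flavor of items 2 and 8 before we dive in: a language model breaks text into tokens and then predicts one token at a time. The tiny sketch below is a hypothetical toy, not a real LLM: its "model" is just a hand-written table of which token tends to follow which, whereas a real LLM learns billions of such patterns from data. The table entries and the `generate` function are illustrative inventions for this explainer.

```python
# Toy sketch of token-by-token generation.
# The "model" is a hand-written lookup table: for each token, the
# probability of each possible next token. A real LLM computes these
# probabilities with a neural network instead of a table.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.9, "<end>": 0.1},
    "sat": {"<end>": 1.0},
}

def generate(max_tokens=10):
    tokens = ["<start>"]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1], {"<end>": 1.0})
        # Greedy decoding: always pick the most likely next token.
        next_token = max(choices, key=choices.get)
        if next_token == "<end>":  # the model decides it is done
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])

print(generate())  # one token at a time: "the", then "cat", then "sat"
```

Real models also sample from the probabilities rather than always taking the top choice, which is why the same prompt can produce different answers. The chapters below unpack each step in plain language.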

Who It Is For

Built for nonprofit leaders, social impact professionals, and anyone curious about how AI tools like Claude, ChatGPT, and Gemini actually work. No technical background or math required.