Glossary

Quick reference for important terms used throughout Prompt University

A

Abstractive Summarization

A summarization approach that generates new text to capture the main ideas, rather than extracting sentences directly from the source.

API (Application Programming Interface)

A set of protocols that allows different software applications to communicate with each other. In prompting, APIs enable LLMs to interact with external tools and services.

C

Chain of Thought (CoT)

A prompting technique that encourages step-by-step reasoning by making the thinking process explicit. Dramatically improves accuracy on complex reasoning tasks. Example: “Let’s think step-by-step: First, we calculate… Then, we…”
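A minimal sketch of how a CoT trigger can be added programmatically (the `chain_of_thought_prompt` helper is illustrative, not a standard API):

```python
def chain_of_thought_prompt(question):
    """Wrap a question with a step-by-step trigger so the model
    reasons aloud before committing to a final answer."""
    return (f"Question: {question}\n"
            "Let's think step by step, then state the final answer "
            "on a line starting with 'Answer:'.")

print(chain_of_thought_prompt(
    "A train travels 60 km in 1.5 hours. What is its average speed?"))
```

Asking for the answer on a marked line makes it easy to parse the final result out of the model's reasoning.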

Cloze-Style Prompting

A fill-in-the-blank approach where the model completes missing parts of text. Example: “The capital of France is ___.”

Constrained Generation

Text generation with specific requirements or limitations (length, tone, format, etc.). Example: “Write exactly 100 words in a professional tone.”
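Constraints like these are often checked programmatically after generation; a toy validator (the `check_constraints` name is illustrative) might look like:

```python
def check_constraints(text, exact_words=100):
    """Return True if the text meets an exact word-count constraint.
    Real pipelines typically re-prompt the model when a check fails."""
    return len(text.split()) == exact_words

draft = " ".join(["word"] * 100)  # stand-in for a generated draft
print(check_constraints(draft))  # → True
```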

Context Window

The maximum amount of text (measured in tokens) that an LLM can process at once, including both the prompt and the response.

D

Decomposition

Breaking down complex problems into smaller, manageable sub-problems that can be solved sequentially or independently.

E

Ensembling

Combining multiple approaches, reasoning paths, or model outputs to create more robust and accurate results.

Extractive Summarization

A summarization approach that selects and combines key sentences directly from the source text.

F

Few-Shot Learning

Providing a small number of examples (typically 2-5) in the prompt to demonstrate the desired task or output format. Example:
Example 1: Input → Output
Example 2: Input → Output
Now: Your input → ?
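A minimal sketch of assembling such a prompt from example pairs (the `build_few_shot_prompt` helper is illustrative, not a standard API):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs,
    ending at the point where the model should continue."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# Sentiment classification with two demonstrations
examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
print(build_few_shot_prompt(examples, "The food was amazing."))
```

Ending the prompt with a bare `Output:` steers the model to complete the pattern the examples establish.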

Fine-Tuning

The process of further training a pre-trained model on specific data to adapt it for particular tasks. Different from prompting, which doesn’t modify the model.

H

Hallucination

When an LLM generates information that sounds plausible but is factually incorrect or not grounded in the provided context. RAG helps reduce hallucinations.

I

In-Context Learning

The ability of LLMs to learn and adapt to new tasks based solely on examples and instructions provided in the prompt, without any parameter updates.

Instruction

The part of a prompt that tells the model what task to perform or what output to generate.

L

Least-to-Most Prompting

A decomposition strategy that starts with the simplest sub-problem and progressively builds to more complex ones.

LLM (Large Language Model)

A neural network trained on vast amounts of text data that can generate human-like text and perform various language tasks.

M

Multi-Label Classification

Classification where an item can belong to multiple categories simultaneously (vs. multi-class classification, where it belongs to exactly one).

N

Named Entity Recognition (NER)

The task of identifying and categorizing named entities (people, organizations, locations, dates, etc.) in text.

NLP (Natural Language Processing)

The field of AI focused on enabling computers to understand, interpret, and generate human language.

O

One-Shot Learning

Providing exactly one example in the prompt to demonstrate the desired task or output format.

Output Indicator

A prompt component that signals where the model should begin its response (e.g., “Answer:”, “Output:”, “Response:”).

P

Parameter

A learned weight in a neural network. LLMs have billions of parameters that encode knowledge from training data.

Prompt

The input text provided to an LLM that includes instructions, context, examples, and/or questions to guide the model’s response.

Prompt Engineering

The practice of designing and optimizing prompts to achieve desired outputs from LLMs.

R

RAG (Retrieval-Augmented Generation)

A technique that retrieves relevant information from a knowledge base and includes it in the prompt context before generation, reducing hallucinations and improving factual accuracy.
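A toy sketch of the retrieve-then-prompt pattern; word-overlap scoring stands in for the embedding-based semantic search a real RAG system would use, and both helper names are illustrative:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query — a crude
    stand-in for embedding-based semantic search."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, documents):
    """Place retrieved context ahead of the question so the model
    can ground its answer in it."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language.",
]
print(build_rag_prompt("Where is the Eiffel Tower?", docs))
```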

Reasoning

The process of drawing conclusions or making inferences based on available information. Advanced prompting techniques like CoT make reasoning explicit.

Role Assignment

Instructing the model to adopt a specific persona or expertise (e.g., “You are a Python expert”).

S

Self-Consistency

A technique that generates multiple independent reasoning paths and uses majority voting to select the most reliable answer.
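The voting step is simple to sketch, assuming the final answers have already been extracted from each sampled reasoning path:

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over final answers from several independently
    sampled reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

# Final answers extracted from five independent CoT samples
sampled = ["42", "42", "41", "42", "39"]
print(self_consistency(sampled))  # → 42
```

In practice the sampled paths come from running the same CoT prompt several times at a nonzero temperature.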

Self-Refinement

An iterative process where the model generates output, critiques it, and produces an improved version.

Semantic Search

Search based on meaning rather than exact keyword matching, often used in RAG systems to find relevant context.

Shot

Refers to the number of examples provided in a prompt (zero-shot, one-shot, few-shot).

Style Transfer

Transforming text to match a different tone, reading level, or writing style while preserving the core meaning.

T

Temperature

A parameter that controls randomness in generation. Lower values (0.0-0.3) produce more deterministic outputs; higher values (0.7-1.0) produce more creative, varied outputs.
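Under the hood, temperature divides the model's logits before the softmax that produces sampling probabilities; a small numeric sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to sampling probabilities. Dividing by the
    temperature sharpens the distribution when T < 1 and flattens
    it when T > 1."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.3))  # sharply peaked: near-deterministic
print(softmax_with_temperature(logits, 1.0))  # flatter: more varied sampling
```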

Token

The basic unit of text that LLMs process. A token can be a word, part of a word, or punctuation. Most models have token limits for input and output. Example: “Hello world” = 2 tokens

Tool Integration

Connecting LLMs to external tools (calculators, APIs, databases) to extend their capabilities beyond text generation.

V

Verification

The process of checking whether a generated answer or reasoning is correct, often included as a step in CoT prompting.

Z

Zero-Shot Learning

Asking the model to perform a task without providing any examples, relying solely on instructions and the model’s pre-trained knowledge. Example: “Translate this to French: Hello” (no translation examples provided)

Zero-Shot CoT

Triggering Chain of Thought reasoning without examples by using phrases like “Let’s think step-by-step.”