Duration: 60 minutes
Introduction
Here’s something remarkable: LLMs can learn new tasks just by seeing examples in the prompt—no training required. This “learning during inference” is called in-context learning, and it’s one of the most powerful features of modern LLMs.

The Big Idea: You can teach an AI system a new task simply by showing it examples within your prompt. No model updates, no training data, no technical expertise required.
The Three Learning Modes
In-context learning comes in three flavors, each with different use cases and effectiveness:

- Zero-Shot: No examples provided, pure instruction following. Best for well-defined tasks, strong models, and clear instructions.
- One-Shot: A single example demonstrates the desired pattern or format. Best for clarifying ambiguous instructions.
- Few-Shot: Multiple examples (typically 2-5) establish a clear pattern. Best for complex or ambiguous tasks.
Zero-Shot Learning
What It Is
Zero-shot learning means giving the model instructions without any examples. You rely entirely on the model’s pre-trained knowledge and ability to follow directions.

Example: Grammar Correction
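A zero-shot version of this task might look something like the sketch below; the exact wording is illustrative, not a fixed template:

```text
Correct the grammar in the following sentence. Return only the corrected sentence.

Sentence: "Him and me was going to the store yesterday."
```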
When Zero-Shot Works Best

- Strong Models: Modern LLMs like GPT-4 or Claude excel at zero-shot tasks
- Common Tasks: Translation, summarization, basic Q&A
- Clear Instructions: When you can describe exactly what you want
- Standard Formats: Well-known output structures
Zero-Shot Examples
Translation
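A zero-shot translation prompt can be as simple as the instruction itself; one possible version:

```text
Translate the following sentence into French:

"The meeting has been moved to Thursday afternoon."
```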
Sentiment Analysis
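Sentiment analysis works the same way; a minimal sketch:

```text
Classify the sentiment of this review as Positive, Negative, or Neutral.

Review: "The battery lasts all day, but the screen scratches far too easily."
```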
Code Generation
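For code generation, a plain instruction is often enough; for instance:

```text
Write a Python function that takes a list of numbers and returns the median.
Include a short docstring.
```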
One-Shot Learning
What It Is
One-shot learning provides a single example to demonstrate the desired pattern or format. This helps clarify ambiguous instructions and shows the model exactly what you want.

Example: Grammar Correction
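Adding a single worked example to the grammar task might look something like this:

```text
Correct the grammar in each sentence.

Sentence: "She don't like apples."
Corrected: "She doesn't like apples."

Sentence: "Him and me was going to the store yesterday."
Corrected:
```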
The Power of One Example
A single example can:

- Clarify format: Show exactly how output should look
- Demonstrate style: Establish tone and structure
- Reduce ambiguity: Make implicit requirements explicit
- Improve accuracy: Guide the model toward correct patterns
One-Shot Examples
- Classification
- Data Extraction
- Format Conversion
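As one illustration from this list, a one-shot data extraction prompt might look like the following (the field names and format are just one choice):

```text
Extract the name, date, and amount from the text as JSON.

Text: "Invoice from Acme Corp dated March 3, 2024 for $1,250.00."
Output: {"name": "Acme Corp", "date": "2024-03-03", "amount": 1250.00}

Text: "Payment of $89.99 received from Jane Smith on July 12, 2024."
Output:
```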
Few-Shot Learning
What It Is
Few-shot learning provides multiple examples (typically 2-5) to establish a clear pattern. This is the most powerful form of in-context learning for complex or ambiguous tasks.

Example: Grammar Correction
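Extending the grammar task to a few-shot prompt might look like this sketch:

```text
Correct the grammar in each sentence.

Sentence: "She don't like apples."
Corrected: "She doesn't like apples."

Sentence: "The books is on the table."
Corrected: "The books are on the table."

Sentence: "We was late because of traffics."
Corrected: "We were late because of traffic."

Sentence: "Him and me was going to the store yesterday."
Corrected:
```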
Pattern Recognition in Action
Few-shot learning excels at teaching patterns, as in the cases below:

- Simple Translation Pattern
- Mathematical Reasoning
- Complex Classification
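For the translation case, a few example pairs are often all it takes to establish the pattern; an illustrative sketch:

```text
sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>
```

A similar pattern works for step-by-step arithmetic, where the examples demonstrate showing the working:

```text
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many does he have now?
A: 5 + 2 * 3 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. How many do they have now?
A:
```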
Choosing the Right Approach
1. Start with Zero-Shot: Try the simplest approach first—it often works!
2. Add One Example if Needed: If output format is unclear or results are inconsistent
3. Use Few-Shot for Complex Tasks: When patterns are subtle or requirements are ambiguous
4. Balance Examples vs. Context: More isn’t always better—quality over quantity
Decision Matrix
| Task Complexity | Model Strength | Instruction Clarity | Recommended Approach |
|---|---|---|---|
| Simple | Strong | Clear | Zero-shot |
| Simple | Weak | Clear | One-shot |
| Moderate | Strong | Unclear | One-shot |
| Moderate | Weak | Unclear | Few-shot |
| Complex | Strong | Clear | Few-shot |
| Complex | Any | Unclear | Few-shot |
The Science Behind It
How does this work without training?

During pre-training, LLMs learn broad patterns across massive datasets. In-context learning doesn’t teach new knowledge—it activates existing patterns by showing the model which “pathway” to follow. Examples serve as routing signals, guiding the model toward the right type of response.
Key Research Findings
- More examples generally help (up to a point—typically 5-10 examples)
- Example quality matters more than quantity
- Example diversity improves generalization
- Example order can affect results
- Larger models benefit more from few-shot learning
Best Practices
- Diverse Examples: Cover different aspects of the task
- Clear Patterns: Make the relationship between input and output obvious
- Consistent Format: Use the same structure for all examples
- Representative Cases: Include typical scenarios, not edge cases
Common Pitfalls
Avoid these mistakes:
- Contradictory examples: Examples that suggest different patterns
- Too many examples: Overwhelming the context window
- Biased examples: All examples from one category or type
- Unclear formatting: Inconsistent structure between examples
- Irrelevant examples: Demonstrations that don’t match the task
Practice Exercises
Exercise 1: Build a Few-Shot Classifier
Create a few-shot prompt to classify programming questions into:
- Syntax Error
- Logic Error
- Conceptual Question
- Best Practice
Sample Solution
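One possible solution, assuming short question snippets as input (the specific examples are yours to vary):

```text
Classify each programming question as Syntax Error, Logic Error, Conceptual Question, or Best Practice.

Question: "Why do I get 'SyntaxError: unexpected EOF' on the last line of my script?"
Category: Syntax Error

Question: "My loop runs, but it sums every number twice. What am I doing wrong?"
Category: Logic Error

Question: "What is the difference between a process and a thread?"
Category: Conceptual Question

Question: "Should I catch exceptions inside the function or let the caller handle them?"
Category: Best Practice

Question: "My sorting function returns the list unchanged even though it runs without errors."
Category:
```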
Exercise 2: Zero-Shot vs Few-Shot Comparison
Take this task and create both zero-shot and few-shot versions:

Task: Convert technical jargon to plain English

Compare the outputs and note differences.
Sample Solutions
One way each version might look, kept deliberately short:
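Zero-Shot:

```text
Rewrite the following sentence in plain English that a non-technical reader can understand.

"We need to refactor the monolith into microservices to reduce coupling."
```

Few-Shot:

```text
Rewrite each sentence in plain English for a non-technical reader.

Technical: "The API returned a 404 because the endpoint was deprecated."
Plain: "The system couldn't find the information because that feature was retired."

Technical: "We cache responses at the edge to cut latency."
Plain: "We store copies of answers closer to users so pages load faster."

Technical: "We need to refactor the monolith into microservices to reduce coupling."
Plain:
```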
Exercise 3: Progressive Example Building
Start with zero-shot, then add examples one at a time for this task:

Task: Extract meeting action items from notes

Test after each addition and observe improvements.
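A possible zero-shot starting point, before any examples are added (the notes shown are invented for illustration):

```text
Extract the action items from the meeting notes below. List each one as a bullet with an owner and a due date if mentioned.

Notes: "Dana will send the revised budget by Friday. We agreed the launch moves to Q3. Sam to book the venue."
```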
Key Takeaways
1. In-Context Learning is Powerful: Teach new tasks through examples without any training
2. Three Modes, Different Uses: Zero-shot for simple tasks, few-shot for complex ones
3. Quality Over Quantity: Well-chosen examples matter more than many examples
4. Experiment and Iterate: Start simple, add examples as needed
Next Steps
You’ve learned how to teach models through examples. Next, you’ll discover the core principles that make any prompt more effective.

Continue to Lesson 1.4: Core Prompting Principles
Master the four fundamental principles of effective prompting