Duration: 60 minutes

Introduction

Here’s something remarkable: LLMs can learn new tasks just by seeing examples in the prompt—no training required. This “learning during inference” is called in-context learning, and it’s one of the most powerful features of modern LLMs.
The Big Idea: You can teach an AI system a new task simply by showing it examples within your prompt. No model updates, no training data, no technical expertise required.

The Three Learning Modes

In-context learning comes in three flavors, each with different use cases and effectiveness:
  • Zero-Shot: No examples provided; pure instruction following. Best for well-defined tasks, strong models, and clear instructions.
  • One-Shot: A single example demonstrates the desired pattern or format.
  • Few-Shot: Multiple examples (typically 2-5) establish a clear pattern for complex or ambiguous tasks.

Zero-Shot Learning

What It Is

Zero-shot learning means giving the model instructions without any examples. You rely entirely on the model’s pre-trained knowledge and ability to follow directions.

Example: Grammar Correction

SYSTEM: You are a helpful assistant, great at grammar correction.

USER: You will be provided with a sentence in English. 
      Output the correct sentence.

Input: She don't like going to the park.
Output:
Model Output:
She doesn't like going to the park.
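A zero-shot prompt like the one above can be assembled programmatically. A minimal sketch in Python, assuming a chat-style message list; `build_zero_shot` is a hypothetical helper, and the actual API call is omitted:

```python
# Build a zero-shot prompt as chat messages: instruction only, no examples.
def build_zero_shot(system: str, task: str, text: str) -> list:
    user = f"{task}\n\nInput: {text}\nOutput:"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_zero_shot(
    "You are a helpful assistant, great at grammar correction.",
    "You will be provided with a sentence in English. Output the correct sentence.",
    "She don't like going to the park.",
)
```

The returned list can be passed to any chat-style completion endpoint; only two messages are needed because no demonstrations are included.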

When Zero-Shot Works Best

Strong Models

Modern LLMs like GPT-4 or Claude excel at zero-shot tasks

Common Tasks

Translation, summarization, basic Q&A

Clear Instructions

When you can describe exactly what you want

Standard Formats

Well-known output structures

Zero-Shot Examples

Translate the following English text to French:

Text: "The weather is beautiful today."

Translation:
Output: “Le temps est magnifique aujourd’hui.”
Analyze the sentiment of this review and classify it as 
Positive, Negative, or Neutral.

Review: "The product works well but shipping took forever."

Sentiment:
Output: Neutral
Write a Python function that calculates the factorial of a number.
Include error handling for negative numbers.

Function:
Output:
def factorial(n):
    if n < 0:
        raise ValueError("Factorial not defined for negative numbers")
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)

One-Shot Learning

What It Is

One-shot learning provides a single example to demonstrate the desired pattern or format. This helps clarify ambiguous instructions and shows the model exactly what you want.

Example: Grammar Correction

SYSTEM: You are a helpful assistant, great at grammar correction.

DEMO: 
Input: There is many reasons to celebrate.
Output: There are many reasons to celebrate.

USER: 
Input: She don't like going to the park.
Output:
Model Output:
She doesn't like going to the park.
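In a chat-style API, the demonstration can be supplied as a prior user/assistant turn rather than inline text. A sketch (the helper name is my own):

```python
# One-shot: a single demo turn pair precedes the real query.
def build_one_shot(system, demo_input, demo_output, query):
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Input: {demo_input}\nOutput:"},
        {"role": "assistant", "content": demo_output},  # the worked example
        {"role": "user", "content": f"Input: {query}\nOutput:"},
    ]

messages = build_one_shot(
    "You are a helpful assistant, great at grammar correction.",
    "There is many reasons to celebrate.",
    "There are many reasons to celebrate.",
    "She don't like going to the park.",
)
```

Presenting the demo as an assistant turn shows the model exactly what a correct reply looks like in its own voice.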

The Power of One Example

A single example can:
  • Clarify format: Show exactly how output should look
  • Demonstrate style: Establish tone and structure
  • Reduce ambiguity: Make implicit requirements explicit
  • Improve accuracy: Guide the model toward correct patterns

One-Shot Examples

One-shot prompting is useful for classification, data extraction, and format conversion. A classification example:

Classify customer inquiries into categories.

Example:
Inquiry: "I can't log into my account"
Category: Account Access

Inquiry: "When will my order arrive?"
Category:
Output: Shipping

Few-Shot Learning

What It Is

Few-shot learning provides multiple examples (typically 2-5) to establish a clear pattern. This is the most powerful form of in-context learning for complex or ambiguous tasks.

Example: Grammar Correction

SYSTEM: You are a helpful assistant, great at grammar correction.

DEMO 1:
Input: There is many reasons to celebrate.
Output: There are many reasons to celebrate.

DEMO 2:
Input: Me and my friend goes to the gym.
Output: My friend and I go to the gym.

DEMO 3:
Input: The team are playing good today.
Output: The team is playing well today.

USER:
Input: She don't like going to the park.
Output:
Model Output:
She doesn't like going to the park.
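With several demos, building the prompt from a list keeps every example consistently formatted. A sketch; the function name is illustrative:

```python
def build_few_shot(system, demos, query):
    """Assemble a few-shot prompt: system instruction, demo pairs, then the query."""
    parts = [system, ""]
    for i, (src, tgt) in enumerate(demos, 1):
        parts += [f"DEMO {i}:", f"Input: {src}", f"Output: {tgt}", ""]
    parts += ["USER:", f"Input: {query}", "Output:"]
    return "\n".join(parts)

prompt = build_few_shot(
    "You are a helpful assistant, great at grammar correction.",
    [
        ("There is many reasons to celebrate.", "There are many reasons to celebrate."),
        ("Me and my friend goes to the gym.", "My friend and I go to the gym."),
    ],
    "She don't like going to the park.",
)
```

Because every demo flows through the same f-strings, the formatting cannot drift between examples, which addresses one of the pitfalls discussed later in this lesson.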

Pattern Recognition in Action

Few-shot learning excels at teaching patterns:

Simple Translation Pattern

狗 → dog
猫 → cat
鸟 → bird
马 →
Output: horse

Mathematical Reasoning

12 5 → (12 + 5)/(12 × 5) = 0.283
3 1 → (3 + 1)/(3 × 1) = 1.33
19 73 →
Output: (19 + 73)/(19 × 73) = 0.066
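The pattern the model must infer here is (a + b)/(a × b). We can check the demo arithmetic directly, rounding to three decimal places (the lesson's second demo happens to round to two):

```python
def pattern(a, b):
    # The implicit rule in the demos: sum divided by product.
    return round((a + b) / (a * b), 3)

print(pattern(12, 5))   # 0.283
print(pattern(19, 73))  # 0.066
```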

Complex Classification

Text: "This movie was absolutely fantastic! Best film I've seen all year."
Sentiment: Positive
Confidence: High
Reasoning: Strong positive language ("fantastic", "best")

Text: "The service was okay, nothing special."
Sentiment: Neutral
Confidence: Medium
Reasoning: Lukewarm language ("okay", "nothing special")

Text: "I'm disappointed with the quality. Expected much better."
Sentiment: Negative
Confidence: High
Reasoning: Clear negative indicators ("disappointed", "expected better")

Text: "The product works fine but shipping was slow."
Sentiment:
Output:
Sentiment: Neutral
Confidence: Medium
Reasoning: Mixed feedback (positive product, negative shipping)

Choosing the Right Approach

  1. Start with Zero-Shot: Try the simplest approach first—it often works!
  2. Add One Example if Needed: If output format is unclear or results are inconsistent.
  3. Use Few-Shot for Complex Tasks: When patterns are subtle or requirements are ambiguous.
  4. Balance Examples vs. Context: More isn't always better—quality over quantity.

Decision Matrix

Task Complexity | Model Strength | Instruction Clarity | Recommended Approach
Simple          | Strong         | Clear               | Zero-shot
Simple          | Weak           | Clear               | One-shot
Moderate        | Strong         | Unclear             | One-shot
Moderate        | Weak           | Unclear             | Few-shot
Complex         | Strong         | Clear               | Few-shot
Complex         | Any            | Unclear             | Few-shot
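The matrix reads naturally as a small lookup function. A sketch in Python; combinations the table doesn't list fall back to the nearest listed row, which is my own extrapolation:

```python
def recommend(complexity, strength, clarity):
    """Recommend a prompting approach per the decision matrix above."""
    if complexity == "complex":
        return "few-shot"  # complex tasks always get examples
    if complexity == "moderate":
        return "few-shot" if strength == "weak" else "one-shot"
    # Simple tasks: zero-shot only when the model is strong and the ask is clear.
    if strength == "strong" and clarity == "clear":
        return "zero-shot"
    return "one-shot"
```

Encoding the table this way also makes the gaps visible: for instance, "simple + unclear" is not in the matrix, so the function's answer there is a guess.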

The Science Behind It

How does this work without training? During pre-training, LLMs learn broad patterns across massive datasets. In-context learning doesn't teach new knowledge—it activates existing patterns by showing the model which "pathway" to follow. Examples serve as routing signals, guiding the model toward the right type of response.

Key Research Findings

  1. More examples generally help (up to a point—typically 5-10 examples)
  2. Example quality matters more than quantity
  3. Example diversity improves generalization
  4. Example order can affect results
  5. Larger models benefit more from few-shot learning

Best Practices

Diverse Examples

Cover different aspects of the task

Clear Patterns

Make the relationship between input and output obvious

Consistent Format

Use the same structure for all examples

Representative Cases

Include typical scenarios, not edge cases

Common Pitfalls

Avoid these mistakes:
  1. Contradictory examples: Examples that suggest different patterns
  2. Too many examples: Overwhelming the context window
  3. Biased examples: All examples from one category or type
  4. Unclear formatting: Inconsistent structure between examples
  5. Irrelevant examples: Demonstrations that don’t match the task

Practice Exercises

Create a few-shot prompt to classify programming questions into:
  • Syntax Error
  • Logic Error
  • Conceptual Question
  • Best Practice
Include 3-4 diverse examples.
Classify programming questions into categories.

Example 1:
Question: "Why am I getting 'undefined is not a function'?"
Category: Syntax Error

Example 2:
Question: "My loop runs but gives wrong results"
Category: Logic Error

Example 3:
Question: "What's the difference between let and const?"
Category: Conceptual Question

Example 4:
Question: "Should I use async/await or promises?"
Category: Best Practice

Question: "How do I fix 'cannot read property of null'?"
Category:
Take this task and create both zero-shot and few-shot versions:

Task: Convert technical jargon to plain English

Compare the outputs and note differences.
Zero-Shot:
Simplify the following technical term for a general audience:

Term: "API endpoint"
Simplified:
Few-Shot:
Simplify technical terms for a general audience.

Term: "Cache"
Simplified: Temporary storage that helps things load faster

Term: "Bandwidth"
Simplified: The amount of data that can be sent at once

Term: "API endpoint"
Simplified:
Start with zero-shot, then add examples one at a time for this task:

Task: Extract meeting action items from notes

Test after each addition and observe improvements.

Key Takeaways

  1. In-Context Learning is Powerful: Teach new tasks through examples without any training.
  2. Three Modes, Different Uses: Zero-shot for simple tasks, few-shot for complex ones.
  3. Quality Over Quantity: Well-chosen examples matter more than many examples.
  4. Experiment and Iterate: Start simple, add examples as needed.

Next Steps

You’ve learned how to teach models through examples. Next, you’ll discover the core principles that make any prompt more effective.

Continue to Lesson 1.4: Core Prompting Principles

Master the four fundamental principles of effective prompting