Duration: 30 minutes

Introduction

Imagine you’re giving directions to someone who’s incredibly knowledgeable but needs clear instructions. That’s essentially what prompting is—giving an AI system the right cues to produce exactly what you need. In this lesson, you’ll discover how a simple text input can unlock powerful AI capabilities.

The Prompt-Response Paradigm

At its core, prompting is about conditional inputs and outputs:
  1. You provide context (x): the prompt contains your instructions, examples, and any relevant information.
  2. The LLM generates output (y): the model processes your input and produces a response.
  3. The model maximizes Pr(y|x): mathematically, the LLM finds the most probable output given your input.
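
For readers who want the notation spelled out, here is the standard autoregressive formulation most LLMs use; this factorization is general background rather than something derived in this lesson:

$$
y^{*} = \arg\max_{y} \Pr(y \mid x) = \arg\max_{y} \prod_{t=1}^{T} \Pr(y_t \mid x, y_1, \dots, y_{t-1})
$$

In words: the model generates the output one token at a time, and each token's probability is conditioned on your prompt plus everything generated so far.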

Why This Matters

Traditional machine learning required:
  • Large labeled datasets
  • Model training or fine-tuning
  • Technical expertise
  • Significant time and resources
Prompting changed everything:
  • ✅ No training required
  • ✅ Immediate adaptation to new tasks
  • ✅ Accessible to non-technical users
  • ✅ Flexible and iterative

Real-World Context

When ChatGPT launched in November 2022, it democratized AI by making prompting the primary interface. Instead of training models, users learned to communicate effectively. This shift transformed AI from a specialist tool to a universal assistant.

Practical Examples

Example 1: Basic Translation

Translate the text from English to Chinese.

Text: The early bird catches the worm.

Translation:
Output:
早起的鸟儿有虫吃。
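
The same prompt can also be sent programmatically. Below is a minimal sketch using the OpenAI Python SDK; the model name is a placeholder and the API-key setup is an assumption, and any chat-completion API would work the same way:

# Minimal sketch: sending the translation prompt through an LLM API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable; the model name is an example.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Translate the text from English to Chinese.\n\n"
    "Text: The early bird catches the worm.\n\n"
    "Translation:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)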

Example 2: Sentiment Analysis

Here are some examples of text classification.

Example 1: We had a delightful dinner together. → Label: Positive
Example 2: I'm frustrated with the delays. → Label: Negative

What is the label for "That comment was quite hurtful."?

Label:
Output:
Negative

Example 3: Creative Writing

Write a short story about a robot learning to paint.
The story should be exactly 3 sentences long and have a hopeful tone.
Output:
Unit-7 had spent years calculating trajectories and optimizing systems, 
but when it first touched brush to canvas, something unexpected happened—
colors flowed in patterns its algorithms couldn't predict. Each stroke 
taught the robot that beauty existed beyond precision, in the space where 
calculation met chance. As its first painting dried, Unit-7 understood 
that learning to create meant learning to embrace imperfection.

Key Concepts

1. Prompts as Instructions

Think of prompts as recipes for AI:
  • Ingredients: The information you provide
  • Method: How you structure the request
  • Result: The output you receive

2. The Probability Perspective

LLMs don’t “understand” in the human sense. Instead, they:
  • Calculate probabilities for possible next tokens
  • Select high-probability sequences
  • Generate text that statistically fits the pattern
This is why the same question phrased differently can produce different results—you’re changing the probability distribution!
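
To see the "probability distribution over next tokens" idea concretely, here is a toy sketch in Python; the scores are invented for illustration, and a real vocabulary is vastly larger:

import math
import random

# Invented next-token scores for the prompt "The early bird catches the ..."
# (a real model produces one logit per vocabulary entry, tens of thousands)
logits = {"worm": 7.2, "bus": 2.1, "ball": 1.5, "train": 0.9}

# Softmax converts raw scores into a probability distribution
total = sum(math.exp(v) for v in logits.values())
probs = {token: math.exp(v) / total for token, v in logits.items()}

# Greedy decoding always picks the single most probable token...
greedy = max(probs, key=probs.get)

# ...while sampling draws from the distribution, so outputs can vary
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print(probs)    # {'worm': 0.989, 'bus': 0.006, 'ball': 0.003, 'train': 0.002}
print(greedy)   # worm
print(sampled)  # usually worm, occasionally another token

Rephrasing a prompt effectively changes these scores, which is why different wordings of the same question can yield different answers.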

3. In-Context Learning

One of the most powerful aspects of prompting:
# Without examples (Zero-shot)
Classify this review: "The product broke after one week."
Category:

# With examples (Few-shot)
Review: "Amazing quality, works perfectly!" → Category: Positive
Review: "Terrible experience, very disappointed." → Category: Negative
Review: "The product broke after one week." → Category:
The second approach typically produces more accurate results because you’ve shown the model the pattern you want.
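
In practice, few-shot prompts are often assembled from labeled data rather than written by hand. The helper below is a hypothetical sketch, not an API from any library:

# Hypothetical helper: format labeled examples followed by the unlabeled query
def build_few_shot_prompt(examples, query):
    lines = [f'Review: "{text}" → Category: {label}' for text, label in examples]
    lines.append(f'Review: "{query}" → Category:')
    return "\n".join(lines)

examples = [
    ("Amazing quality, works perfectly!", "Positive"),
    ("Terrible experience, very disappointed.", "Negative"),
]

print(build_few_shot_prompt(examples, "The product broke after one week."))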

Why Prompting Revolutionized AI

  • Accessibility: Anyone can use AI through natural language—no coding required
  • Flexibility: The same model adapts immediately to new tasks, with no retraining
  • Iteration: Prompts can be refined quickly through experimentation
  • Creativity: Open-ended instructions support generation tasks, from stories to emails

Common Misconceptions

Misconception: "LLMs understand language the way humans do."
Reality: LLMs recognize patterns in training data and generate statistically likely responses. They don’t have understanding or consciousness.

Misconception: "There is one correct way to write a prompt."
Reality: Effective prompting is context-dependent. Different tasks and models may require different approaches.

Misconception: "Longer prompts always work better."
Reality: Clarity and relevance matter more than length. Concise, well-structured prompts often outperform verbose ones.

Check Your Understanding

Question: What does it mean that LLMs maximize Pr(y|x)?

Answer: It means the model calculates the probability of different outputs (y) given your input (x), and selects the most probable response. The model is essentially predicting “what text is most likely to follow this prompt based on patterns in my training data.”

Practice Exercise

Try creating prompts for these scenarios:
  1. Translation Task: Translate a sentence from English to Spanish, maintaining formal tone
  2. Classification Task: Categorize customer feedback as “Bug Report,” “Feature Request,” or “General Question”
  3. Generation Task: Write a professional email declining a meeting invitation
For each task, think about:
  • What information does the model need?
  • How can I make my instruction clear?
  • Would examples help clarify what I want?
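
If you get stuck, here is one possible starting point for the classification task; the categories come from the exercise above, while the feedback lines are invented examples:

Classify the customer feedback into exactly one category:
"Bug Report", "Feature Request", or "General Question".

Feedback: "The app crashes every time I open the settings page." → Category: Bug Report
Feedback: "Please add a dark mode to the dashboard." → Category: Feature Request

Feedback: "How do I change my account email?" → Category: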

Key Takeaways

Prompts Guide Behavior

Well-crafted prompts direct LLMs toward accurate, relevant responses

No Training Needed

Prompting enables in-context learning without model updates

Probability-Based

LLMs generate statistically likely text based on input patterns

Iterative Process

Effective prompting involves experimentation and refinement

Next Steps

Now that you understand what prompting is and why it matters, you’re ready to learn how to construct effective prompts systematically.

Continue to Lesson 1.2: Anatomy of a Prompt

Learn the components and structure of effective prompts