
Best Practices

Essential guidelines for crafting effective prompts across all scenarios

General Prompting Principles

1. Be Clear and Specific

❌ Vague

“Write something about AI”

✅ Specific

“Write a 200-word explanation of how transformers work in AI, suitable for beginners”
Guidelines:
  • Define the exact task
  • Specify output format and length
  • Clarify the target audience
  • Include relevant constraints

2. Provide Context

❌ Without Context

“How should I respond?” (the model doesn’t know what “respond” refers to)

✅ With Context

“A customer emailed to complain that their order arrived two days late. How should I respond so that the reply apologizes and offers a resolution?”
Guidelines:
  • Include relevant background information
  • Provide examples when helpful
  • Specify the domain or industry
  • Clarify any ambiguous terms

3. Use Examples (Few-Shot Learning)

When to use:
  • Task is ambiguous or complex
  • Specific format is required
  • Quality standards need demonstration
How many examples:
  • Simple tasks: 1-2 examples
  • Complex tasks: 3-5 examples
  • Avoid: 10+ examples (diminishing returns)
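The few-shot pattern above can be sketched as a small prompt builder. The `build_few_shot_prompt` helper and its Input/Output layout are illustrative assumptions, not a specific library API:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # End with the unanswered query so the model completes the pattern.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly!", "positive"),
     ("Broke after two days.", "negative")],
    "Shipping was fast and the quality is excellent.",
)
```

Two examples are enough here because the task is simple; a more ambiguous task would warrant 3-5.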

4. Structure Your Prompts

Recommended structure:
[Role/Context]
[Task Description]
[Specific Instructions]
[Examples (if needed)]
[Output Format]
[Constraints]
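The recommended structure can be assembled programmatically; this is a minimal sketch, and the `build_structured_prompt` helper is hypothetical:

```python
def build_structured_prompt(role=None, task=None, instructions=None,
                            examples=None, output_format=None, constraints=None):
    """Join the recommended sections in order, skipping any that are empty."""
    sections = [role, task, instructions, examples, output_format, constraints]
    return "\n\n".join(section for section in sections if section)

prompt = build_structured_prompt(
    role="You are a technical writer.",
    task="Explain how HTTP caching works.",
    output_format="Respond in 3 short paragraphs.",
    constraints="Avoid jargon; target beginners.",
)
```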

Task-Specific Best Practices

Classification

  • ✅ Explicitly list all possible categories
  • ✅ Provide 2-3 examples per category
  • ✅ Specify output format: “Respond with exactly one of: A, B, C”
  • ✅ Handle edge cases: “If uncertain, respond with ‘Unclear’”
  • ❌ Use vague categories like “good” or “bad”
  • ❌ Assume the model knows your classification scheme
  • ❌ Forget to handle ambiguous cases
  • ❌ Use too many categories (>10 without hierarchy)
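The classification checklist above can be sketched as a prompt plus a strict parser that handles the uncertain case. The category names, `classification_prompt`, and `parse_category` are illustrative assumptions:

```python
CATEGORIES = ["billing", "technical", "account", "Unclear"]

def classification_prompt(text):
    """Explicitly list the categories and the fallback for uncertain cases."""
    return (
        "Classify the support ticket into exactly one category: "
        "billing, technical, or account. If uncertain, respond with 'Unclear'.\n\n"
        f"Ticket: {text}\n"
        "Category:"
    )

def parse_category(model_output):
    """Validate the model's answer; fall back to 'Unclear' on anything unexpected."""
    answer = model_output.strip().strip("'\".")
    return answer if answer in CATEGORIES else "Unclear"
```

Validating the output in code means a rambling or off-list answer degrades gracefully instead of corrupting downstream logic.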

Information Extraction

  • ✅ Specify exact fields to extract
  • ✅ Provide output format (JSON, table, list)
  • ✅ Handle missing information: “If not found, use ‘N/A’”
  • ✅ Use progressive extraction for complex tasks
  • ❌ Ask for extraction without specifying format
  • ❌ Assume the model will infer what you need
  • ❌ Forget to handle incomplete data
  • ❌ Extract too many fields at once (break into steps)
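For extraction, the same idea applies: specify the fields and format in the prompt, then fill gaps in code. The field names and helper functions below are illustrative assumptions:

```python
import json

FIELDS = ["name", "date", "amount", "status"]

def extraction_prompt(document):
    """Request JSON with exact keys and an explicit rule for missing data."""
    return (
        "Extract the following fields from the document and return JSON only, "
        f"with exactly these keys: {', '.join(FIELDS)}. "
        "If a field is not found, use 'N/A'.\n\n"
        f"Document:\n{document}"
    )

def parse_extraction(model_output):
    """Parse the JSON reply and fill any missing fields with 'N/A'."""
    data = json.loads(model_output)
    return {field: data.get(field, "N/A") for field in FIELDS}
```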

Content Generation

  • ✅ Specify tone, style, and audience
  • ✅ Set clear length constraints
  • ✅ Provide attribute specifications
  • ✅ Include examples of desired style
  • ✅ Request specific elements (CTA, headers, etc.)
  • ❌ Use vague instructions like “make it good”
  • ❌ Forget to specify length
  • ❌ Assume the model knows your brand voice
  • ❌ Skip audience definition

Text Transformation

  • ✅ Preserve core meaning and facts
  • ✅ Specify target format/style explicitly
  • ✅ Provide before/after examples
  • ✅ Set clear transformation goals
  • ❌ Sacrifice accuracy for style
  • ❌ Use vague transformation requests
  • ❌ Forget to verify factual consistency
  • ❌ Transform without clear purpose

Question Answering

  • ✅ Provide relevant context when available
  • ✅ Request step-by-step reasoning for complex questions
  • ✅ Ask for source citations
  • ✅ Instruct to admit uncertainty when appropriate
  • ❌ Expect answers without sufficient context
  • ❌ Allow hallucination of facts
  • ❌ Skip verification for critical information
  • ❌ Forget to handle “I don’t know” cases

Advanced Techniques Best Practices

Chain of Thought (CoT)

When to use:
  • Multi-step reasoning required
  • Math or logic problems
  • Counter-intuitive questions
  • Verification needed
Best practices:
✅ Use "Let's think step-by-step"
✅ Number or label each step
✅ Show all intermediate calculations
✅ Include verification step
✅ State final answer clearly

❌ Skip steps
❌ Use for simple factual questions
❌ Forget to verify
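The CoT practices above can be wrapped in a small scaffold: request numbered steps and a clearly marked final answer, then extract that answer in code. The `Final answer:` marker is an illustrative convention, not a standard:

```python
def cot_prompt(question):
    """Wrap a question with step-by-step, verification, and final-answer instructions."""
    return (
        f"{question}\n\n"
        "Let's think step-by-step. Number each step, show intermediate "
        "calculations, verify the result, then state the final answer on "
        "a line starting with 'Final answer:'."
    )

def extract_final_answer(model_output):
    """Pull the answer from the last 'Final answer:' line, if present."""
    for line in reversed(model_output.splitlines()):
        if line.lower().startswith("final answer:"):
            return line.split(":", 1)[1].strip()
    return None
```

Pinning the answer to a fixed marker makes the reasoning easy to log while keeping the result machine-readable.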

Problem Decomposition

When to use:
  • Hierarchical problems
  • Multi-domain challenges
  • Sequential dependencies
  • Overwhelming complexity
Best practices:
✅ Start with simplest sub-problem
✅ Solve sequentially
✅ Use previous solutions
✅ Verify each sub-solution
✅ Synthesize clearly

❌ Decompose unnecessarily
❌ Solve out of order
❌ Skip synthesis step
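The decomposition loop can be sketched as follows. `call_model` stands in for whatever LLM client you use (an assumption, not a specific API); earlier solutions are fed into each later prompt, and a final call synthesizes:

```python
def solve_by_decomposition(sub_problems, call_model):
    """Solve sub-problems in order, feeding each earlier answer into the next prompt."""
    solutions = []
    for i, sub in enumerate(sub_problems, start=1):
        context = "\n".join(
            f"Sub-problem {j}: {s}" for j, s in enumerate(solutions, start=1)
        )
        prompt = f"Sub-problem {i}: {sub}\nAnswer:"
        if context:
            prompt = f"{context}\n\n{prompt}"
        solutions.append(call_model(prompt))
    # Synthesis step: never skip combining the partial answers.
    final = call_model(
        "Combine these partial answers into one final solution:\n"
        + "\n".join(solutions)
    )
    return final, solutions
```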

Self-Refinement

When to use:
  • Quality-critical outputs
  • Complex creative tasks
  • Technical accuracy required
  • Ambiguous requirements
Best practices:
✅ Generate → Critique → Refine
✅ Be specific in critiques
✅ Iterate 2-3 times for important content
✅ Focus on concrete improvements
✅ Track what changed and why

❌ Vague critiques ("could be better")
❌ Stop at first draft
❌ Refine without clear criteria
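The Generate → Critique → Refine loop can be sketched in a few lines; again `call_model` is a stand-in for your LLM client, and the prompt wording is illustrative:

```python
def refine(task, call_model, iterations=2):
    """Generate a draft, then alternate specific critique and revision passes."""
    draft = call_model(f"Task: {task}\n\nWrite a first draft.")
    for _ in range(iterations):
        critique = call_model(
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            "List specific, concrete improvements (not vague praise)."
        )
        draft = call_model(
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, applying each point in the critique."
        )
    return draft
```

Two iterations cost five model calls (one draft plus two critique/rewrite pairs), which is usually the right trade-off for important content.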

RAG (Retrieval-Augmented Generation)

When to use:
  • Knowledge-intensive tasks
  • Dynamic information needed
  • Private/proprietary data
  • Citation requirements
Best practices:
✅ Ground answers in retrieved context
✅ Cite sources explicitly
✅ Admit when information is missing
✅ Never infer beyond context
✅ Verify source reliability

❌ Allow hallucinations
❌ Skip source attribution
❌ Infer missing information
❌ Use unreliable sources
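A grounded RAG prompt can be built by numbering the retrieved passages and stating the citation and refusal rules explicitly. The wording below is one illustrative phrasing:

```python
def rag_prompt(question, passages):
    """Build a grounded prompt: numbered sources, then strict answering rules."""
    sources = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, start=1))
    return (
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n\n"
        "Answer using ONLY the sources above. Cite each claim with its source "
        "number, e.g. [1]. If the sources do not contain the answer, reply "
        "'Not found in the provided sources.' Do not infer beyond the sources."
    )
```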

Common Pitfalls to Avoid

1. Prompt Ambiguity

Problem: “Summarize this”
Issues:
  • How long should the summary be?
  • What format (paragraph, bullets)?
  • What level of detail?
  • What’s the purpose?
Solution: “Summarize this article in 3 bullet points, each 1-2 sentences, focusing on key findings for a technical audience.”

2. Assuming Context

Problem: Using pronouns or references without context
Bad: “How do I fix it?”
Good: “How do I fix the ‘Connection Timeout’ error in my Python script when connecting to the database?”

3. Overloading the Prompt

Problem: Trying to do too much in one prompt
Bad: “Analyze this data, create visualizations, write a report, and suggest improvements”
Good: Break into steps:
  1. First, analyze the data
  2. Then, create visualizations
  3. Then, write the report
  4. Finally, suggest improvements

4. Ignoring Output Format

Problem: Not specifying how you want the response
Bad: “Extract the key information”
Good: “Extract the key information in JSON format with fields: name, date, amount, status”

5. Forgetting Edge Cases

Problem: Not handling unusual inputs
Bad: “Classify as positive or negative”
Good: “Classify as positive, negative, or neutral. If the text is unclear or contains mixed sentiment, respond with ‘mixed’.”

Debugging Strategies

When Outputs Are Wrong

  1. Check Clarity: Is your prompt clear and unambiguous?
  2. Add Examples: Provide 2-3 examples of desired output.
  3. Increase Specificity: Add more constraints and details.
  4. Break It Down: Split complex tasks into simpler steps.
  5. Use CoT: Request step-by-step reasoning.

When Outputs Are Inconsistent

  1. Constrain Format: Specify the exact output format.
  2. Add Structure: Use templates or schemas.
  3. Provide More Examples: Show consistent patterns.
  4. Lower Temperature: Reduce randomness (if you control this parameter).

When Outputs Are Too Generic

  1. Add Specificity: Include more details and constraints.
  2. Provide Context: Give relevant background information.
  3. Show Examples: Demonstrate the level of detail needed.
  4. Request Specific Elements: Ask for particular details, data, or examples.

Performance Optimization

Token Efficiency

Strategies:
  • Use concise language without sacrificing clarity
  • Remove redundant instructions
  • Combine related constraints
  • Use abbreviations consistently (after defining them)
Example:
❌ Inefficient (150 tokens):
"Please analyze the following text and provide a detailed summary. 
The summary should be comprehensive but also concise. Make sure to 
include all the key points. The summary should be easy to understand..."

✅ Efficient (50 tokens):
"Summarize this text in 100 words, covering all key points in clear, 
accessible language."

Prompt Reusability

Create templates for common tasks:
# Customer Support Template
Context: [CUSTOMER ISSUE]
History: [PREVIOUS INTERACTIONS]
Policy: [RELEVANT POLICIES]

Task: Draft a response that:
- Acknowledges the issue
- Provides solution
- Maintains professional tone
- Offers next steps

Response:
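A template like this can be filled programmatically; the sketch below assumes plain Python string formatting with hypothetical placeholder names:

```python
# Reusable customer-support template with named placeholders.
TEMPLATE = """Context: {issue}
History: {history}
Policy: {policy}

Task: Draft a response that:
- Acknowledges the issue
- Provides a solution
- Maintains a professional tone
- Offers next steps

Response:"""

def fill_template(issue, history, policy):
    """Substitute the case-specific details into the shared template."""
    return TEMPLATE.format(issue=issue, history=history, policy=policy)
```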

Quality Checklist

Before finalizing a prompt, verify:
  • Clear task definition
  • Sufficient context provided
  • Output format specified
  • Examples included (if needed)
  • Edge cases handled
  • Constraints clearly stated
  • Verification method included (for critical tasks)
  • No ambiguous language

Quick Reference Card

Scenario | Technique | Key Tip
Simple task | Zero-shot | Be clear and specific
Complex task | Few-shot | Provide 3-5 examples
Multi-step | Chain of Thought | “Let’s think step-by-step”
Hierarchical | Decomposition | Start simple, build up
Quality-critical | Self-refinement | Generate → Critique → Refine
High-stakes | Ensembling | Multiple approaches + voting
Knowledge-intensive | RAG | Ground in sources, cite
Action-required | Tool integration | Clear tool selection + error handling
