Best Practices
Essential guidelines for crafting effective prompts across all scenarios
General Prompting Principles
1. Be Clear and Specific
❌ Vague: “Write something about AI”
✅ Specific: “Write a 200-word explanation of how transformers work in AI, suitable for beginners”
- Define the exact task
- Specify output format and length
- Clarify the target audience
- Include relevant constraints
2. Provide Context
- Include relevant background information
- Provide examples when helpful
- Specify the domain or industry
- Clarify any ambiguous terms
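A minimal sketch of the same request without and with context; the scenario and wording are illustrative, not a prescribed template:

```python
# The same question, with and without context: the contextual version
# names the domain, the audience, and the constraints that matter.
without_context = "How should I handle errors?"

with_context = (
    "I maintain a Python Flask REST API used by mobile clients.\n"
    "How should I handle errors so that clients always receive consistent "
    "JSON error responses with appropriate HTTP status codes?"
)

print(without_context)
print(with_context)
```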
3. Use Examples (Few-Shot Learning)
When to use:
- Task is ambiguous or complex
- Specific format is required
- Quality standards need demonstration
How many examples:
- Simple tasks: 1-2 examples
- Complex tasks: 3-5 examples
- Avoid: 10+ examples (diminishing returns)
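A minimal few-shot sketch in Python; the example reviews, labels, and the helper function are illustrative, and the prompt is simply printed rather than sent to any particular model:

```python
# Few-shot sentiment classification: a handful of labeled examples
# demonstrate the expected format before the real input.
FEW_SHOT_EXAMPLES = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It stopped working after two weeks.", "negative"),
    ("It does what the box says, nothing more.", "neutral"),
]

def build_few_shot_prompt(text: str) -> str:
    """Assemble a few-shot prompt: instructions, examples, then the new input."""
    lines = ["Classify the sentiment of each review as positive, negative, or neutral.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("Shipping was fast but the manual is useless."))
```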
4. Structure Your Prompts
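Recommended structure: state the role, context, task, output format, and constraints explicitly. A minimal sketch in Python; the template and slot names are illustrative, not a fixed standard:

```python
# A structured prompt template: each slot is filled explicitly so nothing
# is left for the model to guess.
PROMPT_TEMPLATE = """\
Role: {role}
Context: {context}
Task: {task}
Output format: {output_format}
Constraints: {constraints}"""

prompt = PROMPT_TEMPLATE.format(
    role="You are a technical writer for a developer documentation site.",
    context="The audience is beginners who have never used our API.",
    task="Write a quickstart introduction for the REST API.",
    output_format="Three short paragraphs followed by a bulleted checklist.",
    constraints="Under 250 words; no marketing language.",
)
print(prompt)
```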
Task-Specific Best Practices
Classification
Do's
- ✅ Explicitly list all possible categories
- ✅ Provide 2-3 examples per category
- ✅ Specify output format: “Respond with exactly one of: A, B, C”
- ✅ Handle edge cases: “If uncertain, respond with ‘Unclear’”
Don'ts
- ❌ Use vague categories like “good” or “bad”
- ❌ Assume the model knows your classification scheme
- ❌ Forget to handle ambiguous cases
- ❌ Use too many categories (>10 without hierarchy)
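A minimal sketch of a classification prompt that follows these guidelines; the categories and ticket text are illustrative:

```python
# Classification prompt: explicit categories, constrained output,
# and an explicit fallback for ambiguous inputs.
CATEGORIES = ["Billing", "Technical Issue", "Feature Request", "Account Access"]

def build_classification_prompt(ticket: str) -> str:
    category_list = ", ".join(CATEGORIES)
    return (
        f"Classify the support ticket into exactly one of: {category_list}.\n"
        "If the ticket does not clearly fit any category, respond with 'Unclear'.\n"
        "Respond with the category name only.\n\n"
        f"Ticket: {ticket}\n"
        "Category:"
    )

print(build_classification_prompt("I was charged twice for my subscription this month."))
```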
Information Extraction
Do's
- ✅ Specify exact fields to extract
- ✅ Provide output format (JSON, table, list)
- ✅ Handle missing information: “If not found, use ‘N/A’”
- ✅ Use progressive extraction for complex tasks
Don'ts
- ❌ Ask for extraction without specifying format
- ❌ Assume the model will infer what you need
- ❌ Forget to handle incomplete data
- ❌ Extract too many fields at once (break into steps)
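A minimal sketch of an extraction prompt that applies these points; the field names, JSON schema, and sample document are illustrative:

```python
import json

# Extraction prompt: exact fields, explicit JSON output, and a rule
# for missing values so incomplete documents don't break downstream parsing.
FIELDS = ["invoice_number", "date", "vendor", "total_amount", "currency"]

def build_extraction_prompt(document: str) -> str:
    schema = json.dumps({field: "string or 'N/A'" for field in FIELDS}, indent=2)
    return (
        "Extract the following fields from the document below.\n"
        f"Return a single JSON object with exactly these keys:\n{schema}\n"
        "If a field is not found, use 'N/A'. Do not add extra keys or commentary.\n\n"
        f"Document:\n{document}"
    )

print(build_extraction_prompt("Invoice #10442 from Acme Corp, dated 2024-03-01, total due $1,250.00."))
```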
Content Generation
Do's
- ✅ Specify tone, style, and audience
- ✅ Set clear length constraints
- ✅ Provide attribute specifications
- ✅ Include examples of desired style
- ✅ Request specific elements (CTA, headers, etc.)
Don'ts
- ❌ Use vague instructions like “make it good”
- ❌ Forget to specify length
- ❌ Assume the model knows your brand voice
- ❌ Skip audience definition
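A minimal sketch of a generation prompt with the attributes pinned down; the tone, audience, length, and required elements are illustrative values:

```python
# Content generation prompt: tone, audience, length, and required
# elements are stated explicitly instead of left to the model.
def build_generation_prompt(topic: str) -> str:
    return (
        f"Write a product announcement about {topic}.\n"
        "Tone: friendly but professional.\n"
        "Audience: existing customers who are not technical.\n"
        "Length: 120-150 words.\n"
        "Must include: a one-line headline and a closing call to action.\n"
        "Avoid: jargon and unverifiable claims."
    )

print(build_generation_prompt("the new offline mode in our mobile app"))
```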
Text Transformation
Do's
- ✅ Preserve core meaning and facts
- ✅ Specify target format/style explicitly
- ✅ Provide before/after examples
- ✅ Set clear transformation goals
Don'ts
- ❌ Sacrifice accuracy for style
- ❌ Use vague transformation requests
- ❌ Forget to verify factual consistency
- ❌ Transform without clear purpose
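A minimal sketch of a transformation prompt with an explicit goal and a before/after example; the example pair and input sentence are illustrative:

```python
# Text transformation prompt: states the target style, shows one
# before/after pair, and reminds the model to preserve the facts.
def build_rewrite_prompt(text: str) -> str:
    return (
        "Rewrite the text in plain language for a general audience.\n"
        "Preserve every fact and number; change only the wording.\n\n"
        "Example before: 'The patient exhibited post-operative pyrexia.'\n"
        "Example after: 'The patient had a fever after surgery.'\n\n"
        f"Text to rewrite: {text}"
    )

print(build_rewrite_prompt("Remittance must be effectuated within 30 days of invoice issuance."))
```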
Question Answering
Do's
- ✅ Provide relevant context when available
- ✅ Request step-by-step reasoning for complex questions
- ✅ Ask for source citations
- ✅ Instruct to admit uncertainty when appropriate
Don'ts
- ❌ Expect answers without sufficient context
- ❌ Allow hallucination of facts
- ❌ Skip verification for critical information
- ❌ Forget to handle “I don’t know” cases
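A minimal sketch of a question-answering prompt that grounds the answer in supplied context and allows uncertainty; the context and question are illustrative:

```python
# QA prompt: answer only from the supplied context, cite the supporting
# sentence, and say "I don't know" rather than guessing.
def build_qa_prompt(context: str, question: str) -> str:
    return (
        "Answer the question using only the context below.\n"
        "Quote the sentence from the context that supports your answer.\n"
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_qa_prompt(
    "The warranty covers manufacturing defects for 24 months from purchase.",
    "Does the warranty cover accidental damage?",
))
```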
Advanced Techniques Best Practices
Chain of Thought (CoT)
When to use:
- Multi-step reasoning required
- Math or logic problems
- Counter-intuitive questions
- Verification needed
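A minimal chain-of-thought sketch; the reasoning instruction follows the common “Let’s think step-by-step” pattern, and the sample problem is illustrative:

```python
# Chain-of-thought prompt: ask for explicit intermediate steps before
# the final answer, then mark where the answer should appear.
def build_cot_prompt(problem: str) -> str:
    return (
        f"{problem}\n\n"
        "Let's think step by step. Show each step of your reasoning, "
        "then give the final answer on a new line starting with 'Answer:'."
    )

print(build_cot_prompt(
    "A train leaves at 14:10 and arrives at 16:45. "
    "How long is the journey in minutes?"
))
```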
Problem Decomposition
When to use:
- Hierarchical problems
- Multi-domain challenges
- Sequential dependencies
- Overwhelming complexity
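A minimal decomposition sketch: the task is split into sub-prompts that run in sequence, each feeding the next. The `call_llm` function is a hypothetical placeholder for whatever model client you use, and the step prompts are illustrative:

```python
# Problem decomposition: run one focused sub-prompt per step and pass
# each result forward instead of asking for everything at once.
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: substitute your own model client here.
    raise NotImplementedError

def review_report(raw_notes: str) -> str:
    findings = call_llm(f"List the main findings in these notes:\n{raw_notes}")
    risks = call_llm(f"Given these findings, list the top 3 risks:\n{findings}")
    summary = call_llm(
        f"Write a one-paragraph executive summary covering:\n{findings}\n{risks}"
    )
    return summary
```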
Self-Refinement
When to use:
- Quality-critical outputs
- Complex creative tasks
- Technical accuracy required
- Ambiguous requirements
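A minimal self-refinement sketch following the generate → critique → refine pattern; `call_llm` is again a hypothetical placeholder, and the prompt wording is illustrative:

```python
# Self-refinement loop: draft, critique the draft against the requirements,
# then revise to address every listed problem.
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: substitute your own model client here.
    raise NotImplementedError

def refine(task: str, requirements: str, rounds: int = 2) -> str:
    draft = call_llm(f"{task}\nRequirements: {requirements}")
    for _ in range(rounds):
        critique = call_llm(
            "Critique this draft against the requirements. List concrete problems only.\n"
            f"Requirements: {requirements}\nDraft:\n{draft}"
        )
        draft = call_llm(
            f"Revise the draft to fix every problem listed.\nProblems:\n{critique}\nDraft:\n{draft}"
        )
    return draft
```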
RAG (Retrieval-Augmented Generation)
When to use:
- Knowledge-intensive tasks
- Dynamic information needed
- Private/proprietary data
- Citation requirements
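A minimal RAG sketch: retrieve relevant passages, insert them into the prompt, and require citations. The keyword-overlap retrieval is purely illustrative (a real system would use a search engine or vector index), and `call_llm` is the same hypothetical placeholder:

```python
# Retrieval-augmented generation: ground the answer in retrieved passages
# and ask the model to cite which passage it used.
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: substitute your own model client here.
    raise NotImplementedError

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Naive keyword-overlap scoring, for illustration only.
    terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def answer_with_sources(query: str, documents: list[str]) -> str:
    passages = retrieve(query, documents)
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the numbered passages below. "
        "Cite passage numbers like [1]. If the passages are insufficient, say so.\n\n"
        f"Passages:\n{numbered}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```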
Common Pitfalls to Avoid
1. Prompt Ambiguity
Problem: “Summarize this”
Issues:
- How long should the summary be?
- What format (paragraph, bullets)?
- What level of detail?
- What’s the purpose?
2. Assuming Context
Problem: Using pronouns or references without context
Bad: “How do I fix it?”
Good: “How do I fix the ‘Connection Timeout’ error in my Python script when connecting to the database?”
3. Overloading the Prompt
Problem: Trying to do too much in one prompt
Bad: “Analyze this data, create visualizations, write a report, and suggest improvements”
Good: Break into steps:
- First, analyze the data
- Then, create visualizations
- Then, write the report
- Finally, suggest improvements
4. Ignoring Output Format
Problem: Not specifying how you want the response
Bad: “Extract the key information”
Good: “Extract the key information in JSON format with fields: name, date, amount, status”
5. Forgetting Edge Cases
Problem: Not handling unusual inputs
Bad: “Classify as positive or negative”
Good: “Classify as positive, negative, or neutral. If the text is unclear or contains mixed sentiment, respond with ‘mixed’.”
Debugging Strategies
When Outputs Are Wrong
1. Check Clarity: Is your prompt clear and unambiguous?
2. Add Examples: Provide 2-3 examples of the desired output.
3. Increase Specificity: Add more constraints and details.
4. Break It Down: Split complex tasks into simpler steps.
5. Use CoT: Request step-by-step reasoning.
When Outputs Are Inconsistent
1. Constrain Format: Specify the exact output format.
2. Add Structure: Use templates or schemas.
3. Provide More Examples: Show consistent patterns.
4. Lower Temperature: Reduce randomness (if you control this parameter).
When Outputs Are Too Generic
1. Add Specificity: Include more details and constraints.
2. Provide Context: Give relevant background information.
3. Show Examples: Demonstrate the level of detail needed.
4. Request Specific Elements: Ask for particular details, data, or examples.
Performance Optimization
Token Efficiency
Strategies:
- Use concise language without sacrificing clarity
- Remove redundant instructions
- Combine related constraints
- Use abbreviations consistently (after defining them)
Prompt Reusability
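Create templates for common tasks so that only the variable parts change between uses. A minimal sketch in Python; the template text and placeholder names are illustrative:

```python
# A reusable prompt template: fixed instructions plus named placeholders
# for the parts that change per request.
SUMMARY_TEMPLATE = (
    "Summarize the following {document_type} for {audience}.\n"
    "Length: {length}.\n"
    "Format: {format}.\n\n"
    "Text:\n{text}"
)

prompt = SUMMARY_TEMPLATE.format(
    document_type="incident report",
    audience="the on-call engineering team",
    length="5 bullet points",
    format="bullet list, most important first",
    text="<paste the report here>",
)
print(prompt)
```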
Quality Checklist
Before finalizing a prompt, verify:
- Clear task definition
- Sufficient context provided
- Output format specified
- Examples included (if needed)
- Edge cases handled
- Constraints clearly stated
- Verification method included (for critical tasks)
- No ambiguous language
Quick Reference Card
| Scenario | Technique | Key Tip |
|---|---|---|
| Simple task | Zero-shot | Be clear and specific |
| Complex task | Few-shot | Provide 3-5 examples |
| Multi-step | Chain of Thought | “Let’s think step-by-step” |
| Hierarchical | Decomposition | Start simple, build up |
| Quality-critical | Self-refinement | Generate → Critique → Refine |
| High-stakes | Ensembling | Multiple approaches + voting |
| Knowledge-intensive | RAG | Ground in sources, cite |
| Action-required | Tool integration | Clear tool selection + error handling |