
Further Reading

A comprehensive collection of research papers, tools, tutorials, and community resources to advance your prompt engineering skills

Foundational Research Papers

Language Models & Transformers

“Attention Is All You Need”
  • Authors: Vaswani et al., Google Brain
  • Key Contribution: Introduced the Transformer architecture that powers modern LLMs
  • Why Read: Understanding transformers is fundamental to understanding how prompts are processed
  • Link: arXiv:1706.03762

“Language Models are Few-Shot Learners” (GPT-3 Paper)
  • Authors: Brown et al., OpenAI
  • Key Contribution: Demonstrated that large language models can perform tasks with just a few examples (few-shot learning)
  • Why Read: Foundational paper on in-context learning and prompt-based task solving (a minimal sketch follows this list)
  • Link: arXiv:2005.14165

“Training Language Models to Follow Instructions with Human Feedback” (InstructGPT Paper)
  • Authors: Ouyang et al., OpenAI
  • Key Contribution: Showed how RLHF (Reinforcement Learning from Human Feedback) improves instruction following
  • Why Read: Explains why modern models are better at following prompts
  • Link: arXiv:2203.02155
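
The few-shot behavior described in the GPT-3 paper needs nothing more than a well-structured prompt. Below is a minimal sketch; `complete` is a hypothetical placeholder for whatever LLM client you use, and the sentiment task and demonstrations are purely illustrative.

```python
# Minimal few-shot (in-context learning) sketch.
# `complete` is a placeholder -- wire it to your LLM provider's API.

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

# A few demonstrations teach the task format in-context, with no
# fine-tuning -- the setting studied in arXiv:2005.14165.
FEW_SHOT_PROMPT = """\
Classify the sentiment of each review as Positive or Negative.

Review: The battery died after two days.
Sentiment: Negative

Review: Crisp screen and great sound.
Sentiment: Positive

Review: {review}
Sentiment:"""

def classify(review: str) -> str:
    return complete(FEW_SHOT_PROMPT.format(review=review)).strip()
```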

Prompting Techniques

Chain of Thought & Reasoning

“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”
  • Authors: Wei et al., Google Research
  • Key Contribution: Introduced Chain of Thought (CoT) prompting, showing 30-50% accuracy improvements on reasoning tasks
  • Why Read: The definitive paper on CoT prompting
  • Link: arXiv:2201.11903

“Large Language Models are Zero-Shot Reasoners”
  • Authors: Kojima et al., University of Tokyo
  • Key Contribution: Showed that simply adding “Let’s think step by step” dramatically improves reasoning
  • Why Read: Demonstrates the power of simple prompting modifications
  • Link: arXiv:2205.11916

“Self-Consistency Improves Chain of Thought Reasoning in Language Models”
  • Authors: Wang et al., Google Research
  • Key Contribution: Introduced self-consistency (ensembling multiple reasoning paths and taking a majority vote)
  • Why Read: Shows how to improve CoT reliability through multiple samples (a sketch combining this with zero-shot CoT follows this list)
  • Link: arXiv:2203.11171

“Tree of Thoughts: Deliberate Problem Solving with Large Language Models”
  • Authors: Yao et al., Princeton University
  • Key Contribution: Extended CoT to explore multiple reasoning branches like a search tree
  • Why Read: Advanced technique for complex problem-solving
  • Link: arXiv:2305.10601
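
Zero-shot CoT (Kojima et al.) and self-consistency (Wang et al.) compose naturally: sample several “Let’s think step by step” completions at non-zero temperature, extract each final answer, and take a majority vote. A minimal sketch, again assuming a placeholder `complete` function and a task where the answer appears on the last line:

```python
from collections import Counter

def complete(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    # Zero-shot CoT trigger from Kojima et al. (arXiv:2205.11916).
    prompt = f"{question}\n\nLet's think step by step."
    answers = []
    for _ in range(n_samples):
        # Temperature > 0 so the sampled reasoning paths actually differ --
        # the key ingredient of self-consistency (arXiv:2203.11171).
        reasoning = complete(prompt, temperature=0.7)
        # Naive answer extraction: assume the final line holds the answer.
        # Real pipelines enforce a stricter output format instead.
        answers.append(reasoning.strip().splitlines()[-1])
    # Majority vote across the sampled reasoning paths.
    return Counter(answers).most_common(1)[0][0]
```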

Retrieval-Augmented Generation

“Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks”
  • Authors: Lewis et al., Facebook AI Research
  • Key Contribution: Introduced RAG, combining retrieval with generation
  • Why Read: Foundational paper on grounding LLM outputs in external knowledge
  • Link: arXiv:2005.11401

“In-Context Retrieval-Augmented Language Models”
  • Authors: Ram et al., AI21 Labs
  • Key Contribution: Showed how to effectively integrate retrieved documents into prompts
  • Why Read: Practical techniques for implementing RAG (a minimal retrieve-then-generate sketch follows this list)
  • Link: arXiv:2302.00083
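
At its simplest, the retrieve-then-generate loop from these papers is: embed the query, fetch the top-k most similar documents, and prepend them to the prompt. A minimal sketch with a toy in-memory index; `embed` and `complete` are hypothetical placeholders for your embedding model and LLM, and NumPy is assumed to be available.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("wire this to your embedding model")

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Cosine similarity against a tiny in-memory index; a real system
    # would use a vector database (see the section below).
    q = embed(query)
    doc_vecs = [embed(d) for d in docs]  # in practice, precomputed and stored
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in doc_vecs]
    ranked = sorted(zip(sims, docs), reverse=True)
    return [d for _, d in ranked[:k]]

def rag_answer(query: str, docs: list[str]) -> str:
    context = "\n\n".join(retrieve(query, docs))
    # Ground the answer in retrieved context, as in arXiv:2005.11401.
    prompt = (f"Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
    return complete(prompt)
```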

Prompt Engineering Surveys

“Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing”
  • Authors: Liu et al., Carnegie Mellon University
  • Key Contribution: Comprehensive survey of prompting methods
  • Why Read: Excellent overview of the prompting landscape
  • Link: arXiv:2107.13586

“A Survey of Large Language Models”
  • Authors: Zhao et al., Renmin University of China
  • Key Contribution: Comprehensive survey covering LLM architectures, training, and prompting
  • Why Read: Up-to-date overview of the entire LLM field
  • Link: arXiv:2303.18223

Tools & Frameworks

Prompt Development


Vector Databases (for RAG)


Evaluation & Testing


Community Resources

Learning Platforms


Blogs & Newsletters

Lil’Log (Lilian Weng)
  • Focus: Deep dives into LLM research and techniques
  • Notable Posts: “Prompt Engineering”, “Large Language Models”, “Controllable Text Generation”
  • Link: lilianweng.github.io

The Batch (DeepLearning.AI)
  • Focus: Weekly AI news and insights
  • Why Subscribe: Stay current with AI developments
  • Link: deeplearning.ai/the-batch

Import AI (Jack Clark)
  • Focus: Weekly newsletter on AI research
  • Why Subscribe: Curated research paper summaries
  • Link: jack-clark.net

Ahead of AI (Sebastian Raschka)
  • Focus: Practical AI and prompt engineering
  • Why Subscribe: Actionable tips and techniques
  • Link: magazine.sebastianraschka.com

Communities


Books

  • Authors: Various contributors
  • Focus: Comprehensive guide to prompt engineering
  • Best For: Structured learning path
  • Availability: Free online

  • Author: Chip Huyen
  • Focus: Production ML systems (includes prompting)
  • Best For: Building real-world applications
  • Publisher: O’Reilly Media

Natural Language Processing with Transformers
  • Authors: Tunstall, von Werra, Wolf
  • Focus: Deep dive into transformer models
  • Best For: Understanding the underlying technology
  • Publisher: O’Reilly Media

Online Courses

Structured Learning


Advanced Topics

Cutting-Edge Research Areas

  • Constitutional AI (detailed below)
  • Multimodal Prompting
  • Prompt Optimization
  • Adversarial Prompting

Constitutional AI
  • Focus: Training AI systems to be helpful, harmless, and honest
  • Key Paper: “Constitutional AI: Harmlessness from AI Feedback” (Anthropic, 2022)
  • Why Important: Addresses AI safety and alignment
  • Link: arXiv:2212.08073
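
The self-critique loop at the heart of Constitutional AI can be loosely mimicked at inference time with plain prompting. The sketch below is an illustration of the critique-and-revise step, not Anthropic’s actual training pipeline; `complete` is a hypothetical placeholder and the principle text is made up for the example.

```python
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

# An illustrative principle, in the spirit of the paper's "constitution".
PRINCIPLE = "The response must be helpful and must not give dangerous advice."

def constitutional_revise(user_request: str) -> str:
    draft = complete(user_request)
    critique = complete(
        f"Principle: {PRINCIPLE}\n\nResponse:\n{draft}\n\n"
        "Critique the response against the principle."
    )
    # Revise the draft in light of the model's own critique
    # (an inference-time analogue of the loop in arXiv:2212.08073).
    return complete(
        f"Principle: {PRINCIPLE}\n\nOriginal response:\n{draft}\n\n"
        f"Critique:\n{critique}\n\n"
        "Rewrite the response so it satisfies the principle."
    )
```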

Research Groups & Labs

Leading Organizations


Staying Current

How to Keep Up

1. Follow Key Researchers: Twitter/X accounts such as @AndrewYNg, @karpathy, @ylecun, @goodfellow_ian, @sama
2. Monitor arXiv: Subscribe to the cs.CL (Computation and Language) and cs.AI categories (a small script for this follows the list)
3. Join Communities: Participate in Discord servers, Reddit, and forums
4. Experiment Regularly: Try new techniques as they’re published
5. Read Release Notes: Follow model updates from OpenAI, Anthropic, Google, etc.
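
For step 2, arXiv exposes a public Atom API (documented at arxiv.org/help/api) that makes monitoring a category scriptable. A small stdlib-only sketch that prints the titles of the newest cs.CL submissions:

```python
# Pull the newest cs.CL submissions from arXiv's public Atom API.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
URL = ("http://export.arxiv.org/api/query"
       "?search_query=cat:cs.CL&sortBy=submittedDate"
       "&sortOrder=descending&max_results=10")

def latest_cs_cl_titles() -> list[str]:
    with urllib.request.urlopen(URL) as resp:
        feed = ET.fromstring(resp.read())
    # Each <entry> in the Atom feed is one paper; grab its <title>.
    return [entry.findtext(f"{ATOM}title", "").strip()
            for entry in feed.iter(f"{ATOM}entry")]

if __name__ == "__main__":
    for title in latest_cs_cl_titles():
        print("-", title)
```

Swap `cat:cs.CL` for `cat:cs.AI` (or combine with `OR`) to cover both categories mentioned above.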

Practice Resources

Datasets for Practice



Note: This field evolves rapidly. Links and resources are current as of 2024, but new papers and tools emerge frequently. Check the communities and newsletters above to stay updated.