Prompting Techniques
- Zero-shot: ask the model directly without examples.
- One-shot: provide one example before the actual query.
- Few-shot: provide multiple examples to guide the model.
- Chain-of-thought: ask the model to reason step by step.
- System prompts: set the model's persona, constraints, and behavior.
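The techniques above can be sketched as plain prompt construction. This is a minimal sketch, assuming a chat-style message format (role/content dictionaries) like that used by common chat APIs; no model is actually called, and the example strings are illustrative.

```python
# Sketch: each prompting technique expressed as a message-list builder.
# The role/content dict shape mirrors common chat APIs (an assumption);
# nothing here sends a request.

def zero_shot(task: str) -> list[dict]:
    """Zero-shot: ask directly, no examples."""
    return [{"role": "user", "content": task}]

def few_shot(task: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Few-shot (or one-shot, with a single pair): prefix the query
    with (input, output) example pairs."""
    messages = []
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task})
    return messages

def chain_of_thought(task: str) -> list[dict]:
    """Chain-of-thought: append an instruction to reason step by step."""
    return [{"role": "user",
             "content": f"{task}\n\nThink step by step before answering."}]

def with_system_prompt(persona: str, messages: list[dict]) -> list[dict]:
    """System prompt: prepend a message that sets persona and constraints."""
    return [{"role": "system", "content": persona}] + messages

# One-shot translation wrapped in a system prompt:
msgs = with_system_prompt(
    "You are a concise French translator.",
    few_shot("Translate: good night", [("Translate: hello", "bonjour")]),
)
```

Composing builders this way keeps each technique independent, so a system prompt can wrap any of the others.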
Best Practices
- Be specific and clear — vague prompts produce vague outputs.
- Provide context and constraints to narrow the response.
- Use delimiters (###, ```) to separate instructions from content.
- Specify the output format (JSON, bullet points, table).
- Iterate and refine — prompt engineering is an iterative process.
- Test with diverse inputs to ensure robustness.
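The delimiter and output-format practices combine naturally in a single template. A minimal sketch, assuming a hypothetical summarization task and a made-up JSON schema:

```python
# Sketch: a prompt template that uses ### delimiters to fence off
# untrusted document text from the instructions, and pins down the
# output format explicitly. The schema is illustrative.

def build_prompt(instruction: str, document: str) -> str:
    return (
        f"{instruction}\n"
        'Respond ONLY with JSON of the form {"summary": str, "keywords": [str]}.\n'
        "### DOCUMENT\n"
        f"{document}\n"
        "### END DOCUMENT"
    )

prompt = build_prompt(
    "Summarize the document below.",
    "LLMs predict the next token given a context window of prior tokens.",
)
```

Fencing the document also makes later validation easier: anything the model should treat as data sits between unambiguous markers.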
Common Pitfalls
- Hallucinations: model generates plausible but incorrect information.
- Prompt injection: malicious inputs that override system instructions.
- Context window limits: too much input causes truncation.
- Over-reliance on temperature: very high values produce incoherent output, very low values produce repetitive output.
- Not validating outputs: always verify critical AI-generated content.
When to Use What
- Zero-shot for simple, well-defined tasks.
- Few-shot when the model needs examples of the desired format.
- Chain-of-thought for complex reasoning or math problems.
- System prompts to enforce consistent behavior across conversations.
- RAG when the model needs access to current or proprietary data.
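The RAG case can be illustrated end to end with a toy retriever. This is a deliberately naive sketch: real systems use embeddings and a vector store, while here relevance is just word overlap, and the two-document corpus is made up.

```python
# Minimal RAG sketch: pick the most relevant snippet by naive word
# overlap (whitespace tokens, so punctuation blocks some matches),
# then inject it into the prompt as fenced context.

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the snippet sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def rag_prompt(query: str, corpus: list[str]) -> str:
    context = retrieve(query, corpus)
    return (
        "Answer using ONLY the context below.\n"
        f"### CONTEXT\n{context}\n### END CONTEXT\n"
        f"Question: {query}"
    )

corpus = [
    "The refund policy allows returns within 30 days.",
    "Shipping takes 3-5 business days.",
]
# Picks the refund snippet (overlap 2 words vs 0).
print(rag_prompt("What is the refund policy?", corpus))
```

The "ONLY the context below" instruction is what ties RAG back to the hallucination pitfall: grounding answers in retrieved text is the usual mitigation when the model needs current or proprietary data.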