Advanced Prompting Techniques
Tree of thought, self-consistency, meta-prompting, prompt chaining, and structured outputs.
Beyond the Basics
Once you've mastered foundational prompting techniques like the CRAFT framework and chain-of-thought, it's time to explore advanced methods that deliver significantly better results. These techniques are how professionals build reliable, repeatable AI workflows that produce consistent quality at scale.
This module covers four advanced prompting strategies — tree of thought, self-consistency, meta-prompting, and prompt chaining — plus practical techniques for structured inputs and outputs that make your AI workflows production-ready.
Tree of Thought Prompting
Tree of thought (ToT) prompting asks the AI to explore multiple reasoning paths simultaneously, evaluate each one, and then select the strongest. Unlike chain-of-thought, which follows a single linear reasoning path, tree of thought branches out like a decision tree.
This technique was formalized in a 2023 research paper by Yao et al. at Princeton and Google DeepMind. It's especially powerful for problems where the first approach might not be the best one — creative tasks, strategic planning, and complex problem-solving.
How It Works
Tree of thought prompt template:
I need to solve the following problem: [DESCRIBE PROBLEM]

Please approach this using tree-of-thought reasoning:

1. Generate 3 different possible approaches to solving this problem.
2. For each approach, think through 2–3 steps of reasoning.
3. Evaluate the strengths and weaknesses of each approach.
4. Identify which approach (or combination) is most promising.
5. Develop the best approach into a complete solution.

Show your reasoning for each branch before selecting the best path.
When to Use Tree of Thought
- Strategic decisions where multiple viable paths exist
- Creative tasks like brainstorming or writing where you want to explore different angles
- Complex analysis where the first intuition might be wrong
- Problem-solving tasks that benefit from considering trade-offs
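The template above can also be generated programmatically when you use it repeatedly. Here is a minimal sketch; the function name and the configurable branch count are illustrative, not part of any library:

```python
def tree_of_thought_prompt(problem: str, n_branches: int = 3) -> str:
    """Build a tree-of-thought prompt for a given problem statement."""
    return (
        f"I need to solve the following problem: {problem}\n\n"
        "Please approach this using tree-of-thought reasoning:\n"
        f"1. Generate {n_branches} different possible approaches to solving this problem.\n"
        "2. For each approach, think through 2-3 steps of reasoning.\n"
        "3. Evaluate the strengths and weaknesses of each approach.\n"
        "4. Identify which approach (or combination) is most promising.\n"
        "5. Develop the best approach into a complete solution.\n\n"
        "Show your reasoning for each branch before selecting the best path."
    )

print(tree_of_thought_prompt("Reduce churn for a subscription app"))
```

Parameterizing the branch count lets you widen the tree for harder problems without rewriting the template.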
Self-Consistency Prompting
Self-consistency is a technique where you ask the AI to generate multiple independent responses to the same question, then compare them to find the most reliable answer. The idea is simple: if the AI arrives at the same conclusion through different reasoning paths, that answer is more likely to be correct.
How to Apply Self-Consistency
There are two ways to use self-consistency:
Method 1: Single-Prompt (ask the AI to self-check)
"Answer this question 3 times using different reasoning approaches. Then compare your answers and give me the one you're most confident in, explaining why. Question: [YOUR QUESTION]"
Method 2: Multi-Run (submit the same prompt multiple times)
Send the same prompt 3–5 times in separate conversations (or use the API with a higher temperature setting). Compare the responses yourself and look for consistent elements across all outputs. The points that appear in most or all responses are the most reliable.
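The multi-run method can be automated: collect several completions of the same prompt and keep the answer that appears most often. A minimal sketch; the responses would come from repeated API calls (not shown here), and the lowercase/strip normalization is a simplifying assumption that works for short factual answers:

```python
from collections import Counter

def majority_answer(responses: list[str]) -> tuple[str, float]:
    """Return the most common response and its share of the votes."""
    normalized = [r.strip().lower() for r in responses]  # crude normalization
    answer, count = Counter(normalized).most_common(1)[0]
    return answer, count / len(normalized)

# Three runs of the same prompt, e.g. at temperature 0.8:
answer, agreement = majority_answer(["42", "42 ", "41"])
print(answer, agreement)  # the majority answer with 2/3 agreement
```

The agreement ratio doubles as a rough confidence signal: low agreement across runs is a hint that the question needs a better prompt or human review.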
Meta-Prompting
Meta-prompting is the practice of asking the AI to help you write better prompts. It's one of the most underused techniques, and it's remarkably effective. Instead of struggling to craft the perfect prompt yourself, you leverage the AI's understanding of what makes a good prompt.
Meta-Prompting Techniques
1. Prompt Generation
I want to use AI to help me [DESCRIBE YOUR GOAL]. Please write an optimized prompt that I can use to get the best results.

The prompt should:
- Include all necessary context
- Specify the format I should expect
- Set the right tone and constraints
- Anticipate edge cases

After writing the prompt, explain why you made the choices you did.
2. Prompt Improvement
Here is a prompt I've been using: "[YOUR CURRENT PROMPT]"

The results I'm getting are [DESCRIBE WHAT'S WRONG OR MISSING]. Please:

1. Identify what's weak about this prompt
2. Suggest 3 specific improvements
3. Rewrite the prompt incorporating those improvements
3. Prompt Stress Testing
I've written this prompt for [USE CASE]: "[YOUR PROMPT]"

Please evaluate this prompt by:

1. Identifying any ambiguities that could lead to unexpected results
2. Suggesting edge cases that might break it
3. Rating it on clarity, specificity, and completeness (1–10 each)
4. Providing an improved version
Prompt Chaining
Prompt chaining breaks a complex task into a sequence of smaller, focused prompts where each step builds on the output of the previous one. This is the single most important technique for building reliable AI workflows.
Why Chaining Beats Single Prompts
- Better quality: Each step gets the AI's full attention on a focused task
- Easier debugging: When something goes wrong, you can identify exactly which step failed
- More control: You can review and edit intermediate outputs before proceeding
- Consistent results: Smaller, focused prompts are more predictable than complex monolithic ones
Chaining Example: Research Report
Step 1 — Research and outline:
"I'm writing a report on [TOPIC]. Identify the 5 most important subtopics to cover and create a detailed outline with key points for each section."
Step 2 — Draft each section:
"Using this outline, write section 1: [SECTION TITLE]. Include specific examples and data points. Target 300 words."
Step 3 — Review and refine:
"Review this draft for clarity, accuracy, and flow. Flag any claims that need citations. Suggest improvements."
Step 4 — Executive summary:
"Based on the complete report, write a 150-word executive summary highlighting the 3 most critical findings and their implications."
Using XML Tags for Structured Inputs
XML tags are a powerful way to organize complex prompts, especially when working with Claude. By wrapping different parts of your prompt in descriptive tags, you make it unambiguous where one piece of context ends and another begins. Anthropic specifically recommends this technique in their documentation.
XML Tag Patterns
Separating context from instructions:
Here is the document I need you to analyze:

<document>
[PASTE FULL DOCUMENT TEXT HERE]
</document>

<instructions>
1. Summarize the main argument in 2–3 sentences
2. List the 3 strongest supporting points
3. Identify any logical weaknesses or gaps
4. Rate the overall persuasiveness on a scale of 1–10
</instructions>
Multiple inputs with XML tags:
Compare these two proposals and recommend which one to pursue:

<proposal_a>
[PROPOSAL A CONTENT]
</proposal_a>

<proposal_b>
[PROPOSAL B CONTENT]
</proposal_b>

<evaluation_criteria>
- Cost-effectiveness
- Implementation timeline
- Risk level
- Alignment with company goals
</evaluation_criteria>

Evaluate each proposal against the criteria and provide a recommendation with reasoning.
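When you assemble prompts like this in code, a small helper keeps the tag boundaries consistent. A minimal sketch; the helper name and placeholder content are illustrative:

```python
def xml_block(tag: str, content: str) -> str:
    """Wrap content in a named XML tag to mark clear boundaries in a prompt."""
    return f"<{tag}>\n{content}\n</{tag}>"

prompt = "\n\n".join([
    "Compare these two proposals and recommend which one to pursue:",
    xml_block("proposal_a", "...proposal A text..."),
    xml_block("proposal_b", "...proposal B text..."),
    xml_block("evaluation_criteria", "- Cost-effectiveness\n- Risk level"),
])
print(prompt)
```

Because the same function produces every section, you never end up with a mismatched or misspelled closing tag, which is the usual failure mode when tags are typed by hand.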
Requesting Structured Outputs
Getting AI to produce structured, predictable outputs is essential for building workflows where the output of one step feeds into another — or where you need to parse the response programmatically.
Output Format Strategies
| Format | Best For | Prompt Instruction |
|---|---|---|
| JSON | Data that needs to be parsed by other tools or code | "Return the result as a JSON object with keys: name, category, priority, description" |
| Markdown table | Comparisons, feature matrices, data summaries | "Present this as a markdown table with columns for X, Y, and Z" |
| Numbered list | Step-by-step instructions, ranked recommendations | "Provide exactly 5 recommendations as a numbered list, most important first" |
| Labeled sections | Reports, analyses, multi-part responses | "Structure your response with these exact headings: Summary, Analysis, Risks, Recommendation" |
| XML-tagged output | When you need to extract specific parts of the response | "Wrap each section in XML tags: <summary>, <details>, <action_items>" |
JSON output example:
Analyze this customer feedback and return the result as a JSON object:

"The app is great overall but crashes every time I try to upload photos. Customer service was helpful though."

Return JSON with these exact keys:

{
  "sentiment": "positive" | "negative" | "mixed",
  "topics": ["array of topics mentioned"],
  "issues": ["array of specific problems"],
  "positives": ["array of positive aspects"],
  "urgency": "low" | "medium" | "high",
  "suggested_action": "one-sentence recommendation"
}
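On the receiving end, the reply still needs to be parsed and checked, since models sometimes wrap JSON in a markdown code fence or drop a key. A minimal sketch of a defensive parser, assuming the key set from the prompt above:

```python
import json

REQUIRED_KEYS = {"sentiment", "topics", "issues",
                 "positives", "urgency", "suggested_action"}

def parse_feedback(raw: str) -> dict:
    """Parse a model's JSON reply, tolerating a surrounding markdown code fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with optional language tag) and closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    return data
```

Failing loudly on missing keys is deliberate: in a multi-step workflow it is far cheaper to retry one step than to let malformed data flow downstream.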
Building Reliable AI Workflows
Moving from ad-hoc prompting to reliable, repeatable workflows requires thinking about consistency, error handling, and quality control.
Temperature Settings
Temperature controls randomness in AI outputs. Most AI tools expose this setting in their API, and some consumer products let you adjust it indirectly.
| Temperature | Behavior | Best For |
|---|---|---|
| 0.0 – 0.3 | Highly deterministic, consistent outputs | Data extraction, classification, factual Q&A, code generation |
| 0.4 – 0.7 | Balanced creativity and consistency | General writing, analysis, summarization |
| 0.8 – 1.0 | More creative, varied outputs | Brainstorming, creative writing, generating diverse options |
Designing for Consistency
- Use system prompts to establish persistent behavior rules that apply to every interaction
- Include output format specifications in every prompt to prevent format drift
- Add constraints like "respond in exactly 3 bullet points" or "keep your response under 100 words"
- Use few-shot examples in your system prompt for tasks that require a specific style or format
- Validate outputs by adding a final step that checks the result against your requirements
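The final "validate outputs" point can itself be a simple programmatic check rather than another AI call. A minimal sketch with illustrative rules (word limit and exact bullet count, matching the constraint examples above):

```python
def validate_output(text: str, max_words: int = 100, exact_bullets: int = 3) -> list[str]:
    """Check a response against simple format constraints; return any violations."""
    problems = []
    if len(text.split()) > max_words:
        problems.append(f"over {max_words} words")
    bullets = [line for line in text.splitlines()
               if line.lstrip().startswith("- ")]
    if len(bullets) != exact_bullets:
        problems.append(f"expected {exact_bullets} bullets, found {len(bullets)}")
    return problems
```

An empty list means the output passed; otherwise the violations tell you exactly what to fix, which makes automated retries straightforward.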
When to Use Which Technique
Choosing the right advanced technique depends on your task. Here's a decision guide:
| Situation | Recommended Technique | Why |
|---|---|---|
| Need to explore multiple strategies before committing | Tree of Thought | Branches into multiple paths and evaluates each one |
| Accuracy is critical and errors are costly | Self-Consistency | Cross-checking multiple attempts reduces errors |
| You're struggling to write an effective prompt | Meta-Prompting | Leverages the AI's knowledge of what makes good prompts |
| The task has multiple distinct stages | Prompt Chaining | Each step gets focused attention and can be validated |
| The prompt includes long documents or multiple inputs | XML Tags | Creates clear boundaries between content and instructions |
| Output needs to feed into another tool or system | Structured Outputs | JSON, tables, and tagged outputs are machine-parseable |
| Building a repeatable workflow for a recurring task | Chaining + XML Tags + Structured Outputs | The combination creates reliable, production-grade workflows |
Putting It All Together: A Complete Workflow Example
Here's a real-world example that combines multiple advanced techniques to create a competitive analysis workflow:
Step 1 — Meta-prompt to design the workflow:
"I need to create a competitive analysis for my SaaS product. Design a 4-step prompt chain that would produce a thorough analysis. For each step, write the specific prompt I should use."
Step 2 — Use XML tags to provide context:
"Here is information about our product and competitors:

<our_product>[details]</our_product>
<competitor_1>[details]</competitor_1>
<competitor_2>[details]</competitor_2>

Analyze each competitor's strengths and weaknesses relative to our product."
Step 3 — Tree of thought for strategy:
"Based on this competitive analysis, generate 3 different strategic approaches we could take. Evaluate each one for feasibility, impact, and resource requirements. Then recommend the strongest approach."
Step 4 — Structured output for the final deliverable:
"Create the final competitive analysis report with these sections: Executive Summary, Market Overview, Competitor Profiles (as a comparison table), Strategic Recommendation, and 90-Day Action Plan (as a numbered list)."
Resources
Anthropic Prompt Engineering Documentation
Anthropic
Anthropic's comprehensive guide to prompting Claude, including XML tag usage, chain-of-thought, and structured output techniques.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Yao et al. (Princeton / Google DeepMind)
The original 2023 research paper introducing the tree of thought prompting framework for complex reasoning tasks.
Prompt Engineering Guide
DAIR.AI
A comprehensive open-source guide covering advanced prompting techniques including chain-of-thought, self-consistency, tree of thought, and more.
Prompt Engineering for ChatGPT
Vanderbilt University (Coursera)
A university-level course covering fundamental and advanced prompt engineering techniques with hands-on exercises.
Key Takeaways
1. Tree of thought prompting explores multiple reasoning paths before selecting the best one — ideal for strategic and creative tasks.
2. Self-consistency generates multiple responses and finds consensus, improving accuracy for high-stakes decisions.
3. Meta-prompting — asking the AI to help you write better prompts — is an underused technique that quickly improves prompt quality.
4. Prompt chaining breaks complex tasks into focused steps, making workflows more reliable and easier to debug.
5. XML tags create clear boundaries in your prompts, preventing the AI from confusing instructions with content to analyze.
6. Structured outputs (JSON, tables, labeled sections) make AI responses predictable and usable in downstream workflows.
7. The most powerful workflows combine multiple techniques: chaining for structure, XML for clarity, and structured outputs for consistency.