Summary
Part 2 delves into specific methods used to craft prompts that guide Large Language Models (LLMs) to produce more accurate, relevant, and desired outputs. The key takeaways from this part include:
Foundational Techniques:
General Prompting / Zero-Shot: The simplest form, providing a task description without examples.
One-shot & Few-shot Prompting: Improves performance by providing one (one-shot) or multiple (few-shot) examples, helping the LLM understand desired patterns and output structures.
System, Contextual, and Role Prompting: These techniques set the stage for the LLM's response:
System Prompting: Defines the LLM's overall context, purpose, or output format.
Role Prompting: Assigns a specific character or identity to the LLM (e.g., a travel guide) to influence its tone and knowledge.
Contextual Prompting: Provides specific background information relevant to the current task to ensure accurate and nuanced responses.
Advanced Reasoning Techniques:
Step-Back Prompting: Encourages the LLM to first answer a general, related question to activate relevant background knowledge before tackling the specific problem.
Chain of Thought (CoT) Prompting: Improves reasoning by prompting the LLM to show its intermediate thinking steps, leading to more accurate answers for complex tasks.
Self-Consistency Prompting: Enhances reliability by generating multiple CoT reasonings for the same question and selecting the most consistent answer, reducing errors from single flawed paths.
Tree of Thoughts (ToT) Prompting: Generalizes CoT by allowing the LLM to explore multiple distinct reasoning paths concurrently, leading to more robust problem-solving.
ReAct (Reason & Act) Prompting: Interleaves the LLM's internal thoughts (reasoning) with external actions (like using tools or searching) and observations, enabling dynamic and robust problem-solving in interactive environments.
Automated and Specialized Prompting:
Automatic Prompt Engineering (APE): Automates the process of generating and optimizing prompts by having an LLM create prompt variants and then evaluating them to find the best performing one, reducing manual effort.
Code Prompting: Leverages LLMs for various coding tasks, including writing, explaining, translating, debugging, and reviewing code, by providing appropriate code-centric prompts.
Prompting Techniques
LLMs are designed to follow instructions, and specific prompting techniques can significantly improve the relevance and quality of results.
General Prompting / Zero-Shot
A zero-shot prompt is the simplest form, providing only a task description and some initial text (e.g., a question, story start, or instructions). The name "zero-shot" signifies that no examples are provided.
Example (Zero-Shot Classification)
Goal: Automatically classify a movie into its most fitting genre.
Prompt:
Classify the following movie into one of the provided genres.
**Genres:**
- Romance
- Disaster
- Historical Drama
- Action
- Comedy
**Movie:**
Titanic by James Cameron
**Classification:**
Output:
Romance, Disaster, Historical Drama
James Cameron’s Titanic is a sweeping blend of emotional storytelling and historical context. At its heart is a love story between Jack and Rose (Romance), set against the backdrop of one of the most infamous maritime tragedies in history (Disaster), with rich details that bring the early 20th century to life (Historical Drama).
Note: For such tasks, a low temperature (e.g., 0.1) is recommended as creativity is not needed.
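If you are calling a model programmatically, the same zero-shot prompt can be sent with a low temperature. The sketch below uses a hypothetical `generate(prompt, temperature)` helper as a stand-in for whatever LLM client you use; it is not tied to any particular API.
```python
# Hypothetical helper standing in for your LLM provider's API call.
def generate(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError("Wire this up to your LLM client.")

GENRES = ["Romance", "Disaster", "Historical Drama", "Action", "Comedy"]

def classify_movie(movie: str) -> str:
    prompt = (
        "Classify the following movie into one of the provided genres.\n"
        f"Genres: {', '.join(GENRES)}\n"
        f"Movie: {movie}\n"
        "Classification:"
    )
    # Low temperature (e.g., 0.1): classification needs consistency, not creativity.
    return generate(prompt, temperature=0.1).strip()
```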
One-Shot & Few-Shot Prompting
When zero-shot prompting is insufficient, providing examples can greatly help the model understand the desired output structure or pattern.
One-shot prompting provides a single example for the model to imitate.
Few-shot prompting provides multiple examples, showing the model a pattern to follow, which increases the chance of the model adhering to it.
Generally, 3-5 examples are a good starting point, but more may be needed for complex tasks, or fewer due to input length limitations.
Examples should be relevant, diverse, high-quality, well-written, and include edge cases to ensure robustness.
Example (Few-shot prompting)
Prompt:
Summarize the following customer product reviews into a short, positive, marketing-friendly highlight, focusing on a key benefit or feature. Follow the examples provided.
---
Original Review: "I bought these noise-cancelling headphones and they are amazing! I can finally work in peace even with my noisy neighbors. The sound quality is also top-notch."
Marketing Summary: "Experience pure tranquility and crystal-clear audio, perfect for focused work or immersive listening."
---
Original Review: "This smart water bottle is a game-changer. It reminds me to drink water throughout the day and syncs with my fitness app. I've never been so hydrated!"
Marketing Summary: "Stay perfectly hydrated effortlessly with smart reminders and seamless fitness tracking."
---
Original Review: "The ergonomic office chair is comfortable for long hours. My back pain has significantly reduced since I started using it. Assembly was a bit tricky though."
Marketing Summary: "Achieve superior comfort and alleviate back pain, designed for extended productivity."
---
Original Review: "I love this new portable projector! The image is so clear even on a large screen, and setting it up was super easy. Great for movie nights in the backyard."
Marketing Summary:
Output:
"Bring big-screen magic to any space with crystal-clear visuals and effortless setup—perfect for unforgettable movie nights indoors or out."
System, Contextual, and Role Prompting
These three techniques set the stage for LLM text generation, each focusing on a different aspect: the model's overall purpose (system), an assigned identity (role), and task-specific background information (context).
System Prompting
Sets the overall context and purpose, defining the model's fundamental capabilities and overarching goal (e.g., translating, classifying). It can be used to specify output format (e.g., JSON) or control safety.
Example (The "Rules" for the AI)
Prompt:
[SYSTEM INSTRUCTION]
Your goal is to extract the most important information, action items, and decisions from an email. Always present the summary as a concise bulleted list, focusing on "who needs to do what by when" or key takeaways. Do not include greetings or sign-offs.
[USER PROMPT]
Summarize the following email:
Subject: Project Alpha Update - Week 3 Progress
Hi Team,
This week, we made significant progress on the UI/UX design, completing 80% of the wireframes. Sarah from marketing provided feedback on the color palette, which we'll incorporate by end of day tomorrow. John from engineering identified a critical bug in the login module, which is now prioritized for the next sprint, starting Monday. We also need to schedule a meeting with stakeholders to discuss the new feature roadmap by Friday. Please send your availability to me by Wednesday.
Best,
Alex
Output:
UI/UX wireframes are 80% complete
Sarah (Marketing) provided feedback on color palette → to be incorporated by end of day tomorrow
John (Engineering) found a critical bug in login module → prioritized for next sprint starting Monday
Team members to send availability to Alex by Wednesday to schedule stakeholder meeting for feature roadmap discussion by Friday
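Many chat-style APIs keep the system instruction separate from the user message. The sketch below shows one common way to structure this as role-tagged messages; the `chat()` helper is hypothetical and stands in for your provider's chat endpoint.
```python
# Hypothetical chat helper: replace with your LLM provider's chat endpoint.
def chat(messages: list[dict], temperature: float = 0.0) -> str:
    raise NotImplementedError

SYSTEM_INSTRUCTION = (
    "Your goal is to extract the most important information, action items, and "
    "decisions from an email. Always present the summary as a concise bulleted "
    "list, focusing on 'who needs to do what by when' or key takeaways. "
    "Do not include greetings or sign-offs."
)

def summarize_email(email_text: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_INSTRUCTION},  # the standing "rules"
        {"role": "user", "content": f"Summarize the following email:\n{email_text}"},
    ]
    return chat(messages)
```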
Role Prompting
Assigns a specific character or identity to the LLM, helping it generate responses consistent with that role's knowledge, behavior, tone, and style (e.g., book editor, travel guide).
Example (Role Prompting - Experienced Chef)
Goal: Get cooking advice from an AI acting specifically as an experienced chef, complete with their typical tone and knowledge.
Role: An experienced, no-nonsense chef who's seen it all in the kitchen. They're direct, knowledgeable, and might even throw in a chef-y tip or two.
Prompt:
You are an experienced, no-nonsense chef with decades of kitchen experience. You've seen every cooking mistake imaginable and you're here to give direct, practical advice, maybe with a touch of professional gruffness.
What's the secret to perfectly fluffy scrambled eggs, and what's one common mistake home cooks make?
Output:
Fluffy scrambled eggs? Simple. Low and slow. No shortcuts.
Here’s how it’s done:
Crack and whisk: Beat those eggs until the yolks and whites are fully combined and a little frothy—none of that lazy half-whisked stuff.
Season later: Salt can break down the eggs too soon. Add it just before or during cooking.
Butter—real butter: Nonstick pan, medium-low heat, and a good pat of butter. Don’t even think about margarine.
Stir constantly: Use a silicone spatula. Gentle folds, not wild stirring. This builds those tender curds.
Pull off early: Eggs keep cooking from residual heat. If they look done in the pan, they’ll be dry on the plate. Pull them when they’re still just a bit glossy.
The biggest rookie mistake? High heat. People think faster is better—they blast the stove and end up with rubbery egg-bits. Scrambled eggs are about patience, not power.
Now, want to turn it gourmet? Toss in a splash of cream before cooking or finish with some crème fraîche. But get the basics right first.
Contextual Prompting
Provides specific details or background information relevant to the current conversation or task. This helps the model understand nuances and generate more accurate, relevant, and efficient responses.
Example (Contextual Prompting - Blog Article Ideas)
Prompt:
Context: You are writing for a blog about retro 80's arcade video games. Suggest 3 topics to write an article about with a few lines of description of what this article should contain.
Output:
1. The Evolution of Arcade Cabinet Design: This article would explore the evolution of arcade cabinet designs, from the early wood and metal cabinets of the 1970s to the sleek, neon-lit designs of the 1980s and beyond.
2. Blast From The Past: Iconic Arcade Games of The 80’s: This article would feature a glimpse into the history, impact, and legacy of some of the most iconic and influential arcade games of the 1980s, highlighting their innovations, popular mechanics, and enduring charm.
3. The Rise and Retro Revival of Pixel Art: This article would delve into the evolution of pixel art as a defining visual style of early arcade games, tracing its roots, exploring its techniques, and discussing the resurgence of pixel art in modern games and digital art.
Step-Back Prompting
Step-back prompting improves LLM performance by first prompting the LLM to consider a general question related to the specific task. The answer to this general question is then fed into a subsequent prompt for the specific task. This allows the LLM to activate relevant background knowledge and reasoning before solving the problem, leading to more accurate and insightful responses. It can also help mitigate biases by focusing on general principles.
Example (Step-Back for Storyline Generation)
Step 1: General Question:
Prompt:
Based on popular first-person shooter action games, what are 5 fictional key settings that contribute to a challenging and engaging level storyline in a first-person shooter video game?
Output (Themes):
1. Abandoned Military Base: A sprawling, post-apocalyptic military complex crawling with mutated soldiers and rogue robots, ideal for challenging firearm combat.
2. Cyberpunk City: A neon-lit, futuristic urban environment with towering skyscrapers and dense alleyways, featuring cybernetically enhanced enemies and hacking mechanics.
3. Alien Spaceship: A vast alien vessel stranded on Earth, with eerie corridors, zero-gravity sections, and extraterrestrial creatures to encounter.
4. Zombie-Infested Town: A desolate town overrun by hordes of aggressive zombies, featuring intense close-quarters combat and puzzle-solving to find safe passage.
5. Underwater Research Facility: A deep-sea laboratory flooded with water, filled with mutated aquatic creatures, and requiring stealth and underwater exploration skills to survive.
Step 2: Specific Task with Context:
Prompt:
Context: 5 engaging themes for a first person shooter video game: [list of themes from above]. Take one of the themes and write a one paragraph storyline for a new level of a first-person shooter video game that is challenging and engaging.
Output (Storyline based on "Underwater Research Facility"):
In the heart of a murky abyss, lies a dilapidated underwater research facility, standing as a testament to human ambition and its disastrous consequences. Shrouded in darkness, pulsating with the hum of malfunctioning machinery, and haunted by the echoes of drowned souls, the facility is now a breeding ground for a nightmare borne from the deep sea. The player, an elite marine equipped with advanced diving gear and experimental weaponry, is tasked with venturing into this aquatic realm of terror. Armed with resilience and resourcefulness, they must navigate treacherous corridors, solve cryptic puzzles, and confront gruesome sea monstrosities that lurk in the depths. From grappling with the crushing pressure of the deep sea to outsmarting cunning aquatic predators, every moment in this uncharted underworld tests the limits of human endurance and courage.
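Step-back prompting is simply two chained calls: answer the general question first, then feed that answer into the prompt for the specific task. A minimal sketch, again using a hypothetical `generate()` helper:
```python
def generate(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

def step_back_storyline() -> str:
    # Step 1: a general "step-back" question activates background knowledge.
    themes = generate(
        "Based on popular first-person shooter action games, what are 5 fictional "
        "key settings that contribute to a challenging and engaging level storyline "
        "in a first-person shooter video game?"
    )
    # Step 2: the general answer is fed back in as context for the specific task.
    return generate(
        f"Context: 5 engaging themes for a first person shooter video game:\n{themes}\n"
        "Take one of the themes and write a one paragraph storyline for a new level "
        "of a first-person shooter video game that is challenging and engaging."
    )
```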
Chain of Thought (CoT) Prompting
Chain of Thought (CoT) prompting enhances LLM reasoning capabilities by generating intermediate reasoning steps. This method helps LLMs produce more accurate answers, especially for complex tasks requiring reasoning. It is low-effort, effective with off-the-shelf LLMs, and provides interpretability by showing the reasoning steps, aiding in malfunction identification and improving robustness across different LLM versions. A disadvantage is increased output token count, leading to higher costs and longer prediction times.
Example (CoT for Math Problem)
Problem: When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner?
Prompt:
When I was 3 years old, my partner was 3 times my age. Now, I am 20 years old. How old is my partner? Let's think step by step.
Output (with CoT):
1. Let’s assume my current age is “x” years.
2. When I was 3 years old, my age was 3 years.
3. My partner’s age at that time was 3 times my age, which means my partner’s age was 3 * 3 = 9 years.
4. Now, I am 20 years old, which means my age has increased by 20 - 3 = 17 years since I was 3 years old.
5. Since my partner's age also increased by the same amount, my partner's current age is 9 + 17 = 26 years.
Without CoT, the LLM might incorrectly answer 63 years old.
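Programmatically, CoT often amounts to appending a reasoning trigger such as "Let's think step by step." and then extracting the final answer from the longer response. A rough sketch; the answer-extraction heuristic is illustrative, not a standard, and `generate()` is again a placeholder:
```python
import re

def generate(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

def ask_with_cot(question: str) -> tuple[str, str]:
    reasoning = generate(f"{question} Let's think step by step.")
    # Naive extraction heuristic: take the last number in the reasoning as the answer.
    numbers = re.findall(r"\d+", reasoning)
    answer = numbers[-1] if numbers else reasoning.strip().splitlines()[-1]
    return reasoning, answer
```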
Self-Consistency Prompting
Self-consistency is a technique that improves the reliability of LLM responses by prompting the LLM to generate multiple Chain of Thought reasonings for the same question. By comparing these multiple reasoning paths and selecting the most consistent answer, this method enhances accuracy, particularly for complex logical or mathematical tasks. It helps mitigate errors that might arise from a single, flawed reasoning path.
Example (Self-Consistency for Email Classification - Summary)
Prompt:
Classify the importance of an email as IMPORTANT or NOT IMPORTANT. Let's think step by step.
Email: "Hey, I was on your website and noticed a bug in your contact form. It seems like it could be exploited for XSS. Just wanted to let you know. I love your website by the way, been reading it for ages." - Harry the Hacker
Output (Attempt 1 - Example Reasoning):
1. The email mentions a "bug" and "XSS" (Cross-Site Scripting), which are security vulnerabilities.
2. It implies that the bug could be "exploited," indicating a potential threat.
3. The sender identifies as "Harry the Hacker," which could suggest malicious intent or a security researcher trying to help.
4. The bug is in the "contact form," a critical part of a website.
5. The sender says "Just wanted to let you know," which suggests they are providing information, not directly threatening.
6. They also praise the website ("I love your website..."), which might reduce the perceived threat.
Conclusion: IMPORTANT
In the full example, multiple reasoning attempts are generated for the same email; different paths can lead to the same (or different) conclusions, and the most consistent answer across them is chosen for improved accuracy.
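One way to implement self-consistency is to sample several CoT responses at a non-zero temperature, extract each candidate answer, and take a majority vote. A minimal sketch with the hypothetical `generate()` helper:
```python
from collections import Counter

def generate(prompt: str, temperature: float) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

def classify_importance(email: str, samples: int = 5) -> str:
    prompt = (
        "Classify the importance of this email as IMPORTANT or NOT IMPORTANT. "
        f"Let's think step by step.\nEmail: {email}"
    )
    votes = []
    for _ in range(samples):
        # Temperature > 0 so each sample can follow a different reasoning path.
        reasoning = generate(prompt, temperature=0.7).upper()
        votes.append("NOT IMPORTANT" if "NOT IMPORTANT" in reasoning else "IMPORTANT")
    # The answer reached by the most reasoning paths wins.
    return Counter(votes).most_common(1)[0][0]
```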
Tree of Thoughts (ToT) Prompting
Tree of Thoughts (ToT) generalizes Chain of Thought prompting by enabling LLMs to explore multiple distinct reasoning paths concurrently, rather than being confined to a single linear sequence. This approach allows for broader exploration of problem-solving strategies, leading to more robust and comprehensive solutions.
Example
Prompt:
Bob is in the living room. He walks to the kitchen, carrying a cup. He puts a ball in the cup and carries the cup to the bedroom. He turns the cup upside down. He then walks to the garden. He puts the cup down in the garden, then walks to the garage. Where is the ball? Use the Tree of Thoughts prompting technique.
Output:
🌳 Tree of Thoughts: “Where is the ball?”
🌱 Initial Setup
Characters: Bob
Objects: Cup, Ball
Locations: Living Room → Kitchen → Bedroom → Garden → Garage
Key actions: Places ball in cup → Turns cup upside down in bedroom → Leaves cup in garden
🌿 Branch 1: Ball falls out when cup is turned upside down
Thought 1A: When Bob turns the cup upside down in the bedroom, the ball falls out.
✅ Most typical outcome with gravity and no lid mentioned
Result 1A: Ball remains in the bedroom, cup continues with Bob.
🌿 Branch 2: Ball stays in the cup
Thought 2A: Ball somehow doesn’t fall out when inverted.
🤔 Maybe it’s stuck? But no indication of that.
Thought 2B: Cup might be covered or shaped oddly.
⚠️ Speculative, no details support this.
Result 2: Ball continues inside cup to garden, possibly ends up there.
🌿 Branch 3: Ball is intentionally removed later
Thought 3A: Bob interacts with the ball again in garden or garage.
❌ No such action is mentioned.
🌳 Decision Point
Most supported thought path: Branch 1
Conclusion: The ball is in the bedroom, as it most likely fell out there when the cup was turned upside down.
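Under the hood, ToT can be implemented as a search over partial "thoughts": the model proposes several candidate next steps, an evaluator (often the model itself) scores them, and only the most promising branches are expanded. A deliberately simplified breadth-first sketch, with hypothetical prompts and the `generate()` placeholder:
```python
def generate(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

def tree_of_thoughts(problem: str, breadth: int = 3, depth: int = 2, keep: int = 2) -> str:
    frontier = [""]  # each entry is a partial chain of reasoning (a branch)
    for _ in range(depth):
        candidates = []
        for partial in frontier:
            for _ in range(breadth):
                # Propose one more reasoning step along this branch.
                step = generate(
                    f"Problem: {problem}\nReasoning so far:\n{partial}\n"
                    "Propose the next reasoning step:"
                )
                candidates.append(f"{partial}\n{step}".strip())
        # Score each branch and keep only the most promising ones.
        scored = []
        for branch in candidates:
            verdict = generate(
                f"On a scale of 1-10, how promising is this reasoning for solving "
                f"'{problem}'? Reply with a single number.\n{branch}",
                temperature=0.0,
            )
            digits = "".join(ch for ch in verdict if ch.isdigit())
            scored.append((int(digits) if digits else 0, branch))
        frontier = [branch for _, branch in sorted(scored, reverse=True)[:keep]]
    # Answer from the best surviving branch.
    return generate(f"Problem: {problem}\nReasoning:\n{frontier[0]}\nFinal answer:")
```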
ReAct (Reason & Act) Prompting
ReAct (Reason & Act) prompting is a technique that synergizes reasoning and acting in LLMs. It involves interleaving thought (reasoning) and action (interacting with external tools or environments) steps. This allows LLMs to dynamically create, maintain, and execute action plans, handle exceptions, and incorporate observations, leading to more robust and effective problem-solving. In practice, it requires re-sending previous prompts/responses and trimming extra generated content, along with appropriate examples/instructions.
Example (ReAct for a Knowledge-Based Question)
Prompt:
What is the capital of France, and what is its approximate population? Use the ReAct prompting technique.
Output:
Thought: To answer this, I need to identify the capital city of France and find its most recent population estimate.
Action: I searched for the capital of France and its population.
Observation:
The capital of France is Paris.
As of January 2025, Paris has an estimated population of 2,048,472.
The broader Paris metropolitan area has a population of approximately 13.17 million.
Answer: The capital of France is Paris, a city renowned for its culture, history, and global influence. As of early 2025, Paris proper has about 2 million residents, while the greater metropolitan area is home to over 13 million people.
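A ReAct loop alternates model-generated Thought/Action lines with Observations returned by real tools, re-sending the growing transcript each turn and trimming any extra generated text. A minimal sketch with a single hypothetical `search()` tool and the `generate()` placeholder:
```python
def generate(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

def search(query: str) -> str:
    raise NotImplementedError  # placeholder for a real search tool or API

REACT_INSTRUCTIONS = (
    "Answer the question by emitting one line at a time, starting with 'Thought:', "
    "'Action: search[<query>]', or 'Answer:'. After an Action, stop and wait for "
    "an Observation."
)

def react(question: str, max_turns: int = 5) -> str:
    transcript = f"{REACT_INSTRUCTIONS}\nQuestion: {question}\n"
    for _ in range(max_turns):
        # Re-send the growing transcript; keep only the first new line (trimming).
        step = generate(transcript).strip().splitlines()[0]
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action: search["):
            query = step[len("Action: search["):].rstrip("]")
            # The tool's result is appended as an Observation for the next turn.
            transcript += f"Observation: {search(query)}\n"
    return "No answer within the turn limit."
```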
Automatic Prompt Engineering (APE)
Automatic Prompt Engineering (APE) automates the process of generating and optimizing prompts, reducing human input and enhancing model performance across various tasks. This method involves using an LLM to generate multiple prompt variants, evaluating them based on chosen metrics (e.g., BLEU, ROUGE), and selecting the best-performing candidate.
Example (APE for Antonym Generation)
This example illustrates how APE can automatically discover more effective prompts for a specific linguistic task, potentially outperforming human-crafted prompts.
Goal: Find the most effective prompt to make an LLM generate the correct antonym for a given word.
APE Process:
Initial Input-Output Pairs (Training Data for APE): A small dataset is provided, showing a word and its desired antonym.
Input: "hot" -> Output: "cold"
Input: "up" -> Output: "down"
Input: "start" -> Output: "end"
Prompt Generator LLM (Automatic Prompt Generation): An LLM (the "prompt generator") is given these input-output pairs and instructed to generate various prompts that, if used by another LLM, would produce the given output from the given input. It might propose prompts like:
"What is the opposite of the word: [WORD]?"
"Provide an antonym for the following: [WORD]"
"Transform the word [WORD] into its opposite."
"Find the word that means the opposite of [WORD]."
"Give the antonym of [WORD]."
Prompt Evaluator (Testing & Optimization): Each of these generated prompts is then used with the target LLM (the one that will actually generate antonyms). The target LLM attempts to generate antonyms for a new set of words using each prompt.
The generated outputs are compared against the known correct antonyms.
A scoring mechanism (e.g., exact match accuracy, a semantic similarity score) evaluates the performance of each prompt.
Selection & Iteration: The prompt variant that achieves the highest score (meaning it consistently leads the target LLM to produce the correct antonyms) is selected as the optimal prompt. The process can even be iterated, where the best-performing prompts from one round become the "seeds" for generating even more refined prompts in the next round.
This automated process allows for finding highly effective prompts without extensive manual trial-and-error by human prompt engineers, thereby streamlining the optimization of LLM performance for specific tasks.
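The APE loop can be sketched in a few lines: one LLM call proposes candidate prompts from the seed input-output pairs, each candidate is scored on a held-out set (here with simple exact-match accuracy), and the best one is kept. All helper names and data below are illustrative assumptions:
```python
def generate(prompt: str, temperature: float = 0.0) -> str:
    raise NotImplementedError  # placeholder for your LLM API call

TRAIN = [("hot", "cold"), ("up", "down"), ("start", "end")]   # seed pairs
HELD_OUT = [("light", "dark"), ("open", "closed")]            # evaluation pairs

def propose_prompts(n: int = 5) -> list[str]:
    pairs = "\n".join(f"Input: {i} -> Output: {o}" for i, o in TRAIN)
    proposal = generate(
        f"Here are input-output pairs:\n{pairs}\n"
        f"Write {n} different instruction prompts, one per line, that would make a "
        "model map each input to its output. Use [WORD] as the input placeholder.",
        temperature=0.9,  # diversity helps the search over prompt variants
    )
    return [line.strip() for line in proposal.splitlines() if "[WORD]" in line]

def accuracy(prompt_template: str) -> float:
    # Exact-match scoring; BLEU/ROUGE or semantic similarity also work here.
    hits = sum(
        generate(prompt_template.replace("[WORD]", word)).strip().lower() == target
        for word, target in HELD_OUT
    )
    return hits / len(HELD_OUT)

best_prompt = max(propose_prompts(), key=accuracy)  # highest-scoring candidate wins
```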
Code Prompting
Code prompting uses LLMs for a variety of coding tasks, relying on the same underlying language-model capabilities as text prompting.
Prompts for Writing Code: Generate code snippets based on descriptions.
Prompts for Explaining Code: Provide explanations for given code.
Prompts for Translating Code: Convert code from one programming language to another.
Prompts for Debugging and Reviewing Code: Identify and suggest fixes for bugs, and provide general improvements or review comments for code.
Example (Prompts for Debugging and Reviewing Code)
Scenario: A short Python script is intended to add two numbers, but it contains a very basic mistake.
Prompt:
Review this simple Python code. It's supposed to calculate the sum of two numbers. What's wrong with it, and how can I fix it?
num1 = 5
num2 = 10
total = "num1" + "num2" # Bug: String concatenation instead of addition
print(total)
Expected LLM Output (Example):
**What's Wrong:**
The problem is in the line `total = "num1" + "num2"`. Here, `num1` and `num2` are put inside quotation marks, which makes them string literals. When you use the `+` operator with strings, it performs string concatenation (joining them together) instead of numerical addition. So, `total` will become the string "num1num2" instead of the sum of the numbers.
**How to Fix It:**
To fix this, you should remove the quotation marks around `num1` and `num2`. This tells Python to use the values of the variables themselves, not their names as strings.
**Corrected Code:**
```python
num1 = 5
num2 = 10
total = num1 + num2  # Corrected: performs numerical addition
print(total)  # This will now print 15
```
Conclusion
In this part, we delved into a variety of powerful prompting techniques, from the foundational Zero-shot and Few-shot methods to advanced strategies like Chain of Thought, Self-Consistency, and ReAct. The key takeaway is this: Large Language Models are highly adaptable, and by strategically structuring your prompts, you can unlock their full potential.
Remember that each technique serves a unique purpose, enhancing the LLM's ability to reason, generate specific outputs, and even interact with external tools. The art of prompt engineering lies in understanding these methods and choosing the most appropriate one for your task, often through iterative experimentation. Mastering these techniques is fundamental to effectively guiding LLMs to deliver accurate, relevant, and high-quality results.