Summary
Provide examples: Crucial for guiding the LLM.
Simplicity and specificity: Keep prompts clear and concise.
Instructions over constraints: Tell the model what to do.
Control output length: Manage responses for efficiency.
Experimentation: Test different formats and styles.
Adaptation: Stay updated with model changes.
Collaboration: Work with others for better prompts.
Essential Prompting Practices
Finding the right prompt involves experimentation and following key best practices.
1. Provide Examples The most crucial best practice is to include one-shot or few-shot examples within your prompt. Examples serve as powerful teaching tools, showing the model desired outputs and acting as a reference point for improved accuracy, style, and tone.
2. Design with Simplicity Prompts should be concise, clear, and easy to understand for both you and the model. Avoid complex language and unnecessary information. Use action-oriented verbs like "Act," "Analyze," "Generate," "Summarize".
Task: Get a recipe for basic scrambled eggs.
DO:
Prompt: "Give me a simple recipe for scrambled eggs."
DON'T:
Prompt: "I'm feeling a bit peckish this morning, and I've heard that eggs are a good source of protein. Could you perhaps, if it's not too much trouble, provide some culinary guidance on how one might prepare a rudimentary breakfast dish involving eggs, specifically focusing on the scrambled variety? I'm not a gourmet chef, so nothing too elaborate, please. Thanks in advance!"
3. Be Specific About the Output Provide specific details in your prompt (e.g., via system or context prompting) to guide the LLM and improve accuracy. A concise instruction alone might be too generic.
Task: Get a summary of a movie review focused on specific aspects.
DO:
Prompt: "Summarize the following movie review. Focus only on the reviewer's opinion of the protagonist's acting performance and the film's visual effects. Limit the summary to two sentences."
DON'T:
Prompt: "Summarize this movie review."
4. Use Instructions Over Constraints Focus on positive instructions, telling the model what it should do, rather than primarily using constraints (what it should not do). Instructions directly communicate the desired outcome, encourage creativity, and are less prone to confusion or conflicts than long lists of constraints. Constraints are still valuable for safety, clarity, or strict output formats.
Task: Generate a short, positive review for a new coffee shop.
DO:
Prompt: "Write a two-sentence, enthusiastic review for 'The Daily Grind' coffee shop. Highlight its cozy atmosphere and excellent latte."
DON'T:
Prompt: "Write a review for 'The Daily Grind' coffee shop. Do not mention any negative points. Do not talk about the price. Do not make it longer than two sentences. Don't use words like 'okay' or 'decent'."
5. Control the Max Token Length Manage the length of the generated response by setting a maximum token limit in the configuration or explicitly requesting a specific length in your prompt.
Task: Explain a complex concept concisely.
DO:
Prompt: "Explain how photosynthesis works in exactly one sentence."
DON'T:
Prompt: "Explain how photosynthesis works."
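Where the API supports it, the token cap belongs in the request configuration rather than the prompt text. Below is a minimal sketch of such a request payload; the field names `generation_config` and `max_output_tokens` are assumptions modeled on common LLM APIs, so check your provider's documentation for the exact names.

```python
# Hypothetical request payload limiting response length via configuration.
# Field names (generation_config, max_output_tokens) are illustrative
# assumptions, not a specific provider's API.
request = {
    "prompt": "Explain how photosynthesis works.",
    "generation_config": {
        "max_output_tokens": 60,  # hard cap on the generated response length
        "temperature": 0.2,
    },
}
```

A config-level cap truncates output mechanically, while a prompt-level instruction ("in exactly one sentence") shapes the response itself; the two approaches can be combined.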
6. Use Variables in Prompts To create dynamic and reusable prompts, use variables for inputs that change (e.g., city names). This saves time and effort, especially when integrating prompts into applications.
Task: Get a fun fact about different animals.
DO:
Prompt Template: "Tell me one fun fact about the {animal_type}."
Input 1: animal_type = "kitten"
Input 2: animal_type = "elephant"
DON'T:
Prompt (for elephant): "Tell me one fun fact about the elephant."
Prompt (for hummingbird): "Tell me one fun fact about the hummingbird."
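In code, the template approach above is just string substitution. A minimal sketch in Python, using the same `{animal_type}` placeholder:

```python
# Reusable prompt template with a variable slot.
prompt_template = "Tell me one fun fact about the {animal_type}."

# Fill the slot for each input instead of hand-writing a new prompt.
for animal in ("kitten", "elephant"):
    prompt = prompt_template.format(animal_type=animal)
    print(prompt)
```

This keeps the prompt wording in one place, so improving it later updates every call site at once.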
7. Experiment with Input Formats and Writing Styles Different models, configurations, prompt formats, word choices, and styles yield varying results. Experimenting with these attributes is crucial.
Task: Get creative ideas for a children's story about a lost toy.
Approach 1: Direct Question Format
Prompt: "What are some whimsical ideas for a children's story about a teddy bear who gets lost at a carnival?"
Approach 2: Role-Play/Scenario-Based Format
Prompt: "Imagine you are a children's book author brainstorming. Give me three creative plot ideas for a story where a brave teddy bear named Barnaby gets separated from his owner at a bustling carnival. Each idea should have a unique twist."
8. For Few-Shot Prompting with Classification Tasks, Mix Up the Classes While the order of few-shot examples usually doesn't matter much, when performing classification tasks, ensure that you mix up the possible response classes within your examples. This prevents overfitting to a specific order and helps the model learn the key features of each class, leading to more robust performance on unseen data. A good starting point is 6 few-shot examples for testing accuracy.
Task: Classify an animal as "Mammal" or "Bird".
Prompt:
Classify the following animals:
Animal: "Eagle"
Class: Bird
Animal: "Dolphin"
Class: Mammal
Animal: "Sparrow"
Class: Bird
Animal: "Bat"
Class: Mammal
Animal: "Penguin"
Class: Bird
Animal: "Kangaroo"
Class: Mammal
Animal: "Lion"
Class:
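Assembling a few-shot classification prompt like the one above can be automated, shuffling the examples so the model never sees a fixed class ordering. A minimal sketch, using the same animals:

```python
import random

# Few-shot examples as (animal, class) pairs.
examples = [
    ("Eagle", "Bird"), ("Dolphin", "Mammal"), ("Sparrow", "Bird"),
    ("Bat", "Mammal"), ("Penguin", "Bird"), ("Kangaroo", "Mammal"),
]
random.shuffle(examples)  # mix up the class order across runs

# Build the prompt, ending with the unlabeled item for the model to classify.
lines = ["Classify the following animals:"]
for animal, cls in examples:
    lines.append(f'Animal: "{animal}"\nClass: {cls}')
lines.append('Animal: "Lion"\nClass:')
prompt = "\n".join(lines)
print(prompt)
```

Because the shuffle happens per run, repeated evaluations also reveal whether your results depend on example order, which they ideally should not.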
9. Adapt to Model Updates Stay informed about changes in model architecture, training data, and capabilities. Experiment with newer model versions and adjust your prompts to leverage new features. Tools like Vertex AI Studio can help store, test, and document prompt versions.
10. Experiment with Output Formats Beyond input format, experiment with output formats. For non-creative tasks (extracting, selecting, parsing, ordering, ranking, categorizing data), structured formats like JSON or XML are beneficial. Returning JSON objects, for instance, can simplify data handling, allow for sorted data, and crucially, force the model to adhere to a structure, limiting hallucinations.
Task: Extract product information from a customer's message. By specifying JSON output, the LLM is forced to return data in a parseable and consistent format. This is extremely valuable for automated processing (e.g., feeding into a database, another application, or a fulfillment system) and significantly reduces the chance of hallucinations or unstructured text that is hard to work with programmatically.
DO:
Prompt: "Extract the product name and quantity from the following message and return it as a JSON object with 'product' and 'quantity' keys. If quantity is not specified, assume 1. Message: 'I'd like to order 3 units of the Wireless Headphones Pro.'"
DON'T:
Prompt: "Extract the product name and quantity from the following message."
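Downstream, the structured response from the DO prompt is trivial to consume. A minimal sketch, where `raw_response` stands in for the text the model returned:

```python
import json

# Stand-in for the model's JSON response to the extraction prompt.
raw_response = '{"product": "Wireless Headphones Pro", "quantity": 3}'

# Parse into a dict; a free-text answer would need brittle string parsing instead.
order = json.loads(raw_response)
product = order["product"]
quantity = order.get("quantity", 1)  # default to 1 if the key is missing
print(product, quantity)
```

In production you would also wrap `json.loads` in a try/except, since even a well-prompted model can occasionally return malformed JSON.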
11. Experiment Together with Other Prompt Engineers Collaborating with multiple prompt engineers can lead to more effective prompts.
Conclusion: The Path to Prompting Mastery
In conclusion, Part 3 has provided a robust framework of best practices that are indispensable for effective prompt engineering. While LLMs are powerful, their true potential is unlocked through the deliberate application of these strategies. By consistently providing clear examples, keeping prompts simple and specific, favoring instructions over constraints, and embracing continuous experimentation and adaptation, you can significantly improve the quality, reliability, and relevance of your LLM-generated outputs. These practices are not just guidelines but essential tools for navigating the complexities of AI interaction and achieving mastery in prompt engineering.