3 Practical Prompt Engineering Techniques You Can Use Today
There's a lot of hype and grifting in the AI YouTube space right now. People are constantly promoting the "latest and greatest" releases of random GitHub projects like ChatDev, DBGPT, MetaGPT, and countless others. While some of these tools are genuinely useful, most of the content about them is click-bait with little lasting value. Let's face it – in five (probably three or even two) years, nearly all of these projects will be dead.
In this post, I want to cut through that noise and focus on three concrete, practical prompt engineering techniques that deliver immediate value. By the time you finish reading, you'll have three new techniques you can apply right away – all aimed at the underlying technology generating this value in the first place: Large Language Models (LLMs).
The Three Techniques
1. Cap Refs (Capitalized References)
2. One Word Prompts
3. Context Chaining
You've likely already used one or two of these techniques without even knowing it. Let's dive in.
1. Cap Refs (Capitalized References)
When you're building out reusable prompts that require several input variables, it can be hard to organize everything effectively. To get around this, I use a technique I call "Capitalized Referencing" or "Cap Refs."
Example:
Here's a sample chat with ChatGPT using this technique:
```
User: Let's workshop some titles for a new YouTube video. We want high-ranking, SEO-friendly, click-worthy titles. Create
10 highly engaging, click-worthy titles based on EXISTING TITLES that combine one or more VIDEO ELEMENTS below.
When you generate each title, list the elements it uses next to it in parentheses.
EXISTING TITLES:
- How I Made $10,000 with ChatGPT in 30 Days
- 5 Prompt Engineering Techniques That Will Change Your Life
- The Ultimate Guide to Becoming a ChatGPT Power User
VIDEO ELEMENTS:
- Prompt engineering
- Python programming
- Making money online
- AI tools
- Automation
```
The magic here is placing the capitalized references (EXISTING TITLES, VIDEO ELEMENTS) in separate sections and then specifying each variable with its list of items below. This organized structure makes it much easier for both you and the AI to keep track of different inputs.
This approach is particularly useful for repetitive tasks like creating SEO-optimized titles for blogs or YouTube channels, where you want to quickly generate ideas without reinventing the wheel each time.
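If you reuse a Cap Refs prompt frequently, it's easy to template it in code. Here's a minimal Python sketch – the function and section names are my own invention, not part of any library – that assembles the prompt from your input lists:

```
def build_capref_prompt(instruction: str, **sections: list[str]) -> str:
    """Assemble a Cap Refs prompt: the instruction first, then each
    capitalized section followed by its bulleted items."""
    parts = [instruction]
    for name, items in sections.items():
        # e.g. existing_titles -> EXISTING TITLES
        header = name.replace("_", " ").upper()
        bullets = "\n".join(f"- {item}" for item in items)
        parts.append(f"{header}:\n{bullets}")
    return "\n\n".join(parts)

prompt = build_capref_prompt(
    "Create 10 highly engaging, click-worthy titles based on EXISTING TITLES "
    "that combine one or more VIDEO ELEMENTS below.",
    existing_titles=[
        "How I Made $10,000 with ChatGPT in 30 Days",
        "5 Prompt Engineering Techniques That Will Change Your Life",
    ],
    video_elements=["Prompt engineering", "AI tools", "Automation"],
)
print(prompt)
```

Every keyword argument becomes a capitalized section, so adding a new input variable to the prompt is just another argument.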
2. One Word Prompts
There are many scenarios, especially when using GPT-4, where you shouldn't overthink your prompts. If you have a problem you want fixed, sometimes just tossing one word at GPT followed by your content can save you a lot of time and typing.
Examples:
Example 1: Code Testing
```
User: pytest
def calculate_discount(price, discount_percentage):
    """
    Calculate the final price after applying a discount.

    Args:
        price (float): The original price
        discount_percentage (float): Discount percentage (0-100)

    Returns:
        float: Price after discount
    """
    if discount_percentage < 0 or discount_percentage > 100:
        raise ValueError("Discount percentage must be between 0 and 100")

    discount_amount = price * (discount_percentage / 100)
    final_price = price - discount_amount
    return final_price
```
ChatGPT understood exactly what I wanted – to create tests for this function – just from the single word "pytest."
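For reference, the test suite it produces typically looks something like this – a sketch of plausible output (the `discount` module name is my assumption), not the exact response:

```
import pytest
from discount import calculate_discount  # assumes the function lives in discount.py

def test_basic_discount():
    assert calculate_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert calculate_discount(50.0, 0) == 50.0

def test_full_discount():
    assert calculate_discount(80.0, 100) == 0.0

@pytest.mark.parametrize("bad_percentage", [-1, 101])
def test_invalid_percentage_raises(bad_percentage):
    with pytest.raises(ValueError):
        calculate_discount(100.0, bad_percentage)
```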
Example 2: Translation
```
User: Translate
J'aime les conseils pratiques d'ingénierie de prompts.
```
The AI immediately recognized that I wanted to translate the French text to English without my needing to specify languages.
Example 3: Rewording
```
User: reword
How to make your prompts better with these simple techniques
```
Again, with just one word, I got back reworded versions of my title.
The beauty of one-word prompts is that they often get you straight to the point without unnecessary explanation from the AI. They work particularly well when you're pasting in content that contains enough context for the AI to understand what you want.
3. Context Chaining
Moving fast is key to engineering success. We all have 24 hours in a day, but some people get more out of those hours because they move efficiently – felling two trees with one swing. Context chaining saves you time by letting you communicate relevant context incredibly quickly.
Example:
Let's compare two approaches to the same question:
Traditional approach (141 characters):
```
User: I'm building a Python CLI tool. Want to publish it to TestPyPI. I'm using Poetry as my package management tool. How can I publish my package?
```
Context chaining approach (38 characters):
```
User: code python poetry publish to TestPyPI
```
Both prompts will get you similar results, but the context chaining version is just 38 characters – barely more than a quarter of the length of the traditional approach!
The magic of context chaining is that it lets you concisely pack your prompt in a compressed way:
- "code" - We're in the realm of coding
- "python" - Specifically using Python
- "poetry" - Operating with the Poetry package manager
- "publish to TestPyPI" - Our specific goal
After setting up this context chain, the specific action "publish to TestPyPI" makes perfect sense to the AI.
If you're writing 10 or 100 prompts a day, cutting your prompt-writing time by two-thirds or more will help you get much more out of your day than the engineer sitting next to you.
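If you find yourself typing the same chains over and over, you can even script them. Here's a minimal sketch – the helper and the saved chains are my own convention, not a standard tool – that prepends a stored context chain to whatever task you pass in:

```
# Saved context chains for prompts you fire off all day.
# The names and chains here are just examples.
CHAINS = {
    "py": "code python",
    "py-poetry": "code python poetry",
    "sql": "code sql postgres",
}

def chained_prompt(chain_name: str, task: str) -> str:
    """Prepend a saved context chain to a task, e.g.
    chained_prompt('py-poetry', 'publish to TestPyPI')
    -> 'code python poetry publish to TestPyPI'"""
    return f"{CHAINS[chain_name]} {task}"

print(chained_prompt("py-poetry", "publish to TestPyPI"))
```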
Bonus: Combining Techniques
As a final tip, you can combine Context Chaining with Cap Refs to create even more compressed, concise prompts.
Example:
```
User: code python combine FUNC1 and FUNC2 together

FUNC1:
def connect_to_db():
    import sqlite3
    conn = sqlite3.connect('database.db')
    return conn

FUNC2:
def insert_record(name, email):
    cursor = conn.cursor()
    cursor.execute('INSERT INTO users (name, email) VALUES (?, ?)', (name, email))
    conn.commit()
```
This prompt combines the efficiency of Context Chaining with the organization of Cap Refs, allowing the AI to immediately understand that we want to combine two Python functions while clearly defining what those functions are.
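The response you get back is typically a single merged function along these lines – one plausible combination, not the model's verbatim output – which also resolves FUNC2's reliance on an undefined `conn`:

```
import sqlite3

def insert_record(name, email):
    """Connect to the database and insert a user record
    (FUNC1 and FUNC2 merged into one function)."""
    # Assumes a 'users' table with name and email columns already exists.
    conn = sqlite3.connect('database.db')
    cursor = conn.cursor()
    cursor.execute('INSERT INTO users (name, email) VALUES (?, ?)', (name, email))
    conn.commit()
    conn.close()
```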
Summary
To recap, we've covered three powerful prompt engineering techniques:
1. **Cap Refs (Capitalized References)** - Reference input variables in a seamless, organized manner
2. **One Word Prompts** - When you don't need to overthink it, a single word followed by your content can be enough
3. **Context Chaining** - String together key contextual words to dramatically reduce prompt length while maintaining clarity
These techniques work across different LLMs, whether you're using GPT-3.5, GPT-4, Falcon, or any other model. As LLMs continue to improve, they'll need less information to perform effectively, making these concise prompting methods even more valuable.
The future of prompt engineering is all about specifying the right context so models can load the right information to predict the next token. Master these three techniques now, and you'll be ahead of the curve.