#6: Prompt balancing
Happy Monday folks.
We know y’all hate Mondays, so we brought you an easy-to-digest tutorial.
So let’s see what we have for today.
But first let’s push the easy button.
Prompt balancing
Last week we learnt about few-shot prompting and how we can teach certain patterns and rules to ChatGPT/GPT-3.
One thing we didn’t cover was how to balance these examples.
Why is this important?
It matters because we don’t want to accidentally teach the Large Language Model patterns we never intended.
Look at the example below.👇
Here, in the first case, the Large Language Model can pick up on a pattern of three identical sentiments in a row: Neg → Neg → Neg → Pos → Pos → Pos. This can create a scenario where a negative input is falsely categorised as positive.
Shuffling the examples into a random order, e.g. Neg → Pos → Pos → Neg → Neg, solves this issue, as the model can no longer pick up on any sequence we left in our prompt by mistake.
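Here’s a minimal sketch in Python of how you might do this when building a few-shot sentiment prompt. The example texts, labels, and prompt format below are made up for illustration; the only point is that the labelled examples get shuffled before they go into the prompt.

```python
import random

# Hypothetical labelled examples for a few-shot sentiment prompt.
# The texts and labels here are illustrative, not from the original post.
examples = [
    ("The food was cold and the service was slow.", "Negative"),
    ("I waited an hour and nobody helped me.", "Negative"),
    ("This was the worst purchase I've ever made.", "Negative"),
    ("Absolutely loved it, would buy again!", "Positive"),
    ("The staff were friendly and helpful.", "Positive"),
    ("Great value for the price.", "Positive"),
]

# Shuffle so the labels don't appear as three negatives followed by
# three positives -- a run the model could latch onto by mistake.
random.shuffle(examples)

# Build the few-shot prompt, ending with the input we want classified.
prompt_lines = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
prompt_lines.append("Text: The delivery arrived broken and late.\nSentiment:")
prompt = "\n\n".join(prompt_lines)

print(prompt)
```

If you want the shuffle to be reproducible between runs, seed it with random.seed() first, and it’s worth eyeballing the shuffled order once to make sure it didn’t land on a long run of identical labels by chance.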
Hope you found this useful!
Let me know what you think or if you have any questions - I personally go through all my emails!
Best,
Gabor Soter, PhD
A little about me:
did my PhD at Europe’s largest AI and robotics research lab
worked as a software engineer and CTO at Y Combinator-backed and AI startups
at my previous startup, my team worked with OpenAI