ChatGPT Prompt Engineering
WIP.. fair warning 😊
Summary of Jules White's prompt pattern categories:
| Pattern Category | Prompt Pattern |
| --- | --- |
| Input Semantics | Meta Language Creation |
| Output Customization | Output Automater |
| | Persona |
| | Visualization Generator |
| | Recipe |
| | Template |
| Error Identification | Fact Check List |
| | Reflection |
| Prompt Improvement | Question Refinement |
| | Alternative Approaches |
| | Cognitive Verifier |
| | Refusal Breaker |
| Interaction | Flipped Interaction |
| | Game Play |
| | Infinite Generation |
| Context Control | Context Manager |
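As a rough illustration of how a couple of these patterns read in practice, here are example phrasings sketched as Python strings. The exact wording below is my own and only loosely adapted from the pattern descriptions; treat it as illustrative, not canonical:

```python
# Illustrative (hypothetical) phrasings for two patterns from the catalog.
# The wording is flexible; what matters is the structure of the request.
EXAMPLE_PROMPTS = {
    "Persona": (
        "From now on, act as a senior security auditor. "
        "Provide the outputs that such an auditor would provide "
        "when reviewing the code I paste."
    ),
    "Flipped Interaction": (
        "I would like you to ask me questions to diagnose why my "
        "web server is slow. Ask one question at a time until you "
        "have enough information, then state the likely cause."
    ),
}

for pattern, prompt in EXAMPLE_PROMPTS.items():
    print(f"{pattern}: {prompt}")
```

The Persona pattern tells the model *who* to be; Flipped Interaction reverses the usual flow so the model drives the questioning.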
What you can see here is that much of the power of these large language models comes from not treating them as one-off tools, where you ask a question, get a response that isn't very good, and give up. We should always be thinking about how to take the response and use it to inform the next question or statement in the conversation, and how to give the model feedback on what it did well and what it didn't. That's how we get the really useful products. The wrong mindset is to treat the model like a hammer: strike once, and if it doesn't give us what we want, throw it on the floor. The better mindset is a chisel: we have to keep chipping away at the rock to get the really beautiful output. If we aren't continuing the conversation, asking follow-up questions, problem-solving within the dialogue, working around roadblocks, and reshaping what we're given into formats that are useful to us, we're missing the underlying power and capabilities of these large language models.
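The iterative mindset above boils down to keeping conversation state and extending it turn by turn. A minimal sketch, where `call_model` is a hypothetical stand-in for whatever chat API you use (the `echo_model` below is a toy so the sketch runs without an API key):

```python
from typing import Callable

def refine(call_model: Callable[[list], str],
           initial_prompt: str,
           followups: list[str]) -> list[dict]:
    """Run a multi-turn conversation, feeding each model reply back
    as context for the next follow-up question or piece of feedback."""
    messages = [{"role": "user", "content": initial_prompt}]
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    for followup in followups:
        # Each follow-up sees the full history, so the model can
        # build on (or correct) its earlier answers.
        messages.append({"role": "user", "content": followup})
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
    return messages

# Toy stand-in model: just echoes the last user message.
def echo_model(messages: list) -> str:
    return f"(reply to: {messages[-1]['content']})"

history = refine(echo_model, "Summarize the prompt patterns.",
                 ["Shorten that to one sentence.",
                  "Now format it as a bullet list."])
print(len(history))  # 6 messages: 3 user turns + 3 assistant replies
```

The point of the sketch is structural: the follow-ups ("shorten it", "reformat it") are exactly the chiseling described above, and they only work because each call carries the whole conversation forward.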
Jules White's paper: https://arxiv.org/pdf/2302.11382.pdf
Chain of thought prompting and its benefits: https://arxiv.org/pdf/2201.11903.pdf