Prompting and prompt engineering are easily the most in-demand skills of 2023. The rapid growth of Large Language Models (LLMs) has given rise to a new AI discipline called prompt engineering. In this article, let's take a brief look at what prompting is, what prompt engineers do, and the different elements of a prompt that a prompt engineer works with.
The art of designing, or engineering, these inputs to best suit the problem at hand has given birth to a fairly new discipline called prompt engineering.
Before diving into prompt engineering, let's understand the motivation, or the need to engineer prompts, with examples. Say I want to summarise a given passage. I give a long passage from Wikipedia as input and end with, "summarise the above paragraph". This way of providing simple instructions in the prompt to get an answer from the LLM is known as instruction prompting.
Let's move to a slightly more complicated case of mathematics and ask the LLM to multiply two numbers. Let's try the prompt, "what is 2343*1232". The answer we get is "23431232", which is obviously not the product of the two numbers but the two numbers concatenated.
Now let me modify the prompt and add an extra line to be more specific: "what is 2343 multiplied by 1232. Give me the exact answer after multiplication". We now get "2886576", the right answer from the LLM.
So, clearly, the quality of the model's output is determined by the quality of the prompt. This is where prompt engineering comes into play. The goal of a prompt engineer is to assess the quality of a model's output and identify areas of improvement in the prompt to get better results. Prompt engineering is thus a highly experimental discipline: studying the capabilities and limitations of the LLM by trial and error, with the intention of both understanding the LLM and designing good prompts.
Prompts can be instructions where you ask the model to do something. In our example, we provided a huge body of text and asked the model to summarise it.
Prompts can optionally include a context for the model to better serve you. For example, if I have questions about say, English heritage sites, I can first provide a context like, “English Heritage cares for over 400 historic monuments, buildings and places — from world-famous prehistoric sites to grand medieval castles, from Roman forts … “ and then ask my question as, “which is the largest English heritage site?”
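To make this concrete, here is a minimal sketch of how a context block can be prepended to a question before sending it to a model. The helper name `build_context_prompt` and the `Context:`/`Question:` labels are my own illustration, not part of any library:

```python
def build_context_prompt(context: str, question: str) -> str:
    """Prepend background context so the model can ground its answer in it."""
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"


# Hypothetical usage with the English Heritage example above
prompt = build_context_prompt(
    "English Heritage cares for over 400 historic monuments, buildings and places.",
    "Which is the largest English Heritage site?",
)
print(prompt)
```

The resulting string is what you would pass to the LLM as the full prompt.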
As part of the prompt, you can also instruct on the format in which you wish to see the output. And so a prompt can optionally have an output indicator. For example, you can ask “I want a list of all the English heritage sites in England, their location and specialty. I want the results in tabular format.”
Desired format:
Sites: <comma_separated_list_of_sites>
Location: -||-
Speciality: -||-
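An output indicator like the one above can be appended to any base prompt programmatically. A small sketch of that idea, where `add_output_indicator` is a hypothetical helper of my own and the generated placeholders are simplified:

```python
def add_output_indicator(prompt: str, fields: list[str]) -> str:
    """Append a 'Desired format' block listing the fields we want back."""
    lines = [prompt, "", "Desired format:"]
    lines += [f"{field}: <{field.lower()}>" for field in fields]
    return "\n".join(lines)


print(add_output_indicator(
    "List all the English heritage sites in England.",
    ["Sites", "Location", "Speciality"],
))
```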
A prompt can include one or more pieces of input data, where we provide example inputs to show the model what is expected. In the case of sentiment classification, take a look at this prompt, where we provide examples to show our intentions and also specify that we don't want any explanation in the response:
Text: Today I saw a movie. It was amazing.
sentiment: Positive
Text: I don't feel very good after seeing that incident.
sentiment:
This way of giving examples in the prompt is similar to how we explain things to humans by showing examples. In the prompting world it's called few-shot prompting. We provide high-quality examples containing both the input and the output of the task. This way the model understands what you are after and responds far better.
Text: Today I saw a movie. It was amazing.
sentiment: Positive
Text: I don't feel very good after seeing that incident.
sentiment: Negative
Text: Lets party this weekend to celebrate your anniversary.
sentiment: Positive
Text: Walking in that neighbourhood is quite dangerous.
sentiment: Negative
Text: I love watching tennis all day long
sentiment:
Text: I love watching tennis all day long
sentiment:
This is zero-shot prompting, where you don't provide any examples but still expect the model to answer properly. Typically, while prompt engineering, you start with zero-shot as it's simpler and, based on the response, move on to few-shot by providing examples to get a better response.
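The only difference between zero-shot and few-shot is whether the prompt carries worked examples. A minimal sketch of building both from the same function (the function name and structure are my own, not a library API):

```python
def sentiment_prompt(query: str, examples=()) -> str:
    """Build a sentiment prompt; an empty examples list gives a zero-shot prompt."""
    lines = []
    for text, label in examples:
        lines += [f"Text: {text}", f"sentiment: {label}"]
    lines += [f"Text: {query}", "sentiment:"]
    return "\n".join(lines)


zero_shot = sentiment_prompt("I love watching tennis all day long")
few_shot = sentiment_prompt(
    "I love watching tennis all day long",
    examples=[
        ("Today I saw a movie. It was amazing.", "Positive"),
        ("Walking in that neighbourhood is quite dangerous.", "Negative"),
    ],
)
```

With this, adding or removing shots is just a matter of changing the `examples` list.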
If you wish to jump to a specialised topic with the LLM, you can steer it to be an expert in a field straight away by assigning it a role. This is called role prompting.
Or it could be slightly more complicated, such as asking the LLM to act as a Linux terminal and providing specific instructions to copy the first 10 lines of a file into a different file and save it. You can even prevent it from including any other text in the output by explicitly asking it not to give any explanation.
You are a poet.
Write a poem about AI Bites
Act as a linux terminal
I want you to provide the shell command to read the contents of a file named "input.txt".
Copy the first 10 lines to a different file with the name "new.txt" and save it.
Do not give any explanations.
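In chat-style APIs, a role prompt is usually carried in a "system" message rather than mixed into the user text. A minimal sketch of that widely used system/user message convention (the helper `role_messages` is my own illustration):

```python
def role_messages(role: str, task: str) -> list[dict]:
    """Express a role prompt as chat messages: the role goes in the system turn."""
    return [
        {"role": "system", "content": f"You are a {role}."},
        {"role": "user", "content": task},
    ]


messages = role_messages("poet", "Write a poem about AI Bites")
```

The `messages` list is the shape most chat completion endpoints expect as input.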
Extract locations from the below text
Desired format:
Cities: <comma_separated_list_of_cities>
Countries: <comma_separated_list_of_countries>
Input: Although the exact age of Aleppo in Syria is unknown, an ancient temple discovered in the city dates to around 3,000 B.C. Excavations in the 1990s unearthed evidence of 5,000 years of civilization, dating Beirut, which is now Lebanon's capital, to around 3,000 B.C.
Then there is something called the stop sequence, which hints to the model to stop generating text because it has finished the output. You may choose any symbol of your choice as the stop sequence, but a new line seems to be the usual option.
Text: "Banana",
Output: "yellow \n"
Text: "Tomato",
Output: "red \n"
Text: "Apple",
Output: "red \n"
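Conceptually, a stop sequence just tells the decoder to cut generation at the first occurrence of the chosen text. A rough simulation of that behaviour (illustrative only; real APIs apply the stop sequence during decoding, typically via a `stop` parameter):

```python
def apply_stop_sequence(generated: str, stop: str = "\n") -> str:
    """Truncate text at the first occurrence of the stop sequence."""
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]


# Everything after the newline stop sequence is dropped
print(apply_stop_sequence("yellow \nText: ..."))
```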
You can even prompt for code generation by simply writing a comment describing what you want, in the comment style of your target language:
/*
Get the name of the user as input and print it
*/
# get the name of the user as input and print it
With all that introduction to prompts, prompt engineering and their types, we have only scratched the surface here. For example, how can we ask the LLM to reason about a given situation? There are more advanced ways to prompt, like chain-of-thought, self-consistency, generated knowledge, etc. Let's have a look at those in the upcoming posts and videos. Please stay tuned!