In the world of prompt engineering, how you design your input to an LLM (like OpenAI’s GPT models) greatly affects the output. Two foundational techniques are:
- Zero-shot prompting: Ask the model to perform a task without any examples.
- Few-shot prompting: Provide the model with a few examples of the task so it can mimic the pattern.
These techniques are essential when you're using LLMs for text classification, generation, summarization, translation, or any custom NLP task.
In zero-shot prompting, you give the model only the instruction, and it relies on its pre-trained knowledge to generate a response.
✅ Example Task: Sentiment Classification
from openai import OpenAI
client = OpenAI()
prompt = "Classify the sentiment of the following sentence as Positive, Negative, or Neutral:\n\n'Stunning visuals but the plot was boring.'"
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,
)
print(response.choices[0].message.content)
🧾 Output:
Negative
🔍 Why it works: The model has seen enormous amounts of sentiment-bearing text during pre-training, so it can apply that knowledge even when we provide no examples in the prompt.
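A common stylistic variant, not shown in the original example, moves the instruction into a system message and leaves only the sentence in the user turn. A minimal sketch:

# Variant: instruction as a system message (an illustrative sketch, not from the original post)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Classify the sentiment of the user's sentence. Reply with exactly one word: Positive, Negative, or Neutral."},
        {"role": "user", "content": "Stunning visuals but the plot was boring."},
    ],
    temperature=0.0,
)
print(response.choices[0].message.content)  # expected: Negative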
In few-shot prompting, we include a few labeled examples in the prompt so the model better understands the format, structure, and expectations of the task.
✅ Example: Same Sentiment Classification, Now Few-Shot
few_shot_prompt = """
Classify the sentiment of each sentence as Positive, Negative, or Neutral.
Example 1:
Sentence: "I love this movie, it was fantastic!"
Sentiment: Positive
Example 2:
Sentence: "The product is okay, nothing special."
Sentiment: Neutral
Example 3:
Sentence: "Worst customer service I've ever experienced."
Sentiment: Negative
Now classify this sentence:
Sentence: "Stunning visuals but the plot was boring."
Sentiment:
"""
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0.0,
)
print(response.choices[0].message.content)
🧾 Output:
Negative
🧠 Why it can work better: The examples make the input-output format and the task intent explicit, which tends to improve accuracy, especially on domain-specific tasks.
| Technique | Best For |
| --- | --- |
| Zero-shot | Simple or general tasks where model knowledge is sufficient |
| Few-shot | Custom formats, domain-specific tasks, or when extra guidance is needed |
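To make the comparison concrete, here is a small helper that can run either style from the same pieces. classify_sentiment is a hypothetical name of my own, a sketch rather than anything from the original post:

def classify_sentiment(sentence, examples=None, model="gpt-3.5-turbo"):
    """Classify sentiment zero-shot (examples=None) or few-shot (examples given)."""
    prompt = "Classify the sentiment of each sentence as Positive, Negative, or Neutral.\n\n"
    for i, (s, label) in enumerate(examples or [], 1):
        prompt += f'Example {i}:\nSentence: "{s}"\nSentiment: {label}\n\n'
    prompt += f'Now classify this sentence:\nSentence: "{sentence}"\nSentiment:'
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip()

# Zero-shot:
print(classify_sentiment("Stunning visuals but the plot was boring."))
# Few-shot:
print(classify_sentiment(
    "Stunning visuals but the plot was boring.",
    examples=[("I love this movie, it was fantastic!", "Positive"),
              ("The product is okay, nothing special.", "Neutral")],
))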
💡 Tip: If performance is still lacking, consider fine-tuning or tool-augmented prompting (such as RAG or frameworks like LangChain).
To run the examples above, install the OpenAI Python SDK:
pip install openai
And set your API key (the OpenAI client reads OPENAI_API_KEY from the environment automatically, so prefer exporting it in your shell over hardcoding it in source):
import os
os.environ["OPENAI_API_KEY"] = "your_api_key_here"  # placeholder key, for demonstration only
As a bonus, you can build the few-shot prompt programmatically from a list of labeled examples:
examples = [
    ("I love this!", "Positive"),
    ("Not worth the money.", "Negative"),
    ("It's just okay.", "Neutral"),
]
new_sentence = "The acting was good but the story lacked depth."
few_shot_dynamic = "Classify sentiment as Positive, Negative, or Neutral.\n\n"
for i, (s, label) in enumerate(examples, 1):
    few_shot_dynamic += f"Example {i}:\nSentence: \"{s}\"\nSentiment: {label}\n\n"
few_shot_dynamic += f"Now classify this sentence:\nSentence: \"{new_sentence}\"\nSentiment:"
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_dynamic}],
    temperature=0.0,
)
print(response.choices[0].message.content)
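The builder above always uses the same fixed examples. A retrieval-style refinement, in the spirit of the RAG tip earlier, selects the examples most similar to each new input. Below is a minimal sketch assuming OpenAI's embeddings endpoint and numpy; the embed helper, the top_k value, and the variable names are my own illustration, not from the original post:

import numpy as np

def embed(texts, model="text-embedding-3-small"):
    """Embed a list of strings with the OpenAI embeddings endpoint."""
    resp = client.embeddings.create(model=model, input=texts)
    return np.array([d.embedding for d in resp.data])

# Score each candidate example against the new sentence by cosine similarity
pool_vecs = embed([s for s, _ in examples])
query_vec = embed([new_sentence])[0]
sims = pool_vecs @ query_vec / (
    np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec)
)

# Keep the top_k most similar examples (top_k=2 is an illustrative choice)
top_k = 2
selected = [examples[i] for i in np.argsort(sims)[::-1][:top_k]]

# Rebuild the few-shot prompt from the selected examples only
retrieval_prompt = "Classify sentiment as Positive, Negative, or Neutral.\n\n"
for i, (s, label) in enumerate(selected, 1):
    retrieval_prompt += f'Example {i}:\nSentence: "{s}"\nSentiment: {label}\n\n'
retrieval_prompt += f'Now classify this sentence:\nSentence: "{new_sentence}"\nSentiment:'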
- Zero-shot is fast, simple, and surprisingly effective for common tasks.
- Few-shot gives you more control, especially when task format matters.
These are the foundations of prompt engineering—mastering them sets you up for advanced tools like LangChain, LangGraph, or Retrieval-Augmented Generation (RAG).