Chain-of-Thought prompting is a technique where we guide an AI to think step-by-step before giving a final answer.
Instead of this:
Q: What’s 15% of 200?
A: 30
We get this:
Analyse: "The question asks for a percentage calculation."
Think: "I know that 15% of 200 is calculated as (15/100) × 200."
Calculate: "That equals 30."
Validate: "Double-checked. The math is correct."
Result: "Final answer: 15% of 200 is 30."
See the difference? CoT adds transparency, explainability, and often improves accuracy.
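The arithmetic in the example is easy to sanity-check ourselves. Here's a one-line helper (`percent_of` is an illustrative name, not from any library):

```python
def percent_of(pct: float, value: float) -> float:
    """Return pct% of value, e.g. 15% of 200."""
    return (pct / 100) * value

print(percent_of(15, 200))  # 30.0
```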
LLMs are incredibly powerful—but when given vague or complex tasks, they might guess or hallucinate. CoT prompting helps by:
✅ Making LLMs explain their thought process
✅ Helping users understand how answers are formed
✅ Reducing errors and hallucinations
✅ Supporting multi-step reasoning (math, logic, ethics, code debugging, etc.)
Let’s say we want to create an assistant that doesn’t blurt out answers but breaks them down like a teacher. We’ll give it a fixed structure:
Step 1: Analyse — What is the user asking?
Step 2: Think — How can I approach this?
Step 3: Reflect — Any edge cases or checks?
Step 4: Output — Provide the calculated answer.
Step 5: Validate — Does the answer make sense?
Step 6: Result — Final polished explanation.
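The six-step structure above can be sketched as data, along with a tiny validator for the `{"step": ..., "content": ...}` objects the model will emit. This is a hypothetical helper for illustration (`STEPS` and `is_valid_step` are made-up names, not part of the OpenAI API):

```python
# The six fixed steps, in the order the assistant must follow.
STEPS = ["analyse", "think", "reflect", "output", "validate", "result"]

def is_valid_step(obj: dict, expected_index: int) -> bool:
    """Check that a parsed JSON object names the step we expect next."""
    return (
        isinstance(obj.get("step"), str)
        and isinstance(obj.get("content"), str)
        and obj["step"] == STEPS[expected_index]
    )

print(is_valid_step({"step": "analyse", "content": "..."}, 0))  # True
print(is_valid_step({"step": "result", "content": "..."}, 0))   # False
```

A check like this is handy in the real loop: if the model skips a step or invents a new one, you can catch it and re-prompt instead of printing garbage.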
In prompt engineering, how you ask matters. Here's how we guide the LLM to reason properly.
You are an AI tutor who solves problems by thinking aloud step-by-step.
Always respond with one step at a time in JSON format.
Start with "analyse", then go through "think", "reflect", "output", "validate", and "result".
We use a system prompt like a “personality blueprint” that tells the AI how to behave.
Here’s a beginner-friendly version using Python + OpenAI API:
from openai import OpenAI
import json
client = OpenAI()
system_prompt = """
You are an assistant that solves problems by thinking step-by-step:
analyse → think → reflect → output → validate → result.
Return one step at a time as JSON.
"""
messages = [{"role": "system", "content": system_prompt}]
query = input("Ask a question: ")
messages.append({"role": "user", "content": query})
while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=messages,
    )
    step_data = json.loads(response.choices[0].message.content)
    messages.append({"role": "assistant", "content": json.dumps(step_data)})
    print(f"🔁 {step_data['step'].capitalize()}: {step_data['content']}")
    if step_data["step"] == "result":
        break
✅ This loop lets the model reason one step at a time, and we stop when the final result is ready.
Input: What’s the square of 9?
Analyse: The user wants to find the square of a number.
Think: I know squaring means multiplying the number by itself.
Reflect: 9 × 9 = 81, that seems correct.
Output: The square of 9 is 81.
Validate: 9 squared is indeed 81.
Result: The final answer is 81, as 9 × 9 = 81.
Now THAT is a trustworthy assistant.
import json
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
client = OpenAI()
system_prompt = """
You are an AI assistant who is an expert in breaking down complex problems and then resolving the user query.
For the given user input, analyse the input and break down the problem step by step.
Think through at least 5-6 steps on how to solve the problem before solving it.
The steps are: you get a user input, you analyse, you think, you think again several times, then you return an output with an explanation, and finally you validate the output before giving the final result.
Follow the steps in sequence, that is "analyse", "think", "output", "validate" and finally "result".
Rules:
1. Follow the strict JSON output as per the output schema.
2. Always perform one step at a time and wait for the next input.
3. Carefully analyse the user query.
Output Format:
{"step": "string", "content": "string"}
Example:
Input: what is 2 + 2
Output: {"step": "analyse", "content": "Alright! The user is interested in a maths query and is asking a basic arithmetic operation."}
Output: {"step": "think", "content": "To perform the addition I must go from left to right and add all the operands."}
Output: {"step": "output", "content": "4"}
Output: {"step": "validate", "content": "Seems like 4 is the correct answer for 2 + 2."}
Output: {"step": "result", "content": "2 + 2 = 4, and that is calculated by adding all the numbers."}
"""
messages = [
{"role":"system", "content": system_prompt},
]
query = input("> ")
messages.append({"role":"user", "content": query})
while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=messages,
    )
    parsed_response = json.loads(response.choices[0].message.content)
    messages.append({"role": "assistant", "content": json.dumps(parsed_response)})
    # Keep looping through intermediate steps; stop once the model
    # emits the final "result" step.
    if parsed_response.get("step") != "result":
        print(f"🧠: {parsed_response.get('content')}")
        continue
    print(f"🤖: {parsed_response.get('content')}")
    break