# Few-Shot Prompting
Published on: 05 October 2025
Tags: #few-shot-prompting #ai
## The Spectrum of Prompting: Zero, One, and Few-Shot
```mermaid
graph TD
    subgraph Prompting Spectrum
        A[Start: Define Task] --> B{Provide Examples?};
        B -- No --> C["Zero-Shot Prompt<br/>Instruction + Query"];
        B -- Yes --> D{How Many Examples?};
        D -- One --> E["One-Shot Prompt<br/>Instruction + 1 Example + Query"];
        D -- Multiple --> F["Few-Shot Prompt<br/>Instruction + 2+ Examples + Query"];
    end
    C --> G([LLM Generates Output]);
    E --> G;
    F --> G;
```
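The three points on the spectrum differ only in how many worked examples precede the query. A minimal sketch in Python, assuming a hypothetical sentiment-classification task (the instruction, examples, and labels below are illustrative, not from the original post):

```python
# Build zero-, one-, and few-shot prompts for the same task by varying
# only the number of in-context examples.

def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: instruction, optional worked examples, then the query."""
    lines = [instruction]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}")
    lines.append(f"Text: {query}\nLabel:")  # model completes after the final 'Label:'
    return "\n\n".join(lines)

instruction = "Classify the sentiment of the text as Positive or Negative."
examples = [
    ("I loved this movie!", "Positive"),
    ("The service was terribly slow.", "Negative"),
]

zero_shot = build_prompt(instruction, [], "The plot was gripping.")
one_shot = build_prompt(instruction, examples[:1], "The plot was gripping.")
few_shot = build_prompt(instruction, examples, "The plot was gripping.")
```

The same helper produces all three prompt styles; only the `examples` list changes, which mirrors the branching in the diagram above.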
## How In-Context Learning Works (Simplified)
```mermaid
graph TD
    subgraph Prompt Processing
        A["User Prompt<br/>Instruction: Translate English to French.<br/>Example 1: sea otter -> loutre de mer<br/>Example 2: cheese -> fromage<br/>Query: peppermint -> ???"]
    end
    subgraph LLM Internal Process
        B["1. Tokenization & Embedding<br/>Prompt is converted into numerical vectors"]
        C{2. Transformer Attention Layers}
        D["Self-Attention Mechanism<br/>The vector for 'peppermint' (Query)<br/>attends to vectors for 'sea otter' & 'cheese' (Keys)<br/>to understand the 'English -> French' pattern from the examples (Values)."]
    end
    subgraph Output Generation
        E["3. Contextualized Representation<br/>Model understands the task based on context"]
        F["4. Generated Output<br/>menthe poivrée"]
    end
    A --> B --> C --> D --> E --> F
```
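The attention step in the diagram can be sketched numerically: a query vector scores against key vectors, the scores are normalized with softmax, and the result is a weighted mix of value vectors. A toy single-query sketch with made-up 2-d vectors (real models use hundreds of dimensions and learned projections):

```python
import math

def softmax(xs):
    """Normalize scores into attention weights that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical vectors for tokens in the translation prompt above.
query = [1.0, 0.0]                # 'peppermint' (the new query token)
keys = [[0.9, 0.1], [0.8, 0.2]]   # 'sea otter', 'cheese' (the examples)
values = [[0.0, 1.0], [0.0, 0.9]] # information each example contributes

# Dot-product scores between the query and each key.
scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
weights = softmax(scores)

# The query's contextualized representation: a weighted sum of the values.
context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(2)]
```

This is only the attention arithmetic; in a real transformer it runs in parallel across many heads and layers, which is how the "English -> French" pattern gets absorbed from the examples.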
## Few-Shot Prompting vs. Fine-Tuning
```mermaid
graph TD
    A[Pre-trained LLM] --> B(Adapt to New Task);
    B --> C{Few-Shot Prompting};
    B --> D{Fine-Tuning};
    subgraph "Few-Shot Prompting Workflow"
        C --> C1["1. Craft a prompt with a few examples"];
        C1 --> C2["2. Send prompt to the model<br/>(Inference)"];
        C2 --> C3["3. Get task-specific output"];
        C3 --> C4["Result: No model weights are updated"];
    end
    subgraph "Fine-Tuning Workflow"
        D --> D1["1. Prepare a large labeled dataset"];
        D1 --> D2["2. Train the model on the new dataset<br/>(Training)"];
        D2 --> D3["3. A new, fine-tuned model is created"];
        D3 --> D4["Result: Model weights are updated"];
    end
```
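The key contrast in the two workflows is whether the model's weights change. A stub-model sketch to make that concrete (the class and its methods are hypothetical stand-ins, not a real training loop or inference API):

```python
# Contrast the two adaptation paths: prompting is pure inference,
# fine-tuning produces a model with different weights.

class StubLLM:
    def __init__(self):
        self.weights = {"w": 1.0}  # stand-in for billions of parameters

    def generate(self, prompt: str) -> str:
        # Inference only: reads the prompt, never touches self.weights.
        return f"<completion for {len(prompt)}-char prompt>"

    def fine_tune(self, dataset) -> "StubLLM":
        # Training: returns a new model whose weights have been updated.
        tuned = StubLLM()
        tuned.weights = {"w": self.weights["w"] + 0.1 * len(dataset)}
        return tuned

base = StubLLM()
before = dict(base.weights)
base.generate("Instruction + few examples + query")  # few-shot prompting path
after_prompting = dict(base.weights)                 # unchanged

tuned = base.fine_tune([("input", "label")] * 3)     # fine-tuning path
```

Prompting leaves `base.weights` untouched no matter how many times `generate` runs, while `fine_tune` yields a separate model, which is exactly the "Result" distinction in the diagram.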
## Advanced Few-Shot Techniques
```mermaid
graph TD
    A["Base Prompt<br/>(Instruction + New Query)"]
    subgraph "Standard Few-Shot"
        B["+ Few Input/Output Examples"] --> C["Standard Few-Shot Prompt"]
    end
    subgraph "Chain-of-Thought (CoT)"
        D["+ Examples with Reasoning Steps"] --> E["Few-Shot CoT Prompt<br/>(Teaches the model HOW to think)"]
    end
    subgraph "Retrieval-Augmented Generation (RAG)"
        F["+ Retrieved External Documents"] --> G["RAG-Enhanced Prompt<br/>(Gives the model relevant knowledge)"]
    end
    A --> B
    A --> D
    A --> F
```
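Of the three variants, chain-of-thought differs from standard few-shot only in the examples themselves: each one includes the reasoning steps, not just the answer. A minimal sketch, assuming illustrative arithmetic word problems and a simple Q/A format (none of which is a fixed standard):

```python
# Build a few-shot chain-of-thought prompt: worked examples show the
# reasoning, so the model is nudged to reason before answering.

cot_examples = [
    (
        "Q: A box holds 3 bags of 4 apples. How many apples in total?",
        "A: Each bag has 4 apples and there are 3 bags, so 3 * 4 = 12. The answer is 12.",
    ),
    (
        "Q: Tom had 10 pens and gave away 4. How many are left?",
        "A: He started with 10 and gave away 4, so 10 - 4 = 6. The answer is 6.",
    ),
]

def build_cot_prompt(examples, query: str) -> str:
    """Interleave worked Q/A pairs (with reasoning) before the new question."""
    blocks = [f"{q}\n{a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")  # model continues with its own reasoning
    return "\n\n".join(blocks)

prompt = build_cot_prompt(cot_examples, "A shelf has 5 rows of 6 books. How many books?")
```

A RAG-enhanced prompt would follow the same pattern but prepend retrieved document snippets instead of reasoning traces, matching the third branch of the diagram.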