What is prompt engineering?
Prompt engineering is the practice of writing instructions for AI language models to get useful, consistent results. It's the difference between asking "tell me about invoices" and getting a generic essay, versus asking "extract the invoice number, date, and total from this document and return them as JSON" and getting exactly what you need.
It's not rocket science, but it does require some craft. The way you structure, phrase, and constrain your prompts has a massive impact on output quality.
Why prompts matter
Language models are general-purpose tools. They can write poetry, debug code, or summarise legal documents — depending entirely on what you ask. The prompt is your lever.
In a business context, poorly written prompts lead to:
- Inconsistent outputs that require manual review
- Hallucinated facts presented as truth
- Verbose, vague, or off-topic responses
- Missed edge cases in automated workflows
Well-engineered prompts lead to AI that behaves predictably, follows your rules, and produces output you can actually use.
Core techniques
1. Be specific
Vague prompts get vague answers. Tell the model exactly what you want, in what format, and what to avoid.
Bad: "Summarise this document."
Better: "Summarise this document in 3 bullet points. Focus on financial impacts. Use plain English."
2. Set a role
Giving the model a persona or role helps it calibrate tone and expertise level. "You are an experienced employment lawyer reviewing this contract" produces very different output from a generic prompt.
3. Use system prompts
In production systems, system prompts set the baseline behaviour — tone, constraints, output format, safety rules. User prompts then provide the specific task. Separating these gives you consistent behaviour across different queries.
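As a sketch of that separation, here is one way to pair a fixed system prompt with per-query user prompts. The role-based message format mirrors what most chat-style model APIs use; the company name and rules are illustrative, and the actual API call is out of scope.

```python
# Sketch: a constant system prompt sets baseline behaviour;
# each user prompt supplies only the specific task.

SYSTEM_PROMPT = (
    "You are a support assistant for Acme Ltd. "  # hypothetical company
    "Answer in plain English, in at most 3 sentences. "
    "If you are unsure, say so rather than guessing."
)

def build_messages(user_query: str) -> list[dict]:
    """Combine the shared system prompt with one specific user task."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("How do I reset my password?")
```

Because the system prompt never changes between queries, behaviour stays consistent while the user prompt varies.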
4. Few-shot examples
Show the model what good output looks like. Provide 2–3 examples of input/output pairs before giving it the real task. This is often more effective than lengthy instructions.
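A minimal sketch of assembling such a few-shot prompt from input/output pairs (the example emails and labels here are invented for illustration):

```python
# Sketch: build a few-shot prompt by listing worked examples
# before the real query, in a consistent Input/Output layout.

EXAMPLES = [
    ("The delivery arrived two weeks late.", "complaint"),
    ("Can I get a quote for 50 licences?", "sales_enquiry"),
]

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    parts = [task, ""]
    for text, label in examples:
        parts.append(f"Input: {text}")
        parts.append(f"Output: {label}")
        parts.append("")
    # The real query ends at "Output:" so the model completes the label.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Classify each email. Return only the category name.",
    EXAMPLES,
    "My invoice total looks wrong.",
)
```

Ending the prompt at "Output:" nudges the model to continue the established pattern rather than explain itself.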
5. Constrain the output
Tell the model what format to use (JSON, bullet points, table), what length to target, and what to exclude. "Do not include opinions" or "respond only with the extracted data" can prevent rambling.
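Format constraints are best enforced on both sides: instruct the model in the prompt, then parse defensively in code. One common wrinkle is that models sometimes wrap JSON in markdown code fences even when told not to; a sketch of handling that:

```python
import json

# Sketch: defensively parse a model reply that was asked to
# "respond only with JSON" but may arrive wrapped in ``` fences.

def parse_json_reply(raw: str) -> dict:
    text = raw.strip()
    if text.startswith("```"):
        # Drop an opening fence like ```json and the trailing ```
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

reply = '```json\n{"total_amount": 120.5}\n```'
data = parse_json_reply(reply)
```

If `json.loads` still fails, that is a signal to retry the request or route the item to manual review rather than guess.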
6. Chain of thought
For complex reasoning tasks, ask the model to "think step by step" or "explain your reasoning before giving the final answer." This improves accuracy on multi-step problems.
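One practical pattern is to pair the "think step by step" instruction with a fixed marker so the final answer can be pulled out of the reasoning programmatically. The marker string below is our own convention for this sketch, not a model feature:

```python
# Sketch: a chain-of-thought wrapper. The model reasons freely,
# then emits its conclusion after a marker we can extract.

MARKER = "FINAL ANSWER:"

def cot_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Think step by step. Explain your reasoning, then give your "
        f"conclusion on a new line starting with '{MARKER}'."
    )

def extract_answer(model_output: str) -> str:
    """Return the text after the marker, or the whole output if absent."""
    if MARKER in model_output:
        return model_output.split(MARKER, 1)[1].strip()
    return model_output.strip()

# Illustrative model output, not a real completion:
sample = "Step 1: 12 * 3 = 36.\nStep 2: 36 + 4 = 40.\nFINAL ANSWER: 40"
answer = extract_answer(sample)
```

This way the reasoning improves accuracy but only the conclusion flows into downstream systems.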
Business examples
Some real-world prompt patterns we use in production:
- Invoice extraction: "You are a data extraction system. Read the following invoice text and return a JSON object with: invoice_number, date, vendor_name, line_items (array), total_amount. If a field is not found, return null."
- Email classification: "Classify the following email into one of these categories: billing, technical_support, sales_enquiry, complaint, other. Return only the category name."
- Knowledge Q&A: "Answer the user's question using only the context provided below. If the answer is not in the context, say 'I don't have enough information to answer that.' Always cite the source document."
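Prompts like the classification one above pair naturally with a validation step in code: the model is told to return only a category name, and anything outside the allowed set is coerced to a safe default rather than trusted blindly. A minimal sketch, using the category list from the example:

```python
# Sketch: validate the email-classification output. The set mirrors
# the categories named in the prompt; unexpected replies become "other".

CATEGORIES = {"billing", "technical_support", "sales_enquiry", "complaint", "other"}

def normalise_category(model_output: str) -> str:
    label = model_output.strip().lower()
    return label if label in CATEGORIES else "other"

billing = normalise_category("  Billing ")
unknown = normalise_category("spam/unsolicited")
```

The same idea applies to the invoice example: check that every expected JSON key is present (even if null) before the record enters your workflow.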
Common mistakes
- Too vague: "Help me with my data" — the model has no idea what you actually want.
- Too long: Prompts that are pages long often confuse the model. Keep instructions clear and structured.
- No examples: Describing the format you want is harder (and less reliable) than showing it.
- No constraints: Without boundaries, models default to verbose, generic responses.
- One-and-done: Good prompts are iterated on. Test with real data, review outputs, and refine.
Beyond prompts
Prompt engineering is important, but it's just one part of a production AI system. For reliable business applications, you also need:
- RAG to give the model access to your data (see What Is RAG?)
- Guardrails to validate outputs and catch errors
- Evaluation pipelines to measure answer quality systematically
- Human review for high-stakes decisions
Think of prompts as the steering wheel: you still need the engine, the brakes, and the road.
Key takeaways
- A well-written prompt dramatically changes the quality of AI output — it's not just about what you ask, but how.
- System prompts set the context, role, and constraints. User prompts provide the specific task.
- Few-shot examples (showing the model what good output looks like) are one of the most reliable techniques.
- For production systems, prompt engineering is an ongoing process — you iterate based on real outputs.