Prompt Engineering That Actually Works

12 April 2026 · claude · tutorial · core-workflows

Series: Claude Learning Journey · Core Workflows

Prompts are instructions. Most people write them like they are writing a letter to a human who will read between the lines. Claude is not a human. It reads exactly what you write, takes it literally, and has no idea what you meant if what you said was ambiguous. That is both the problem and the advantage of prompt engineering — you are not negotiating with a person, you are programming with words.

The difference between a good prompt and a bad one

A bad prompt is vague. “Help me with this code” is a bad prompt. Claude will help, but it will guess what kind of help you need, which model to use, and what level of detail to provide. You might get a 10-line summary when you needed a full refactor, or a wall of text when you just needed a one-liner.

A good prompt tells Claude what you want, what format you want it in, and what context it needs. Specificity is not about length — it is about clarity.

Compare these:

Bad: “Write a function to process some data.”

Good: “Write a Python function called process_batch that takes a list of dictionaries with keys user_id and amount, filters out any dicts where amount is negative or zero, sums the remaining amount values grouped by user_id, and returns a list of {"user_id": x, "total": y} dicts sorted by total descending. Include type hints and a docstring.”

The second version would produce exactly what you wanted. The first version would produce something that might be right, or might not.
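For reference, here is roughly the function the specific prompt pins down. This is a sketch of one correct answer, not Claude's literal output:

```python
from collections import defaultdict


def process_batch(records: list[dict]) -> list[dict]:
    """Sum positive amounts per user, sorted by total descending.

    Filters out entries where "amount" is zero or negative, groups the
    rest by "user_id", and sums the amounts for each user.
    """
    totals: defaultdict = defaultdict(float)
    for rec in records:
        if rec["amount"] > 0:  # drop negative or zero amounts
            totals[rec["user_id"]] += rec["amount"]
    return sorted(
        ({"user_id": uid, "total": total} for uid, total in totals.items()),
        key=lambda d: d["total"],
        reverse=True,
    )
```

Every design decision in that code — the name, the filter rule, the grouping key, the sort order, the type hints — was made by the prompt, not guessed by the model.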

The four components of a solid prompt

1. What you want the model to do. Be explicit. Not “analyse this” but “identify the three most expensive operations in this function and explain why each one is slow.”

2. The format or structure you expect. “Give me a markdown table with columns for filename, line count, and last modified date.” If you want bullet points, say so. If you want code, say so and specify the language.

3. Context and constraints. What is the surrounding situation? Are there files the model should read? A particular framework in use? Constraints like “do not use external libraries” or “keep it under 50 lines”?

4. Who the output is for. Stating the audience changes the level of detail. “Explain this to a junior developer” produces different output than “explain this to a systems engineer reviewing a production incident.”
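The four components can be assembled almost mechanically. A minimal sketch — the specific wording of each string is illustrative, not prescribed:

```python
# 1. What you want the model to do
task = ("Identify the three most expensive operations in this function "
        "and explain why each one is slow.")
# 2. The format you expect
fmt = "Answer as a markdown table with columns: operation, why it is slow, suggested fix."
# 3. Context and constraints
context = "The function uses pandas; do not suggest switching libraries."
# 4. Who the output is for
audience = "Pitch the explanation at a junior developer."

prompt = "\n\n".join([task, fmt, context, audience])
print(prompt)
```

You rarely need all four every time, but when an answer disappoints, checking which of the four you left out is a fast diagnostic.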

Iterative prompts beat long prompts

You do not need to get the prompt perfect the first time. One of the most useful patterns is to send a first prompt, read what Claude returns, then send a follow-up that refines based on what you saw.

You: "Write a Python script to paginate through the GitHub API and collect all commits for a repo."
Claude: [writes a script]
You: "Good, but it doesn't handle rate limiting. Add exponential backoff with a maximum of 5 retries, and log a warning when it falls back."
Claude: [updated script]

This iterative approach is closer to how you actually work with a colleague than writing a 500-word brief upfront. It is also more resilient because you can steer Claude away from wrong turns before it goes too far down a path.
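The behaviour requested in that follow-up might look like the sketch below — just the retry wrapper, not the full GitHub script, and `RateLimitError` is a placeholder for whatever the real client raises on HTTP 429:

```python
import logging
import time


class RateLimitError(Exception):
    """Placeholder for the API client's rate-limit exception."""


def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Call fn(), retrying on RateLimitError with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries: let the caller see the error
            delay = base_delay * 2 ** attempt  # 1s, 2s, 4s, 8s, 16s
            logging.warning("Rate limited; retrying in %.0fs", delay)
            time.sleep(delay)
```

Notice how the follow-up prompt specified everything that matters here: the strategy (exponential), the cap (5 retries), and the observability (a logged warning).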

Saying what you do not want

Claude is literal. If you say “do not use regex”, it will not use regex. If you say “do not explain the theory, just the code”, it will give you code only. The negative constraint is a precise tool — use it.

Common negative constraints:

  • “Do not use external libraries”
  • “Do not explain, just show the code”
  • “Do not change the existing function signature”
  • “Do not add anything not directly requested”

Chain-of-thought for complex tasks

For multi-step problems, asking Claude to think through its reasoning before giving the final answer leads to better results. A simple trigger such as “think step by step” or “walk me through your reasoning” produces more careful, more accurate responses.

You: "Two trains leave stations A and B heading towards each other. Train A is travelling at 80km/h, Train B at 60km/h. They start 280km apart. A bird flies back and forth between them at 120km/h until they meet. How far does the bird travel?"
Claude: Thinks step by step... [gives correct answer]
Claude without chain-of-thought: [often gives wrong answer]
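This puzzle shows why stepwise reasoning matters: the tempting route is to sum an infinite series of bird legs, but reasoning in order reveals a shortcut, namely working out when the trains meet and multiplying by the bird's speed. The arithmetic checks out:

```python
# Trains close at 80 + 60 = 140 km/h, so 280 km takes 2 hours to cover.
closing_speed = 80 + 60              # km/h
time_to_meet = 280 / closing_speed   # hours
bird_distance = 120 * time_to_meet   # the bird flies the whole time
print(bird_distance)                 # prints 240.0
```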

Try it yourself

Pick a task you do regularly — code review, writing test cases, explaining a piece of code to a colleague. Write the prompt as minimally as you usually would, send it to Claude, then write a second, more explicit version and compare the quality of the outputs. Notice where the ambiguity was costing you useful output.

The goal is not to write elaborate prompts. It is to close the gap between what you meant and what Claude understood.

What’s Next

Now that you know how to communicate clearly with Claude, the next skill is getting it to work with your actual files. In the next post we will cover file editing — reading, writing, and navigating your codebase directly.


Part of the Claude Learning Journey series · Next: File Editing: Reading, Writing, and Navigating Code