
Prompting: The Art of Communicating with AI

  • Writer: Oskar Schiermeister
  • 2 days ago
  • 5 min read

You've been sitting in front of ChatGPT for 20 minutes. Third attempt, and still garbage. The AI gives you some generic text that has nothing to do with what you actually need. You rephrase. Again. And again. Eventually you think: "I'll just do it myself." Sound familiar?


Then keep reading. Because the problem isn't you, it's the way you're talking to the AI. The good news: With a few smart frameworks and tactics, these moments can be reduced to almost zero. Not only does this prevent the AI from delivering useless results, it also drastically improves your output.


The secret? Prompting, or as it's called in professional jargon: Prompt Engineering. But what exactly makes a prompt "good"?



What Is a Good Prompt?


Most people think a prompt is a question you ask the AI. That's true, but it falls short. A prompt is an instruction. And as with any instruction: The clearer you say what you want, the better the result.


The problem? Most prompts are too vague and too imprecise. They leave the AI too much room for interpretation, and the AI almost always interprets wrong.

A good prompt does two things: It ensures maximum quality in the output and makes sure the process runs smoothly, without five follow-up questions and three rounds of corrections.

Good prompting is like a brief glimpse into the future. You know what output you want, which details matter, and how the result will solve your problem. And that's exactly what you communicate. Sounds logical, but how do you actually implement it?


The Three Core Principles


Before we get to specific frameworks, there are three principles that make every prompt better. They sound simple, but most people ignore them.


Clarity means saying exactly what you want. Avoid vague instructions like "Write something good" or "Do your best." The AI can't read minds, and the clearer the instruction, the better the result.


Intent means stating the purpose of your request. Why do you need this output? Who is it for? A prompt like "Help me compare these options so I can present them to my supervisor" delivers better results than "Compare these options." The AI understands the context and adjusts tone and depth accordingly.


Specificity means providing important constraints: length, style, audience, format, tone. The more context the AI has, the more precisely it can respond. Without these details, it has to guess, and it usually guesses wrong.


Three principles that make sense. But how do you put them into a structure you can remember?


The CLEAR Framework


One of the most effective frameworks for structured prompts is CLEAR. It summarizes the key elements in five letters:


  • Context describes the background and situation. Who are you? What do you need the result for?

  • Length defines length and level of detail. 100 words or 1,000?

  • Expectations define what a good result looks like. Factual, creative, persuasive?

  • Action clearly states what the AI should do. Write, analyze, compare?

  • Refinements leave room for improvements. Allow follow-up questions, announce iteration.
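The five CLEAR elements can be sketched as a small prompt template. This is purely illustrative: the class and field names below are hypothetical, not part of any official tool, and the rendering is just string concatenation.

```python
# Illustrative sketch: assemble a CLEAR-structured prompt from its five parts.
# The dataclass and its field names are hypothetical, not from any library.
from dataclasses import dataclass

@dataclass
class ClearPrompt:
    context: str       # background and situation
    length: str        # desired length and level of detail
    expectations: str  # what a good result looks like
    action: str        # what the AI should actually do
    refinements: str   # room for follow-up questions and iteration

    def render(self) -> str:
        # Join the parts into one instruction sent as a single prompt.
        return " ".join([self.context, self.action, self.length,
                         self.expectations, self.refinements])

prompt = ClearPrompt(
    context="I'm creating a blog post for our company; the audience is marketing beginners.",
    action="Write an introduction to current social media marketing trends.",
    length="Write about 150 words.",
    expectations="Professional but easy to understand, with no unexplained jargon.",
    refinements="If anything is unclear, ask before you write.",
).render()
print(prompt)
```

The point of the structure isn't the code itself — it's that filling in five named slots forces you to supply the context, length, expectations, action, and refinements every time, instead of remembering them by chance.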


Before vs. After: Comparing Prompts

Theory is good, practice is better. Here's a concrete example:

Typical Prompt:

"Write me a short introduction about social media marketing trends. Should be for beginners."

Optimized Prompt (using CLEAR):

"I'm creating a blog post for our company. The target audience is marketing beginners with no prior knowledge. Write an introduction of about 150 words to current social media marketing trends. The tone should be professional but easy to understand, avoiding jargon unless it's explained. If anything is unclear, ask before you write."

The difference? The second prompt gives the AI everything it needs: context, clear length, quality criteria, a specific task, and room for questions. The AI doesn't have to guess, it can deliver.


A Real-World Example: The Meeting Summary


Frameworks are helpful, but nothing replaces actual application. Here's an example from typical everyday work:


The situation: After an hour-long project meeting, the results need to be summarized and sent to everyone involved. The notes are chaotic, time is short.


The first prompt:

"Summarize this meeting."

The result: A generic summary without structure that hides important decisions and next steps somewhere in the body text. Colleagues have to read the entire text to find the points relevant to them.


The optimized prompt using CLEAR:

"I just led a project meeting and need to send the results to the team. The recipients are both team members and department management, who only need the key points. Create a summary of maximum 300 words. The summary should be clearly structured and easy to scan, with a professional but not overly formal tone. Organize the summary into the following sections: Key takeaways (the three most important results), decisions made, open items, and next steps with responsibilities. Here are my notes: [insert notes]. If any information is missing, let me know."

The result: A structured summary that anyone can skim in 30 seconds. Decisions and responsibilities are immediately visible. Management can see at a glance what's important, and the team knows who needs to do what by when.


What this example shows: The time investment for a good prompt is maybe two minutes more. But it saves 20 minutes of rework and delivers a result that everyone involved can actually use.


Six Best Practices


In addition to the CLEAR framework, there are some habits that make the difference between mediocre and excellent prompts:

  1. Be unambiguous and say exactly what you want. "Do your best" is not an instruction.

  2. Break down complex tasks by asking for step-by-step guides or numbering subtasks.

  3. Provide examples, because "Here's a good response style, copy this structure" works surprisingly well.

  4. Define format and constraints like length, tone, and structure (table, list, prose).

  5. Iterate and refine, because the first draft is rarely perfect. Give feedback and let the AI improve.

  6. Request explanations with instructions like "Explain step by step how you arrived at this result" to drastically increase quality.
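Practice 2, breaking down complex tasks, can be sketched as turning one oversized request into a sequence of focused prompts sent in order. The helper function below is a hypothetical illustration; any chat interface would simply receive these messages one at a time.

```python
# Illustrative sketch of practice 2: split one complex request into
# numbered subtasks instead of a single oversized prompt.
# The function name and wording are assumptions, not an official API.
def split_into_steps(task: str, subtasks: list[str]) -> list[str]:
    """Turn a task plus its subtasks into a sequence of focused prompts."""
    prompts = [f"We will work on this task in steps: {task}"]
    for i, sub in enumerate(subtasks, start=1):
        prompts.append(f"Step {i}: {sub} Keep your answer focused on this step only.")
    return prompts

steps = split_into_steps(
    "Summarize our project meeting for the team.",
    ["Extract the three key takeaways from my notes.",
     "List the decisions made and who owns each next step.",
     "Condense everything into a summary of at most 300 words."],
)
```

Each step gives you a natural checkpoint to review and correct the output before moving on, which is exactly the iteration practice 5 recommends.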


Common Mistakes


Most prompting mistakes can be traced back to a few patterns.

  • Vagueness is the most common mistake. "Help me with my project" gives the AI nothing to work with. Context, goal, and desired outcome should always be provided.

  • Wanting everything at once is the second common mistake. Cramming a complex task into a single prompt overwhelms even a capable model. It's better to break the task into steps.

  • Not providing examples is the third mistake. The AI learns from patterns, and one example says more than a hundred words of description.

  • Blind trust is dangerous because AI hallucinates. It invents things that sound convincing but are wrong. Always verify.

  • Not iterating wastes potential. Prompting is a dialogue. Give feedback, refine, improve.

All these mistakes can be avoided. But there's one risk that even the best prompt can't eliminate.


A Word on Hallucinations


AI models are getting better, but they're not perfect. They can assert things with confidence that are simply wrong. This happens especially often with specific numbers and data, with niche topics that have limited training data, and with current events depending on the model's knowledge cutoff.


The rule is: Trust but verify. Especially with important or specialized outputs, a human should always review.


Prompting isn't magic, it's a craft. A craft that can be learned.


The basics are simple: clarity, intent, specificity. With frameworks like CLEAR, you structure your thoughts. With iteration and feedback, you improve results. And with a healthy dose of skepticism, you avoid the pitfalls.


Those who internalize these principles will find that the frustration with AI disappears. What remains is a powerful tool that actually delivers what you need.


Your next step: Take two minutes on your next AI project and consciously apply the CLEAR framework. Write Context, Length, Expectations, Action, and Refinements explicitly into your prompt. Compare the result with what you would have gotten otherwise. The improvement will convince you.

 
 