I spent the first three months using ChatGPT writing prompts like “write me a sales email.” The output was generic slop that I’d rewrite from scratch anyway. Then I started treating prompts like project briefs — specific, structured, and full of context — and my usable output rate jumped from maybe 20% to over 80%. The tool didn’t change. My inputs did.

Most people blame the AI when they get bad results. The real problem is almost always the prompt. Here’s how to fix that for both text and image generation tools.

Why Most Prompts Fail

Bad prompts share the same three problems: they’re vague, they lack context, and they don’t specify format. Telling an AI to “write a blog post about CRM” is like telling a contractor to “build something nice.” You’ll get something, but probably not what you wanted.

The fix isn’t complicated. You don’t need a PhD in machine learning or a 47-step framework. You need to communicate clearly — the same skill that makes you good at delegating to humans.

The Gap Between Your Brain and the AI’s Output

You have a picture in your head of what you want. The AI has nothing but your words. Every assumption you don’t spell out is a coin flip. If you want a 300-word product description written for technical buyers in a casual tone with three specific features highlighted — say exactly that.

I’ve run CRM implementations where the difference between a helpful AI-generated customer email sequence and a useless one came down to one sentence in the prompt: “The customer just churned from a competitor and is skeptical about switching again.” That context changed everything about the tone and content.

The Anatomy of a Good Text Prompt

Every effective text prompt has five components. You don’t always need all five, but the more you include, the better your results.

1. Role

Tell the AI who it is. “You are a senior CRM consultant with 10 years of experience in B2B SaaS” produces wildly different output than no role at all. The role sets the vocabulary, depth, and perspective.

Example:

  • Weak: “Write about email deliverability.”
  • Strong: “You’re an email marketing manager at a mid-size SaaS company. Write about email deliverability issues that specifically affect transactional emails from CRM platforms.”

2. Task

Be specific about what you want done. “Write” is not specific enough. “Write a 500-word troubleshooting guide” is better. “Write a 500-word troubleshooting guide organized as a numbered checklist with the most common issue first” is best.

3. Context

This is the part most people skip, and it’s the part that matters most. Context includes:

  • Who’s the audience?
  • What do they already know?
  • What’s the situation or scenario?
  • What’s been tried before?

When I’m building email sequences in HubSpot, I’ll include context like: “These leads came from a webinar about data migration. They’re mid-funnel. They’ve already seen our pricing page but haven’t booked a demo.” That one paragraph of context produces emails that actually sound relevant.

4. Format

Specify the structure you want. Bullet points, numbered list, table, paragraph form, Q&A format, email template — tell the AI exactly how to organize the output. If you want headers, say so. If you want a specific word count, include it.

5. Constraints

Constraints are what you don’t want. “Don’t use jargon.” “Keep sentences under 20 words.” “Don’t include a call-to-action.” “Avoid mentioning competitors by name.” Constraints are surprisingly powerful — they prevent the most common failure modes.

Put it all together and you get something like this:

You’re a CRM implementation consultant writing for small business owners who’ve never used a CRM before. Write a 400-word guide explaining how to set up their first sales pipeline. Use a numbered list format with 5-7 steps. Keep the language simple — no technical jargon. Don’t recommend specific tools. End with the single most important thing to get right on day one.

That prompt will produce usable output on the first try at least 75% of the time, regardless of which AI writing tool you’re using.
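The five components can be sketched as a small helper that assembles whichever pieces you have into one prompt. This is a minimal sketch, not any library's API — `build_prompt` and its parameter names are my own, and the text is the example from above.

```python
# Minimal sketch: join the five prompt components (role, task, context,
# format, constraints) into one prompt, skipping any that are empty.
# Function and parameter names are illustrative, not a real library API.

def build_prompt(role="", task="", context="", fmt="", constraints=""):
    parts = [role, task, context, fmt, constraints]
    return "\n\n".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    role="You're a CRM implementation consultant writing for small business "
         "owners who've never used a CRM before.",
    task="Write a 400-word guide explaining how to set up their first sales pipeline.",
    fmt="Use a numbered list format with 5-7 steps.",
    constraints="Keep the language simple -- no technical jargon. "
                "Don't recommend specific tools.",
)
```

Saving a helper like this (or even just the five labeled paragraphs in a doc) makes it easy to notice which component you forgot — usually context.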

Advanced Text Prompting Techniques

Once you’ve got the basics down, these techniques will push your output quality even higher.

Chain-of-Thought Prompting

Instead of asking for the final answer immediately, ask the AI to think through the problem step by step. This is especially useful for analysis, strategy, and complex writing tasks.

Example for CRM data analysis:

I’m going to give you our Q1 sales data. Before making any recommendations, first identify the three most significant patterns in the data. Then explain what might be causing each pattern. Then — and only then — suggest specific changes to our sales process.

This prevents the AI from jumping to generic advice. I’ve used this approach when analyzing pipeline data from Salesforce dashboards, and the recommendations are noticeably more specific when the AI has to show its reasoning first.
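The pattern generalizes: list the reasoning steps explicitly, then gate the final ask behind them. Here's a hedged sketch — `chain_of_thought_prompt` is an illustrative helper, not a standard function, and the step wording is taken from the Q1 example above.

```python
# Sketch: wrap a request in explicit, numbered reasoning steps before the
# final ask, so the model must show its analysis first.
# Helper name is illustrative; the text mirrors the Q1 sales-data example.

def chain_of_thought_prompt(data_description, steps, final_ask):
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"I'm going to give you {data_description}. "
        f"Work through these steps in order before answering:\n"
        f"{numbered}\n"
        f"Only after completing every step, {final_ask}"
    )

prompt = chain_of_thought_prompt(
    "our Q1 sales data",
    ["Identify the three most significant patterns in the data.",
     "Explain what might be causing each pattern."],
    "suggest specific changes to our sales process.",
)
```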

Few-Shot Prompting

Give the AI 2-3 examples of what you want before asking it to produce new output. This works incredibly well for maintaining brand voice, formatting standards, or a specific writing style.

Example:

Here are two customer success stories we’ve published:

[Example 1 — paste 200 words]

[Example 2 — paste 200 words]

Now write a new customer success story following the same structure, tone, and length. The customer is a 50-person accounting firm that reduced their response time by 40% after implementing our CRM.

Few-shot prompting is the single best technique for consistency across multiple pieces of content. If you’re writing 20 product descriptions or 15 case studies, give examples first.
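If you're generating those 20 descriptions programmatically, the few-shot structure is just examples stacked above the new brief. A minimal sketch, assuming you keep your approved examples in a list (placeholder strings stand in for the 200-word stories you'd paste):

```python
# Sketch: build a few-shot prompt from stored examples plus a new brief.
# Placeholder strings stand in for the real pasted examples.

def few_shot_prompt(examples, instruction, brief):
    shots = "\n\n".join(
        f"Example {i}:\n{text}" for i, text in enumerate(examples, 1)
    )
    return f"{shots}\n\n{instruction}\n{brief}"

prompt = few_shot_prompt(
    examples=["<paste success story 1>", "<paste success story 2>"],
    instruction="Now write a new customer success story following the same "
                "structure, tone, and length.",
    brief="The customer is a 50-person accounting firm that reduced their "
          "response time by 40% after implementing our CRM.",
)
```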

Iterative Refinement

Don’t try to get perfect output in one prompt. Use a conversation flow:

  1. First prompt: Get the rough draft
  2. Second prompt: “Make the opening more specific — reference a dollar amount for the cost of bad data”
  3. Third prompt: “The third section is too long. Cut it to 100 words and make it punchier”
  4. Fourth prompt: “Rewrite the CTA to focus on booking a call rather than downloading a guide”

This iterative approach produces better results than one massive, overloaded prompt. I’ve tested this side-by-side — four focused refinement rounds beats one detailed prompt about 70% of the time for content over 1,000 words.
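In a chat-style API, the refinement rounds above are just alternating turns in one conversation. A sketch of the message history, assuming the common `{"role": ..., "content": ...}` message shape — the placeholder `"<model draft here>"` stands in for each real reply:

```python
# Sketch: iterative refinement as a chat-style message list.
# The dict shape matches common chat APIs; refinement texts are the
# three follow-up prompts from the steps above.

messages = [{"role": "user",
             "content": "Write a 1,200-word post on the cost of bad CRM data."}]

refinements = [
    "Make the opening more specific -- reference a dollar amount for the cost of bad data.",
    "The third section is too long. Cut it to 100 words and make it punchier.",
    "Rewrite the CTA to focus on booking a call rather than downloading a guide.",
]

def add_turn(history, assistant_reply, next_request):
    """Append the model's reply, then the next refinement request."""
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": next_request})
    return history

for step in refinements:
    add_turn(messages, "<model draft here>", step)
```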

The “Act As My Editor” Technique

After generating any piece of content, follow up with:

Now review what you just wrote. Identify the three weakest sentences and rewrite them. Flag any claims that need a source or specific data point.

This self-critique step catches a surprising amount of fluff and vagueness. I run this on every AI-generated draft before I edit it myself. It cuts my editing time roughly in half.

Image Prompt Engineering: Different Rules

Text prompting and image prompting share some DNA, but image generation has its own grammar. What works in Midjourney doesn’t always work in DALL-E or Stable Diffusion, but these principles apply broadly.

Structure of an Effective Image Prompt

Image prompts work best when you think in layers:

  1. Subject — What’s the main focus? Be specific. “A woman” is vague. “A woman in her 40s wearing a navy blazer, sitting at a desk with two monitors” gives the AI something to work with.

  2. Setting/Environment — Where is this happening? “Modern office with floor-to-ceiling windows, afternoon light, downtown skyline visible in background.”

  3. Style — What should it look like? Photography, illustration, watercolor, 3D render, flat design? Name specific styles or reference artists (in tools that support it).

  4. Mood/Lighting — This is where amateur prompts fall short. “Warm, soft lighting” vs. “harsh fluorescent lighting” creates completely different emotional responses.

  5. Technical specifications — Aspect ratio, camera angle, depth of field. “Shot from slightly below, shallow depth of field, 85mm lens equivalent” tells the AI exactly what perspective you need.

Example prompt for a CRM-related blog header image:

Professional photograph of a sales team of four people gathered around a laptop screen, pointing at a dashboard with colorful charts. Modern open-plan office, natural light from large windows. Warm color palette, slight depth of field blur on background. Shot at eye level, 35mm lens equivalent. Clean, editorial style similar to business magazine photography.
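The five layers can be assembled mechanically, which helps when you're generating variations. A sketch under one common convention (comma-separated layers, most important first) — `image_prompt` is an illustrative helper, not any generator's API, and the text is the blog-header example above:

```python
# Sketch: assemble an image prompt from the five layers (subject, setting,
# style, mood, technical specs), comma-joined with the subject first.
# Helper name and the comma convention are illustrative, not tool-specific.

def image_prompt(subject, setting, style, mood, technical):
    layers = [subject, setting, style, mood, technical]
    return ", ".join(l.strip().rstrip(".") for l in layers if l.strip())

prompt = image_prompt(
    subject="Professional photograph of a sales team of four people gathered "
            "around a laptop screen",
    setting="modern open-plan office, natural light from large windows",
    style="clean, editorial style similar to business magazine photography",
    mood="warm color palette, slight depth of field blur on background",
    technical="shot at eye level, 35mm lens equivalent",
)
```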

What NOT to Do With Image Prompts

Don’t overload with conflicting descriptors. “Minimalist yet detailed, dark yet bright, modern yet vintage” confuses the model. Pick a direction and commit.

Don’t ignore negative prompts when the tool supports them. In tools like Stable Diffusion and Midjourney, you can specify what to exclude: “no text, no watermarks, no extra fingers, no blurry faces.” Negative prompts clean up the most common generation artifacts.

Don’t use vague aesthetic words alone. “Beautiful” and “amazing” and “stunning” don’t give the AI meaningful information. “High contrast with deep shadows and warm highlights” is actionable.

Style References Beat Descriptions

Most modern AI image generators let you upload reference images or cite specific styles. This produces more consistent results than trying to describe a visual style in words.

If you need images for a brand, create a single reference image you love, then use it as a style anchor for everything else. Consistency across 20 images is nearly impossible with text-only prompts. Reference images solve this.

Practical Image Prompting for Business Use

Here’s what actually matters if you’re generating images for websites, presentations, or social media:

For blog headers: Specify that you need space for text overlay. “Composition with significant negative space on the left third for text overlay” prevents the AI from centering everything.

For social media: Include the aspect ratio upfront. “Square format, 1:1 aspect ratio” or “Vertical 9:16 for Instagram Stories.” Cropping after generation always looks worse than getting the composition right in the prompt.

For product mockups: Be extremely specific about the product placement and environment. “MacBook Pro on a clean white desk, screen showing a CRM dashboard with blue and green elements, overhead angle at 30 degrees, soft shadow beneath laptop.”

Platform-Specific Tips

Different AI tools respond differently to the same prompt. Here’s what I’ve found after extensive use.

ChatGPT / GPT Models

GPT models respond well to system-level instructions and structured prompts. They handle long, detailed prompts without losing focus. If you’re using the API or a custom GPT, set your core instructions in the system prompt and keep your per-message prompts shorter.

One thing GPT does particularly well: maintaining a consistent voice across a long conversation. If you set the tone early and give examples, it’ll hold that voice for dozens of outputs. I use this when generating entire email sequences — set the voice once, then request each email individually.

Claude

Claude handles nuance and complex instructions particularly well. It’s my go-to for tasks that require understanding subtle distinctions — like writing different versions of a sales email for different buyer personas where the differences are subtle but important.

Claude also responds well to “think step by step” instructions. For analytical tasks, explicitly asking Claude to reason through the problem before answering produces notably better results than the same instruction does with some other models.

Midjourney

Midjourney has its own prompt language. Short, punchy prompts often work better than long descriptions. The --style and --ar parameters matter more than adjective stacking. Spend 30 minutes learning Midjourney’s specific parameters — it’s worth more than 30 hours of trial-and-error with prompt wording.

Key Midjourney-specific advice: put the most important elements at the beginning of your prompt. The model weighs early words more heavily.

Common Mistakes That Waste Time and Money

I’ve watched dozens of teams adopt AI tools and make the same mistakes. Here are the ones I see killing productivity.

Mistake 1: The “Kitchen Sink” Prompt

Cramming 15 requirements into a single prompt. The AI tries to satisfy all of them and does none of them well. Better approach: prioritize your top 3 requirements and iterate from there.

Mistake 2: Not Providing Examples

You have a specific output in mind. The AI doesn’t. Showing one example of what you want is worth 100 words of description. Always include examples when you have them.

Mistake 3: Accepting First-Draft Output

Treating AI output as final copy. First-draft output from any AI tool should be a starting point. The real productivity gain comes from getting a solid 70% draft in 30 seconds instead of spending 20 minutes on a blank page. You still need to edit.

Mistake 4: Same Prompt, Different Tools

Using identical prompts across different AI tools and expecting identical results. Each model has strengths. Adapt your prompts to the specific tool. A prompt optimized for GPT-4 might need restructuring for Claude, and it definitely needs restructuring for Midjourney.

Mistake 5: Ignoring Temperature and Settings

Most AI tools let you adjust creativity/randomness settings. For factual content like documentation or data summaries, lower the temperature. For creative work like brainstorming or ad copy, increase it. The default settings are mediocre for everything because they’re trying to be okay for everything.
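One way to make this stick for a team is a simple lookup from task type to temperature. The specific values below are illustrative starting points from my own testing, not published guidance from any vendor:

```python
# Sketch: pick a temperature by task type instead of accepting the default.
# Values are illustrative starting points, not vendor recommendations.

TEMPERATURE_BY_TASK = {
    "documentation": 0.2,   # factual: keep randomness low
    "data_summary": 0.2,
    "sales_email": 0.7,     # some variety, still on-message
    "brainstorm": 1.0,      # creative: let it wander
}

def pick_temperature(task_type, default=0.7):
    return TEMPERATURE_BY_TASK.get(task_type, default)
```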

Building a Prompt Library

The single highest-ROI activity for any team using AI tools: build a shared prompt library. Here’s how.

Start With Your 10 Most Common Tasks

What do you generate most often? For CRM teams, it’s usually:

  1. Follow-up emails after demos
  2. Meeting summary notes
  3. Proposal sections
  4. Customer onboarding messages
  5. Internal status updates
  6. Feature request responses
  7. Help documentation
  8. Social proof / testimonial requests
  9. Re-engagement emails for cold leads
  10. Quarterly business review summaries

Write one optimized prompt for each. Test it 5 times. Refine it. Save it.

Template Variables

Build your prompts with clear variable placeholders:

You’re a customer success manager at [COMPANY]. Write a check-in email to [CUSTOMER_NAME] at [CUSTOMER_COMPANY]. They’ve been using our [PRODUCT] for [TIME_PERIOD]. Their usage data shows [KEY_METRIC]. The tone should be warm but professional. Keep it under 150 words. Include one specific question about their experience with [FEATURE].

Anyone on the team can fill in the brackets and get consistent, quality output. This is how you scale AI usage beyond one power user.
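If you want to fill those brackets programmatically rather than by hand, a few lines of Python cover it. This is a sketch, assuming the `[UPPER_CASE]` placeholder style from the template above; it raises a `KeyError` if a placeholder is left unfilled, which catches half-edited prompts before they're sent:

```python
# Sketch: fill [BRACKET] placeholders from a dict so the saved prompt can
# be reused consistently. Raises KeyError on any unfilled placeholder.
import re

def fill_template(template, values):
    return re.sub(r"\[([A-Z_]+)\]", lambda m: values[m.group(1)], template)

template = ("You're a customer success manager at [COMPANY]. Write a check-in "
            "email to [CUSTOMER_NAME] at [CUSTOMER_COMPANY].")

# Company and customer names below are made up for illustration.
prompt = fill_template(template, {
    "COMPANY": "Acme CRM",
    "CUSTOMER_NAME": "Dana",
    "CUSTOMER_COMPANY": "Lakeside Accounting",
})
```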

Version Control Your Prompts

Prompts evolve. What worked with GPT-4 might need tweaking for newer models. Keep a version history with notes on what changed and why. A simple shared doc works. Fancy tools exist but aren’t necessary — I’ve seen teams get the same results from a well-organized Google Doc as from a $50/month prompt management tool.

Measuring Prompt Quality

How do you know if your prompts are actually good? Track two metrics:

Usable output rate — What percentage of AI outputs can you use with minimal editing (under 5 minutes of changes)? A good prompt should hit 70-80%.

Time to final — How long from prompt submission to finished, published content? If you’re spending 30 minutes editing a 200-word AI-generated email, your prompt needs work. The target for short-form content should be under 10 minutes total, including prompt writing and editing.

Run these numbers for a week. You’ll quickly identify which prompts need improvement and which are already dialed in.
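The two metrics are simple enough to track in a spreadsheet, but here's a sketch of the arithmetic for clarity. Each record is `(minutes_of_editing, total_minutes_to_final)`; the 5-minute threshold comes from the definition above, and the sample week of numbers is made up for illustration:

```python
# Sketch: compute the two prompt-quality metrics over a week of runs.
# Each record: (minutes spent editing, total minutes from prompt to final).
# Sample data is invented for illustration.

def usable_output_rate(runs, edit_threshold_min=5):
    usable = sum(1 for edit_min, _ in runs if edit_min < edit_threshold_min)
    return usable / len(runs)

def avg_time_to_final(runs):
    return sum(total for _, total in runs) / len(runs)

week = [(3, 8), (2, 6), (12, 25), (4, 9), (1, 5)]
rate = usable_output_rate(week)  # 4 of 5 runs needed under 5 min of edits
```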

Putting It All Together

Good prompt engineering isn’t about memorizing formulas. It’s about being specific, providing context, and iterating. Start with your most repetitive task, build one great prompt for it, and measure how much time it saves. Then do it again for the next task.

If you’re choosing between AI tools, check our AI tools comparison page to find the right fit for your workflow — the best prompt in the world won’t fix a tool that’s wrong for your use case. For more on specific CRM applications, see our guides on AI-powered CRM tools.


Disclosure: Some links on this page are affiliate links. We may earn a commission if you make a purchase, at no extra cost to you. This helps us keep the site running and produce quality content.