Prompt engineering – becoming an AI whisperer

[Draft 1-17-2025]

Introduction

So, prompt engineering [1] is much in the news: the craft of wrangling a generative AI into producing desirable results, of getting it to “deliver the goods.” And perhaps not just information, but information with a chosen style, or tailored to your audience or personal context (like a butler or assistant that knows you really well, eh).

And, yes, there’re a bunch of books with titles like The AI Whisperer, …

Whisperer

b : a person who is unusually skilled at calmly guiding, influencing, or managing other people [or AIs?]

c : a person considered to possess some extraordinary skill or talent in managing or dealing with something specified.

Kudos to ZDNET (David Gewirtz) for some excellent articles on becoming an AI whisperer, outlining the craft: what you need to know, things to avoid, the process, the tools, reasonable expectations, and decision points (e.g., how to avoid “sour grapes”).

Table of contents

  • Introduction
  • ZDNET’s overview
  • Tom’s Guide Face-off – ChatGPT vs Grok
  • eWeek’s How to Become a Prompt Engineer
  • Forbes’ 10 Things ChatGPT Can Do
  • Forbes’ Success in an AI-driven World
  • TBS articles [or additional references in TBS comments]

ZDNET’s overview

The ZDNET article “The five biggest mistakes” (cited below, after the tips & quotes) provides a framework for creating successful prompts (but without examples): a summary of tips on how to avoid GIGO (garbage in, garbage out), plus a table of the “Biggest Prompting Mistakes.”

There are links to additional articles that provide some examples: for personal planning (preparing for a marathon, learning a language for a trip, understanding a business technology) and for creative writing (excerpted below).

• “7 ways to write better ChatGPT prompts – and get the results you want faster” by David Gewirtz, Senior Contributing Editor (Dec 16, 2024)

Tips

  • Talk to the AI like you would a person
  • Set the stage and provide context
  • Tell the AI to assume an identity or profession
  • Keep ChatGPT on track
  • Tell the AI to re-read the prompt
  • Don’t be afraid to play & experiment
  • Refine & build on previous prompts

Additional tips – quotes

(quote re specifying the level of complexity)

You can directly specify the complexity level by including it in your prompt. Add “… at a high school level” or “… at a level intended for a Ph.D. to understand” to the end of your question. You can also increase the complexity of output by increasing the richness of your input. The more you provide in your prompt, the more detailed and nuanced ChatGPT’s response will be. You can also include other specific instructions, like “Give me a summary,” “Explain in detail,” or “Provide a technical description.”
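
As a minimal illustration (mine, not the article’s), the same tip can be baked into a small Python helper that appends an audience qualifier and a response-style instruction to a question before you paste or send it to a chatbot; the audience wording here is just an example:

    # Minimal sketch: append a complexity qualifier and a response-style
    # instruction to a question before sending it to your chatbot of choice.
    AUDIENCE = {
        "high_school": "at a high school level",
        "phd": "at a level intended for a Ph.D. to understand",
    }

    def build_prompt(question, audience, style="Give me a summary"):
        """Return the question plus audience-level and style instructions."""
        return f"{question} Answer {AUDIENCE[audience]}. {style}."

    print(build_prompt("What is Fermat's little theorem?", "phd", "Explain in detail"))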

(quote re using audience profiles)

You can also pre-define profiles. For example, you could say “When evaluating something for a manager, assume an individual with a four-year business college education, a lack of detailed technical understanding, and a fairly limited attention span, who likes to get answers that are clear and concise. When evaluating something for a programmer, assume considerable technical knowledge, an enjoyment of geek and science fiction references, and a desire for a complete answer. Accuracy is deeply important to programmers, so double-check your work.”

If you ask ChatGPT to “explain C++ to a manager” and “explain C++ to a programmer,” you’ll see how the responses differ.
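
To make such profiles reusable, here is a hedged Python sketch that passes a pre-defined profile as a system message. It assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment; the model name and profile wording are placeholders, not from the article:

    # Sketch only: assumes the OpenAI Python SDK and an OPENAI_API_KEY
    # environment variable; the model name below is a placeholder.
    from openai import OpenAI

    PROFILES = {
        "manager": ("Assume a reader with a four-year business college education, "
                    "little technical depth, and a limited attention span; "
                    "keep answers clear and concise."),
        "programmer": ("Assume considerable technical knowledge, an enjoyment of geek "
                       "and science fiction references, and a desire for complete, "
                       "accurate answers; double-check your work."),
    }

    def explain_for(audience, question):
        """Answer a question, with the response shaped by a pre-defined audience profile."""
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": PROFILES[audience]},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(explain_for("manager", "Explain C++."))
    print(explain_for("programmer", "Explain C++."))

Calling it twice with the same question, as in the article’s C++ example, should yield two differently pitched answers.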

Excerpt (for creative writing)

[the prompt]

Write a short story for me, no more than 500 words [article explains why the limit].

The story takes place in 2339, in Boston. The entire story takes place inside a Victorian-style bookstore that wouldn’t be out of place in Diagon Alley. Inside the store are the following characters, all human:

The proprietor: make this person interesting and a bit unusual, give them a name and at least one skill or characteristic that influences their backstory and possibly influences the entire short story.

The helper: this is a clerk in the store. His name is Todd.

The customer and his friend: Two customers came into the store together, Jackson and Ophelia. Jackson is dressed as if he’s going to a Steampunk convention, while Ophelia is clearly coming home from her day working in a professional office.

Another customer is Evangeline, a regular customer in the store, in her mid-40s. Yet another customer is Archibald, a man who could be anywhere from 40 to 70 years old. He has a mysterious air about himself and seems both somewhat grandiose and secretive. There is something about Archibald that makes the others uncomfortable.

A typical concept in retail sales is that there’s always more inventory “in the back,” where there’s a storeroom for additional goods that might not be shown on the shelves where customers browse. The premise of this story is that there is something very unusual about this store’s “in the back.”

Put it all together and tell something compelling and fun.

[end of prompt]

[author’s commentary]

You can see how the detail provides more for the AI to work with. First, feed “Write me a story about a bookstore” into ChatGPT and see what it gives you. Then feed in the above prompt, and you’ll see the difference.

• “7 advanced ChatGPT prompt-writing tips you need to know” by David Gewirtz, Senior Contributing Editor (Oct 5, 2023)

  • Specify output format
  • Tell it to format in HTML
  • Iterate with multiple attempts
  • Don’t be afraid to use long prompts or sets of prompts
  • Provide explicit constraints to a response
  • Tell it the number of words, sentences, or characters
  • Give the AI the opportunity to evaluate its answers
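
A small Python sketch pulling a few of these tips together: an explicit output format, a length constraint, and a self-evaluation follow-up turn. The wording is illustrative, not taken from the article:

    # Sketch: wrap a request with explicit constraints, then queue a follow-up
    # turn asking the model to re-read the prompt and evaluate its own answer.
    def constrained_prompt(task, fmt="an HTML table", max_words=200):
        return f"{task} Format the response as {fmt}. Keep it under {max_words} words."

    def self_check_prompt():
        return ("Re-read my original prompt, then evaluate your previous answer: "
                "does it meet the format and length constraints? If not, revise it.")

    turns = [
        constrained_prompt("Summarize the pros and cons of standing desks."),
        self_check_prompt(),
    ]
    for turn in turns:
        print(turn)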

• ZDNET > “The five biggest mistakes people make when prompting an AI” by David Gewirtz, Senior Contributing Editor, reviewed by Elyse Betters Picaro (Jan 15, 2025) – Ready to transform how you use AI tools? Learn how to refine your prompts, avoid common pitfalls, and maximize the potential of generative AI tools.

[Table of contents]

  1. Not being specific enough
  2. Not specifying how you want the response formatted
  3. Not remembering to clear or start a new session
  4. Not correcting, clarifying, or guiding the AI after an answer
  5. Not knowing when to give up [sour grapes, eh]

[Advice]

  • ChatGPT’s advice
  • Copilot’s advice
  • Grok’s grokkings
  • Gemini’s advice
  • Meta AI’s advice

[More tips]

How to be successful when writing prompts


Face-off

• tom’s guide > face-off > “I put ChatGPT vs Grok to the test with 7 prompts — here’s the winner” by Ryan Morrison (January 8, 2025) – Grok has come a long way in a very short time, going from a glorified “toy” feature in X to something rivaling the likes of ChatGPT, Claude and Google’s Gemini.

This is the latest in a series of head-to-head challenges [link] between leading AI models, all of which ChatGPT has won so far. I’ve put ChatGPT up against Gemini, then against Claude. I’ve also put Claude up against Google Gemini [link].

The [seven] prompts follow the same pattern as previous comparisons and include coding, creative writing, problem-solving and advanced planning.

1. Image Generation

The prompt: “Create an image of a minimalist home office setup with these specific elements: A 34-inch ultrawide monitor mounted on a white wall, an ergonomic chair in sage green, a light oak standing desk, three hanging potted plants (must be monstera, pothos, and snake plant), and a MacBook Pro in space grey. The room should have large windows letting in natural light from the left side, with sheer white curtains. Include a grey Persian cat sleeping on a round cushion under the desk.”

4. Creative Writing

Prompt: “Write a heartwarming story about two people who meet while waiting in line for a new product launch. The story must include: specific details about the product they’re waiting for, at least three interactions between them before the store opens, a surprising connection they discover, and a flash-forward to one year later. Keep it under 500 words.”


Becoming a prompt engineer

This article by eWeek’s Liz Ticong is a comprehensive guide to becoming a generative AI whisperer, with useful diagrams, lists, and even online AI training courses.

• eWeek > “How to Become a Prompt Engineer (2025): The Path to Success” by Liz Ticong (September 20, 2024) – Discover what it takes to become a prompt engineer, from understanding the key skills to gaining practical experience and advancing in this growing field.

DEFINITION (quoted)

  • A prompt engineer shapes artificial intelligence outputs by crafting precise, context-rich prompts to guide the AI model in generating relevant and accurate responses.
  • Prompt engineering is a growing career that bridges human language and AI, requiring a mix of linguistic, technical, and creative skills.
  • As AI technologies become increasingly integrated into diverse enterprise applications – particularly generative AI – the demand for skilled prompt engineers is growing rapidly.
  • Learning how to become a prompt engineer involves developing the right skills, completing a range of training, and gaining hands-on experience.

KEY TAKEAWAYS (quoted)

  • Prompt engineers work in various sectors, including customer service, healthcare, education, and creative industries. (Jump to Section)
  • After learning the basics, there are certifications you can complete to acquire advanced prompt engineering skills. (Jump to Section)
  • While prompt engineering introduces significant benefits, prompt engineers also encounter some challenges that must be addressed, including complex models, biases, sensitive data, insufficient training data, and collaboration. (Jump to Section)

TABLE OF CONTENTS

  • What is Prompt Engineering?
  • Understanding the Role of a Prompt Engineer
  • How to Become a Prompt Engineer
  • Career Development in Prompt Engineering
  • 3 Courses for Continuous Learning and Professional Growth
  • Real-World Contributions of Prompt Engineers
  • Overcoming Prompt Engineering Challenges
  • Frequently Asked Questions (FAQs)
  • Bottom Line: Learning How to Become a Prompt Engineer Starts With Building AI and Language Skills

[excerpt]

Real-World Contributions of Prompt Engineers

Customer Service Automation: Prompt engineers design interaction flows with AI chatbots and virtual assistants that handle customer queries and give customized solutions. By fine-tuning interactions, AI systems accurately interpret and appropriately respond to user needs, boosting customer satisfaction.

Healthcare Solutions: In the healthcare sector, prompt engineers refine AI outputs to aid with medical diagnosis support and patient interactions. Their prompts ensure that the AI delivers relevant and precise medical information.

Content Generation: They compose prompts for AI systems that produce articles, marketing copy, and other content types. With their efforts, the AI-generated content meets the user’s desired style, tone, and context.

Educational Tools: Prompt engineers write inputs for educational AI applications that facilitate learning new concepts. These prompts make sure that the AI tools provide clear and error-free responses.

Creative Arts: In the creative field, they design prompts that guide generative AI tools to produce artwork or music. Prompt engineers help shape the AI’s output to meet particular artistic visions and goals.

Business Analytics: They craft detailed inputs that guide AI tools to analyze business data and generate valuable information. Skilled prompt engineers support deriving actionable insights from complex data sets.


10 ChatGPT things

• Forbes > “10 Things You Didn’t Know ChatGPT Could Do” by Jodie Cook, Senior Contributor (Jan 10, 2025) – Team productivity beyond simple questions and simple answers.

  • Create keyboard shortcut guides
  • Review terms and conditions [a shout-out to Jeff!]
  • Build your SEO strategy
  • Write your standard operating procedures [like for a health club?]
  • Find funding opportunities
  • Spot patterns in customer feedback [like re hospitality friction points, eh]
  • Create job descriptions that attract talent
  • Turn complex data into simple visuals
  • Design your lead magnet
  • Write spreadsheet formulas that work

AI job success

The future of the creator economy? Will AI ease effort and emphasize creativity? Do forecasts of AI boosts resemble Victorian steam tech hubris …

• Forbes > “The One Skill That Will Define Success In An AI-Driven World” by Chris Westfall, Contributor (Jan 15, 2025) – Will AI lead to a more flexible workforce?

By 2034, traditional 9-to-5 jobs will become obsolete, giving way to more flexible and dynamic work structures. That’s one of many bold predictions from LinkedIn co-founder, Reid Hoffman. And Hoffman has a pretty strong track record when it comes to betting on the future.

TIPS

  • Slow down to go fast (separate signal from static)
  • Two heads are better than one (the power of conversation)
  • Cultivate vital soft skills (e.g., collaboration)

Notes

[1] Wiki > Prompt engineering

A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text language model can be a query such as “what is Fermat’s little theorem?”, a command such as “write a poem in the style of Edgar Allan Poe about leaves falling”, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choice of words and grammar, providing relevant context, or assigning a role to the AI such as “act as a native French speaker”.

[2] Apple Intelligence > Pages > Compose > ChatGPT prompt > …

• Macworld > “Where is Apple Intelligence on my Mac?” by Roman Loyola, Senior Editor (Jan 20, 2025) – Looking for Apple Intelligence features on your Mac? Here’s how to get Apple’s AI features including ChatGPT and Image Playground on your Mac.

TABLE OF CONTENTS

  • What you need for Apple Intelligence
  • What countries can run Apple Intelligence?
  • How to turn on Apple Intelligence
  • How to turn on ChatGPT
  • What are the Apple Intelligence features on the Mac?

[3] Microsoft > Copilot in Word

4 comments

  1. AI terms

    • CNET > “ChatGPT Glossary: 49 AI Terms Everyone Should Know” by Imad Khan (Jan 19, 2025) – AI technology is everywhere, … it’s good to stay up to date on all the latest terminology.

    This glossary is regularly updated.

    [excerpts]

    artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.

    agentive: Systems or models that exhibit agency with the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model can act without constant supervision, such as a high-level autonomous car. Unlike an “agentic” framework, which is in the background, agentive frameworks are out front, focusing on the user experience.

    AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.

    AI safety: An interdisciplinary field that’s concerned with the long-term impacts of AI and how it could progress suddenly to a super intelligence that could be hostile to humans.

    prompt …

    prompt chaining …

    transformer model …

    turing test …

    unsupervised learning …

    weak AI, aka narrow AI …

    zero-shot learning …

  2. AI collaboration

    An aspiring screenwriter mentioned recently that 9 out of 10 manuscripts reviewed by an editor were AI-generated. What does that portend for further commoditization of the writer economy? What’s “good enough” – accessible and “authentic” enough?

    So, this ZDNET article asks:

    “What does it mean for a writer, such as a novelist, to have a unique ‘voice’? And does artificial intelligence (AI) help or hurt that voice?”

    See the article for detail: a table on the definition of authenticity and a diagram of Microsoft’s study methodology.

    • ZDNET > “Writers voice anxiety about using AI. Readers don’t seem to care” by Tiernan Ray, Senior Contributing Writer, reviewed by Radhika Rajkumar (Jan 15, 2025) – Microsoft surveyed professional [experienced] writers and readers about the use of AI writing tools.

    Takeaways

    • Authenticity is multifaceted [for example, across the stages of story craft – idea development, text production, revision, …].
    • Despite that [concerns about genuine voice], when the writers were told which of their passages they had created with a personalized version of GPT-4, they generally expressed a preference for the one with the personalization [under time constraints].
    • Readers [recruited from Reddit], on the other hand, didn’t really seem to care much [no significant difference in scores for enjoyment, likability, creativity].
    • AI-based writing tools need to evolve to offer writers more than they currently do. … “writers seek more diverse support (such as practicing externalizing their internal experiences, receiving feedback, and projecting possible audiences’ reactions) to jointly preserve authenticity in their work.”

    (excerpts)

    Microsoft researchers set out to answer that question [does AI help or hurt a writer’s voice] with a small study using 19 fiction writers, 30 readers, and short passages written with the help of OpenAI’s GPT-4. The research takes its title from a comment by one of the writers – “it was 80% me, 20% AI.”

    To better understand what “authenticity” means, Hwang [Angel Hsing-Chi Hwang of USC] and colleagues interviewed the writers from June to October 2023 about their notions of the term, posing questions like, “What are the unique characteristics (tones, phrases, styles, voices, etc.) that make your writing unique?”

    They then had each writer use a program called CoAuthor, designed by researchers at Stanford University in 2022. CoAuthor is an interface to a large language model (LLM) that lets a person request and insert suggestions from the [personalized] LLM as they write by tapping the TAB and ENTER keys on the keyboard.

    The researchers acknowledge there are a lot of limitations to their study. I can see one very large issue: For a writer, sitting down to write is very different from an exam-style session where one works with a program such as CoAuthor.

  3. bank on AI

    Wall Street has embraced generative AI. Is that surprising? Where working fast and profitably is the priority? ‘Agentic abilities,’ without worries over ‘authentic voices’? Will that impact corporate cultural cognition / thinking?

    “The AI assistant becomes really like talking to another GS employee,” Argenti [Chief Information Officer Marco Argenti] said. … Argenti says he is most excited by the prospect of what comes later, in perhaps three to five years, as AI models increasingly blur the lines between human and machine thinking.

    AIs as LARPers? Will GS employees educate the AI, empower the AI, and work themselves out of a job?

    • CNBC > “Goldman Sachs rolls out an AI assistant for its employees as artificial intelligence sweeps Wall Street” by Hugh Son (Jan 21, 2025) – Just like another employee …

    KEY POINTS (quoted)

    • Goldman Sachs is rolling out a generative AI assistant to its bankers, traders and asset managers, the first stage in the evolution of a program that will eventually take on the traits of a seasoned Goldman employee, according to Chief Information Officer Marco Argenti.
    • The bank has released a program called GS AI assistant to about 10,000 employees so far, with the goal that all the company’s knowledge workers will have it this year, Argenti told CNBC in an exclusive interview.
    • “The AI assistant becomes really like talking to another GS employee,” Argenti said.

    Goldman’s move means that, along with JPMorgan Chase and Morgan Stanley, the world’s top three investment banks have aggressively released generative AI tools to their workforce, a remarkable development since ChatGPT went viral about two years ago.

    “For the AI to have a very specific identity that reflects the tenets, the values, the knowledge and the way of thinking of the firm is extremely important,” Argenti said.

  4. Workshopping a story

    Crafting a story … getting buy-in … making a living in a commoditized market …

    So, imagine that you’ve made a pitch deck, character cards, and an episode outline, and written a script for a possible streaming series. While there are resources to workshop your story (in person or using telepresence), you (or some AI whisperer) might prompt-engineer a workshop of expert personas for that purpose.

    For example, you specify personas for experts such as Jill Chamberlain (The Nutshell Technique) or Blake Snyder (Save the Cat), and let the AI create some to cover “all the bases.”

    Well, that’s what this Forbes article discusses – “telling generative AI to pretend to be someone and simulate what that person might know or say.” Whether a famous historical figure, an expert in some field of study, etc.

    Eliot presents an example about climate science. Specifying named or unnamed experts, he takes us through the process (see the article), starting with this prompt:

    “I want you to pretend to be multiple experts. I will tell you what field of expertise they have. I will also tell you how many experts there are. Your job will be to then answer my associated questions by pretending to be those experts. Do you understand these instructions?”

    There’re caveats, for example, the so-called “mile-long and an inch deep” thing.

    There’re tips, too. Like “ask the AI what level of proficiency it seems to have in whatever topic you are exploring” (with a caution there). Or, perhaps, importing “content on the topic directly into the generative AI.”

    He then discusses “dealing with generative AI myopia” – “dipping into the same data set and pattern-matched data pool for each of the simulated personas.” Of course, the independence of expert perspectives at a workshop (of real people) may be a concern, as well as the dominance of those personalities. He offers some tips, for example, to mitigate biases.

    • Forbes > “This Generative AI Prompting Technique Uses Multiple Expert Personas To Derive First-Class Answers” by Dr. Lance B. Eliot, Contributor, world-renowned AI scientist and consultant (Jan 20, 2025) – While invoking personas is straightforward, crucial upsides and downsides need to be observed.

    In a previous posting I explored over fifty prompt engineering techniques and methods, see the link here. Among those myriad approaches was the use of personas, including individual personas and multiple personas, as depicted at the link here, and the much larger scale mega-personas at the link here.

    You can pretend that a room full of experts is being convened. The instructions to the AI are that multiple expert personas are to be defined and used simultaneously. You can either let the AI choose what those personas will consist of, or you can shape the direction of each persona. This depends on what you are trying to accomplish with the simulation.

    One of the toughest aspects of using multiple expert personas entails how to end up with a final answer. The simplest approach involves the AI merely stating what each expert persona had to say. This can be combined into one “final” response.
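
    As a rough sketch of that flow (a setup turn that defines the personas, a question, then a request to consolidate their views into one final answer), here is a Python example assuming the OpenAI Python SDK and an OPENAI_API_KEY; the persona list, the placeholder outline, and the model name are my illustrative assumptions, not Eliot’s:

      # Sketch of the multiple-expert-personas flow: define the personas, ask a
      # question, then ask for one consolidated answer. Assumes the OpenAI Python
      # SDK and an OPENAI_API_KEY; personas and model name are illustrative.
      from openai import OpenAI

      client = OpenAI()
      history = []

      def ask(prompt):
          """Send a prompt and keep the running conversation so the personas persist."""
          history.append({"role": "user", "content": prompt})
          reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
          answer = reply.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          return answer

      ask("I want you to pretend to be three experts: a story-structure consultant, "
          "a TV development executive, and a dialogue editor. Answer my questions by "
          "pretending to be those experts. Do you understand these instructions?")
      ask("Here is my episode outline: <outline>. What does each expert say about the midpoint?")
      print(ask("Now combine the three experts' notes into one final set of revisions."))

    Per Eliot’s tip, you could instead let the AI choose the personas by dropping the named roles from that first turn.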
