Use fewer (and more specific) words when talking to bots
It's easy to think chatbots understand intent when they are really just pattern matching. Clear context is essential to get the outcomes you want. Read: "Everything Starts Out Looking Like a Toy" #241

Hi, I’m Greg 👋! I write weekly product essays, including system “handshakes”, the expectations for workflow, and the jobs to be done for data. What is Data Operations? was the first post in the series.
This week’s toy: an award-winning collection of animations that involve scrolling. There’s something oddly compelling about infographics that develop in response to user actions, revealing new information as they build. Edition 241 of this newsletter is here - it’s March 10, 2025.
Thanks for reading! Let me know if there’s a topic you’d like me to cover.
The Big Idea
A short long-form essay about data things
⚙️ Use fewer (and more specific) words when talking to bots
A (very) smart person recently shared feedback with me that was brief, accurate, and telling. He suggested: “when you have big ideas, start with smaller words and bring people along for the ride.” This was a kind way of saying “sometimes, you use big words when small words will do”.
The (accurate) feedback made me pause – I felt defensive at first – until I realized he was suggesting the same process with people that I’ve been practicing with bots. He wanted me to establish context and tell a story that reached the audience.
How do you get buy-in for a new idea?
First, you need to establish basic facts. This means laying out the problem, the background, and a potential desired outcome.
Next comes the process, where you identify each step, create input and output states, and clarify what happens when errors occur.
Finally, you share a vision for the outcome, including estimates of effort and/or cost. If there are adjustments, you need to be able to incorporate them to change course.
This feedback resembles the instructions needed for Large Language Models to produce solid results.
LLMs do not “think”. They predict the statistically likely next token based on the tokens that came before. That means that even if they produce content that looks like the words you and I would use to describe an outcome, it doesn’t necessarily carry the same meaning.
For example, LLMs do not “know” whether a task is “completed” or “not” unless we give the model a rule that results in a true statement of “completed” or “not completed.” Ethan Mollick, a professor at the University of Pennsylvania, compares this to observing a biological process, not a software one.

Liberal Arts Degrees Are Useful
If Mollick’s observation is correct, you need to do more than ask a chatbot to complete a task.
To build a series of prompts that will result in quality work, you need to be equal parts coder, project manager, and philosopher.
Like the friend who suggested I “bring people along for the ride”, you need to lay out the problem, set context, and provide a model for the desired outcome.
Setting Context
For LLMs, setting the context means creating a general set of instructions that bounds the answers. For example, if you wanted to create a standard for answering, you might suggest:
“Your role is to assist users by providing accurate, detailed, and context-aware responses. In this conversation, please ensure that your answers are:
Clear and Concise: Offer straightforward explanations with examples or code snippets when relevant.
Context-Aware: Consider both the user's query and any provided instructions carefully.
Well-Structured: Utilize headings, bullet points, or tables to organize information effectively.
Cited: Include inline citations when referencing external sources or data.
Adaptable: Adjust the depth and tone of your response to suit the user's needs while maintaining a neutral, professional style.”
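To make this concrete, here is a minimal sketch (no real API call) of how standing instructions like these might be packaged as a reusable "system" message in the role/content format most chat APIs accept. The function name and the trimmed instruction text are illustrative, not from any particular provider's SDK.

```python
# Standing context, reused across every conversation. Trimmed here
# for space; in practice you would include the full instructions.
CONTEXT = """Your role is to assist users by providing accurate, \
detailed, and context-aware responses. Ensure that your answers are:
- Clear and Concise
- Context-Aware
- Well-Structured
- Cited
- Adaptable"""

def build_messages(user_prompt: str) -> list[dict]:
    """Pair the standing context with a user's question.

    The system message comes first so the model treats it as the
    frame that bounds every answer in the conversation.
    """
    return [
        {"role": "system", "content": CONTEXT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Draft a work-back plan for a new feature.")
```

The design choice here is separation: the context lives in one place, so every prompt in a workflow inherits the same bounds without restating them.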
Basic Background
In addition to the context (think of this as world-building), you need to create some basic background information to help the model use the context to focus it on a relevant user story.
For example:
“You are assisting a product manager who is creating a project at a mid-sized software company to create a new feature and needs to align the work for 6-9 weeks.”
The background helps the model match its results to the ones most relevant to the user.
Desired Outcome and Validation
LLMs are “token tumblers” that provide statistically relevant outcomes. That means sometimes they’ll deliver a result that is spot on, and other times they’ll match something completely offbeat. Because producing a result is an iterative process, it helps to focus the outcome.
For example, the prompt:
“Provide a well-researched outline that creates a work-back plan with tasks and steps”
Might require a companion prompt after this is completed that takes the role of a validator and writes:
“You are evaluating a plan created to complete a project in 6-9 weeks. Please identify any risks and suggest adjustments to the plan.”
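The generate-then-validate pattern above can be sketched as two chained calls. This is an assumption-laden illustration: `call_llm` is a placeholder that just echoes its input – swap in your provider's client to make it real – and the prompt text is adapted from the examples in this section.

```python
def call_llm(system: str, user: str) -> str:
    """Placeholder for a real chat-API call; echoes for illustration."""
    return f"[response to: {user[:40]}...]"

def generate_plan(goal: str) -> str:
    """Step 1: ask for the work-back plan."""
    return call_llm(
        system="You are assisting a product manager at a mid-sized software company.",
        user=f"Provide a well-researched outline that creates a work-back plan for: {goal}",
    )

def validate_plan(plan: str) -> str:
    """Step 2: a second prompt takes the role of a validator."""
    return call_llm(
        system="You are evaluating a plan created to complete a project in 6-9 weeks.",
        user=f"Identify any risks and suggest adjustments to this plan:\n{plan}",
    )

plan = generate_plan("launch a new feature")
review = validate_plan(plan)
```

Keeping the generator and the validator as separate prompts means each one gets a narrow job, which is exactly the "limit the tasks" advice that follows.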
Your mileage may vary, especially when you need to validate the output itself, checking that the types and values in an answer are what you expect.
How do you know if the LLM is along for the ride? When it provides consistent, accurate output that stays on task and can be fact-checked by external folks. For some outcomes, this means limiting the tasks an LLM is asked to do so that both “True” and “False” states can be statistically likely (and approved).
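One way to get that mechanical “True”/“False” state is to parse the model's output and check types and values directly, rather than trusting its claim that a task is “completed.” A minimal sketch, assuming (purely for illustration) that we asked the model to reply in a small JSON shape:

```python
import json

def is_valid_task(raw: str) -> bool:
    """Return True only when the output parses and passes every check."""
    try:
        task = json.loads(raw)
    except json.JSONDecodeError:
        return False  # not even valid JSON: fail closed
    return (
        isinstance(task.get("name"), str)
        # the model must use one of two exact status strings
        and task.get("status") in {"completed", "not completed"}
        # estimates must be numeric and fit the 6-9 week project window
        and isinstance(task.get("estimate_weeks"), (int, float))
        and 0 < task["estimate_weeks"] <= 9
    )

good = '{"name": "Write spec", "status": "completed", "estimate_weeks": 2}'
bad = '{"name": "Write spec", "status": "done", "estimate_weeks": 2}'
print(is_valid_task(good))  # True
print(is_valid_task(bad))   # False ("done" is not an allowed status)
```

The check either passes or it doesn't – which is the point: the true/false state comes from a rule you wrote, not from the model's own self-assessment.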
What’s the takeaway? Why use big words when small words will do? It’s important to be specific when asking LLMs to complete tasks and to set a standard for evaluating the results. Setting the context, building a background, and creating validation are key steps for success.
Links for Reading and Sharing
These are links that caught my 👀
1/ On writing - The smaller and better-edited the document, the better. Jose Luis Zapata makes an eloquent case for self-contained, well-written documents. (This follows the standard of good API writing — loosely coupled, strongly typed.)
2/ Troubleshooting as a service - Learning how to debug physical systems is (not surprisingly) similar to the QA process on digital systems. Read On Troubleshooting to see how this works as a general process.
3/ Speed lines - Joshua Vides transforms cars from 3D to 2D by wrapping them in comic book-like lines. You have to see it to believe it. This is a classic effect that somehow doesn’t look real in a picture.
What to do next
Hit reply if you’ve got links to share, data stories, or want to say hello.
The next big thing always starts out being dismissed as a “toy.” - Chris Dixon