The Thinking Habit that Makes AI Actually Useful
You need one simple shift in how you approach problems.
Most people skip one important question when they open an AI tool: What am I actually trying to solve?
Not "what do I want to write." Not "what should I ask." But: what's the real problem, who's it for, and what does a good answer actually look like? It's a habit borrowed from product teams, where it's baked into how they work. And it turns out it's also the single most useful thing you can bring to an AI tool, regardless of your field or experience level.
Here are three product thinking principles that translate directly into better AI use.
Principle 1: Define the problem before you start typing
In product work, teams have a name for what happens when you skip this step: solutioning early. It means you've already decided what to build before you've understood what's broken. The usual result is that you build the wrong thing.
The same thing happens with AI. Most people open a chat window, type a rough description of what they want, and hope the output lands. Sometimes it does. More often, the result is technically fine but doesn't actually solve the problem.
Before you write your prompt, write one sentence: What decision will this help me make, or what outcome will it move forward? That sentence is more valuable than the perfect prompt.
Try this: Instead of "help me write an email to my manager," try: "I need to flag a process gap to someone who didn't ask for the feedback, without sounding like I'm criticizing them. Help me figure out how to frame this first."
Principle 2: Be specific about who the output is for
PMs don't build for "users"; they build for a specific person in a specific moment with a specific goal. A feature that works for a first-time user is different from one that works for a power user. Context changes everything.
The same is true with AI. "Write me a summary" could mean ten different things depending on who's reading it and why. A summary for a technical audience is different from one for leadership. One meant to inform is different from one meant to persuade.
The more clearly you describe the audience and the purpose, the more useful the output becomes. This isn't about writing longer prompts; it's about making the problem clearer.
Try this: Add two lines to any prompt: "The person reading this is [role/context]" and "What I want them to do or understand is [outcome]." Watch how much better the output gets.
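For anyone who reaches a model through code rather than a chat window, those two lines can be baked into a small template so they never get skipped. A minimal sketch; the function and field names are illustrative, not from any particular library:

```python
def build_prompt(task: str, reader: str, outcome: str) -> str:
    """Assemble a prompt that states the task, the audience, and the desired outcome."""
    return (
        f"{task}\n\n"
        f"The person reading this is {reader}.\n"
        f"What I want them to do or understand is {outcome}."
    )

# Hypothetical usage: the task, reader, and outcome here are invented examples.
prompt = build_prompt(
    task="Summarize the Q3 incident report.",
    reader="a non-technical VP who skims",
    outcome="approve budget for the follow-up fixes",
)
print(prompt)
```

The point of the template isn't the code itself; it's that audience and outcome become required fields instead of afterthoughts.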
Principle 3: Treat the first output as a prototype, not a deliverable
Product teams don't ship the first prototype. They use it to learn what works, what breaks, and what they didn't think to ask. It's not a failure if the first version is rough; that's the whole point.
Most people treat AI output differently. They read it, decide it's good enough or it isn't, and either use it or try a different prompt hoping for better luck.
Better approach: after the first output, ask yourself three things. What did it get right? What did it miss? Where did it oversimplify something that's actually complicated? The answers tell you what to ask next, and they often reveal something about how clearly you understood your own problem to begin with.
Try this: After your first AI response, don't re-prompt randomly. Instead, pick the one thing that felt most off and tell the tool exactly what it missed. One targeted correction beats five vague retries.
None of this requires a strategy background or a product title. These are just habits of thinking, ways of slowing down before you start, so you end up somewhere useful.
The people who get the most out of AI tools are the ones who've gotten clearer about their own thinking first. That clarity is a habit, and it's one you can build starting today, one better-defined problem at a time.
About Kima Sargsyan
Kima Sargsyan is a Senior Strategy Director at Huge Inc. She works at the intersection of brand, experience, and emerging technology. In her newsletter Perceptio, she explores what that intersection actually means for companies, for careers, and for how we think.