Last updated: 2026-05-07
How to actually get good at using ChatGPT (or Claude, or any LLM)
The skill nobody teaches you. Five practical habits that separate frustrated users from people who get real work done.
The skill is real
People get frustrated with LLMs for the same reason they get frustrated with new hires: they ask vague questions, provide no context, and expect polished output anyway. Using ChatGPT or Claude well is a skill. The people who treat it like a search box usually get search-box quality results. The people who treat it like a collaborator with context usually get much better work.
This skill is not mystical. It is mostly communication discipline: clear setup, explicit constraints, examples, and deliberate iteration.
Habit 1: Tell it who you are
Start threads with context. Three lines of setup beat a heroic prompt every time:
- What role you are in (owner, operations lead, analyst, account manager).
- What industry context matters.
- Who the output is for and how it will be used.
Without this, the model guesses. With this, the model aligns faster and makes fewer tone mistakes.
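If you start many threads, the three bullets above are worth templating. Here is a minimal sketch of a helper that assembles the same context preamble every time; the function name and line format are illustrative, not part of any tool or API:

```python
def context_header(role: str, industry: str, audience: str) -> str:
    """Build a three-line context preamble to open an LLM thread.

    Hypothetical helper: the lines mirror the three bullets above
    (your role, the industry context, who the output is for).
    """
    return "\n".join([
        f"Role: I am the {role}.",
        f"Industry context: {industry}.",
        f"Audience and use: {audience}.",
    ])

# Paste the result at the top of a new thread, before your actual request.
print(context_header(
    "operations lead",
    "regional logistics, about 40 drivers",
    "a one-page brief the owner will skim",
))
```

The point is not the code; it is that the same three lines, written once, remove the guessing on every future thread.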
Habit 2: Show, do not tell
Instead of saying "write better," paste what you already have and ask for revision options. Instead of saying "analyze my data," paste the sample rows. Instead of saying "match our voice," paste one example your team already likes.
LLMs respond better to concrete artifacts than abstract directions. Showing the bad draft is often the fastest path to a good draft.
Habit 3: Iterate, do not restart
Keep working inside one thread while refining output. Starting fresh for every tweak throws away context the model has already learned about your task, tone, and constraints. Iteration in-thread is how responses move from generic to useful.
Restart only when the thread has clearly drifted or its assumptions are no longer valid.
Habit 4: Ask it to think out loud
For hard decisions, ask for reasoning before final output: "Walk me through how you would think about this trade-off, then give a recommendation." You often get better structure and fewer brittle answers.
This is especially useful for strategy drafts, scope decisions, and prioritization where assumptions matter as much as wording.
Habit 5: Learn when to stop
LLMs can sound confident while being wrong. Watch for warning signs: suspiciously specific facts without sources, over-agreeable recommendations, and language that smooths over uncertainty where uncertainty is expected.
Verify anything where being wrong is expensive: contracts, financial assumptions, policy claims, compliance, or medical/legal advice.
Three patterns worth memorizing
- Reviewer pattern: "Pretend you are a [role]. Critique this for [criteria]."
- Format pattern: "Output as [exact structure/table/checklist]."
- Constraint pattern: "Limit to [word count, tone, format, audience]."
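The three patterns above are just fill-in-the-blank templates, which means they can live as plain strings. A minimal sketch, with template names and field names chosen here for illustration:

```python
# Hypothetical templates for the three patterns above.
REVIEWER = "Pretend you are a {role}. Critique this for {criteria}:\n\n{text}"
FORMAT = "Output as {structure}.\n\n{request}"
CONSTRAINT = "Limit to {limits}.\n\n{request}"

def fill(template: str, **fields: str) -> str:
    """Fill a pattern template; raises KeyError if a field is missing,
    which catches incomplete prompts before they reach the model."""
    return template.format(**fields)

prompt = fill(
    REVIEWER,
    role="skeptical CFO",
    criteria="unsupported cost claims",
    text="(paste the draft memo here)",
)
print(prompt.splitlines()[0])
# → Pretend you are a skeptical CFO. Critique this for unsupported cost claims:
```

Keeping templates as named constants is what makes the next section (a shared prompt library) cheap to maintain.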
Things you should always do once you have a good prompt
- Save it in a shared prompt library (our Prompt Library Builder helps with this).
- Share it with your team and include one good output example.
- Revisit quarterly. Prompts age as models improve.
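A shared library does not need tooling to start; a single JSON file that pairs each prompt with one good output example covers the first two bullets. This sketch assumes a file name and schema invented here, not the format of any real product:

```python
import json
from pathlib import Path

# Assumed layout: one JSON file, each entry stores the prompt
# plus one known-good output example for the team to copy.
LIBRARY = Path("prompts.json")

def save_prompt(name: str, prompt: str, example_output: str) -> None:
    """Add or update a library entry, preserving existing entries."""
    entries = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    entries[name] = {"prompt": prompt, "example_output": example_output}
    LIBRARY.write_text(json.dumps(entries, indent=2))

def load_prompt(name: str) -> str:
    """Fetch a saved prompt by name."""
    return json.loads(LIBRARY.read_text())[name]["prompt"]

save_prompt(
    "weekly-update",
    "Summarize this week's tickets as a 5-bullet update for the owner.",
    "(paste one output the team already liked)",
)
print(load_prompt("weekly-update"))
```

Because it is one file in version control, the quarterly revisit from the last bullet becomes an ordinary review of a diff.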
The mindset shift that changes everything
Think of LLMs as very fast interns: smart, useful, and context-starved. Your job is not to test their intelligence. Your job is to provide enough context that they can produce useful work in your reality.
Once you make that shift, frustration drops and output quality climbs.
Where to go next
Build reusable prompts with the Prompt Library Builder and find high-value workflows with the AI Use Case Finder.