It's all ChatGPT's fault until it's not anymore

🌱 Seedling

A few months ago, we hid the fact that we were using ChatGPT. Today, we list ChatGPT, Claude, Perplexity, Manus, and Veo3 on our org chart as “junior” contributors.

When something goes wrong, the default reaction is usually:

  • Dev: “Shoot, I missed that bug when Claude added the retrieval feature.”
  • Marketing: “ChatGPT mixed up the facts from the meeting transcript.”
  • Sales/Ops: “The AI didn’t leverage the full context of the call notes.”

The pattern is the same in every department: the blame lands on the AI, not on the prompt or on the process. It isn’t about the talent of the person executing the task. It’s about the ability to prompt an LLM effectively: how well we translate the context we have into a clear, actionable request.

When prompting is weak, the output is weak, and the team looks for someone (or something) to own the mistake. We’ve moved from:

  1. Hiding AI usage →
  2. Openly advocating AI‑in‑the‑Loop (AI‑ITL) →
  3. Treating the model as a junior employee.

Once the model is on the team, the same rigor we apply to human contributors must apply to it. If we don’t standardize, we end up with “AI‑slop” that erodes quality. So how do we operationalize AI In The Loop?

a. Choose the right platform

| Platform | When to use it | What it gives you |
| --- | --- | --- |
| OpenAI Custom GPTs | You’re already on the OpenAI stack | Fine‑tuned prompts, built‑in guardrails, version control |
| Anthropic Claude Artifacts | You prefer Anthropic’s safety‑first model | Reusable prompt templates, context‑aware chaining |
| Workflow engines (lindy.ai, n8n.io, Make.com) | You need orchestration across multiple tools | Automate data ingestion, post‑processing, and hand‑offs |

b. Define the Unit of Work

Ask yourself: What exactly must be delivered?

  • A document (spec, proposal, PRD)
  • A URL (published article, knowledge‑base entry)
  • A zipped bundle of design assets
  • A video (demo, tutorial)
  • A slide deck

For each unit, write an output specification that includes:

  • Format (Markdown, PDF, MP4, etc.)
  • Style guide (tone, branding, citation rules)
  • Acceptance criteria (e.g., “no factual errors > 1%”)
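One way to make such a specification concrete is to capture it as structured data that travels with the prompt. A minimal sketch, assuming a simple in-house schema (the field names here are illustrative, not a standard):

```python
# Illustrative output specification for one Unit of Work.
# The schema and field names are assumptions, not an established standard.
from dataclasses import dataclass


@dataclass
class OutputSpec:
    unit_of_work: str                # e.g. "Markdown PRD", "proposal PDF"
    format: str                      # "Markdown", "PDF", "MP4", ...
    style_guide: list[str]           # tone, branding, citation rules
    acceptance_criteria: list[str]   # what a reviewer checks before shipping


prd_spec = OutputSpec(
    unit_of_work="Markdown PRD",
    format="Markdown",
    style_guide=["neutral tone", "cite meeting transcripts"],
    acceptance_criteria=["all required sections present", "no unsourced claims"],
)

print(prd_spec.format)  # Markdown
```

Keeping the spec in code (or YAML) means it can be version-controlled alongside the prompt templates it governs.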

c. Map AI‑Ops to Core Business Functions

| Business Area | Typical AI‑ITL Task | Desired Output |
| --- | --- | --- |
| Sales | Drafting proposals from CRM data | Polished proposal PDF |
| Legal | Generating contract drafts | Editable Word document with clause checks |
| Backend Development | Writing boilerplate API code from specs | Git‑ready repository |
| Frontend Development | Producing component skeletons from design tokens | Ready‑to‑use React/TSX files |
| UX Design | Summarising user research into journey maps | Visually formatted Figma file |
| Project Documentation & PRDs | Collating meeting notes into structured docs | Markdown PRD with traceability matrix |

By cataloging each function, you can attach the right prompt template, version‑control workflow, and quality gate to every AI‑generated artifact. AI‑ITL is no longer a “nice‑to‑have” experiment—it’s a core production line.

If we treat it casually, we risk:

  • Inconsistent quality (the dreaded “AI slop”)
  • Escalated blame cycles that damage morale
  • Regulatory or compliance gaps when AI‑generated content is unchecked

Conversely, a disciplined AI‑Ops framework gives you:

  • Predictable, audit‑ready outputs
  • Faster onboarding (new hires can trust the same prompt libraries)
  • Clear ownership—when something fails, you can trace it to a prompt version, not to a mysterious “AI.”

Closing Thought

If we’re going to keep AI on our team, we must manage it the way we manage any junior employee: give it a clear job description, provide the tools to succeed, and hold it to the same standards we hold our people to.

  1. Assess all current workflows where you delegate tasks to an LLM.
  2. Document the Unit‑of‑Work and acceptance criteria for each.
  3. Choose a platform (Custom GPT, Claude Artifacts, or a workflow engine).
  4. Build reusable prompt libraries and version‑control them like code.
  5. Implement a review gate, human or automated, so every output is checked against the spec before it ships.
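The automated half of step 5 can start very small: a script that rejects a draft if it misses required sections or ships with leftover placeholders. A sketch under those assumptions (the rules and function name are hypothetical, not a prescribed checklist):

```python
# Hypothetical automated review gate: checks an AI-generated Markdown
# draft against a minimal spec before it ships. Rules are illustrative.
def review_gate(draft: str, required_sections: list[str]) -> list[str]:
    """Return a list of failures; an empty list means the draft passes."""
    failures = []
    for section in required_sections:
        # Require each section to appear as a level-2 Markdown heading.
        if f"## {section}" not in draft:
            failures.append(f"missing section: {section}")
    # Flag unfinished placeholders left behind by the model or the human.
    if "TODO" in draft:
        failures.append("unresolved TODO left in draft")
    return failures


draft = "## Summary\nShipping plan...\n## Risks\nTODO: fill in"
print(review_gate(draft, ["Summary", "Risks", "Timeline"]))
# ['missing section: Timeline', 'unresolved TODO left in draft']
```

Because the gate is just code, it can run in CI against the prompt library's repository, which gives every failed check a traceable prompt version.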
