Fire Automations is my weekly newsletter where I cut through the hype and teach you when automation is worth it, what to automate, and how to implement it in real businesses.
Thanks for reading - and let me know your thoughts down below.
If your workplace has started using AI, you have probably felt the whiplash: one person is pasting sensitive client notes into a chatbot, another is quietly using AI to draft emails, and leadership is saying “be innovative” without giving any real boundaries. I have built enough little automations and AI workflows to know the truth: AI is incredibly useful at work, and also incredibly easy to misuse by accident.
The fix is not a 30-page policy nobody reads. It is a set of practical guardrails you can actually remember and follow when you are moving fast.
Here are 11 simple rules I use when I’m working with ChatGPT, Microsoft Copilot, Google’s AI tools, or any internal assistant, especially when the output touches customers, money, or reputations.
1. Assume anything you paste could become public later
Even when a tool promises privacy, treat every prompt like it could be forwarded, logged, or reviewed during an audit. That mindset instantly improves your judgment. If you would not paste it into a shared Slack channel, do not paste it into an AI tool. When I follow this rule, I move slightly slower for 10 seconds, but I save myself hours of cleanup later.
If your company offers an approved “work” AI (often with admin controls and data protections), use that instead of a personal account.
2. Never input customer data unless your company explicitly allows it
This is the most common real-world mistake I see, and it is usually not malicious. It is just someone trying to be efficient. Names, emails, phone numbers, addresses, medical details, account IDs, contract PDFs, support transcripts, and anything tied to an identifiable person should be treated as sensitive by default. If you must use AI on customer text, anonymize it first and keep the original in your system of record. The extra minute of redaction is cheaper than a trust incident.
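If you want to see what "anonymize first" can look like in practice, here is a minimal sketch in Python. The regex patterns and the ACCT- ID format are illustrative assumptions and will miss plenty of real-world PII, so treat this as the habit made visible, not a substitute for your company's approved redaction tooling.

```python
import re

# Minimal redaction sketch: swap obvious identifiers for placeholders
# before text goes anywhere near an AI tool. Patterns are deliberately
# simple and will not catch everything.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ACCOUNT_ID": re.compile(r"\bACCT-\d{6,}\b"),  # hypothetical internal ID format
}

def redact(text: str) -> str:
    """Replace matches with labeled placeholders; keep the original in your system of record."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = "Customer jane.doe@example.com (+1 415-555-0132, ACCT-004512) asked about a refund."
    print(redact(note))
    # -> Customer [EMAIL] ([PHONE], [ACCOUNT_ID]) asked about a refund.
```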
3. Use AI for first drafts, not final decisions
AI is great at getting you from blank page to something workable. It is not great at being accountable. I let AI draft, summarize, rewrite, and propose options. I do not let it decide what we ship, what we promise a customer, what we publish publicly, or what we tell legal or finance.
A clean mental model is: AI accelerates the thinking; you own the judgment. If the output would look bad on the front page, review it as if it will end up there.
4. Write your prompt like a manager, not like a search box
Most “AI mistakes” are actually “vague instructions.” If you want safer, more accurate output, give context, constraints, and what success looks like. I usually include audience, tone, length, and what sources it should rely on (or avoid). When a prompt is clear, you need fewer back-and-forth turns, and you reduce the chance the model fills gaps with confident nonsense. This rule is the fastest way to make AI feel less random.
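To make that concrete, here is a rough sketch of the kind of structured prompt I mean, written as a small Python template so it can be reused. The specific fields and wording are mine, not a standard; keep whichever ones match your work.

```python
# A reusable "brief, not search query" prompt template.
# The point is that context, constraints, and success criteria
# travel with every request.
PROMPT_TEMPLATE = """You are helping me draft {deliverable}.

Audience: {audience}
Tone: {tone}
Length: {length}
Use only these sources: {sources}
Do not invent facts. If something is missing, list it as an open question.

Here is my raw material:
{raw_material}
"""

prompt = PROMPT_TEMPLATE.format(
    deliverable="a customer-facing status update",
    audience="non-technical customers affected by yesterday's outage",
    tone="calm, plain language, no blame",
    length="under 150 words",
    sources="the attached incident timeline only",
    raw_material="(paste your anonymized notes here)",
)
print(prompt)
```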
5. Demand receipts for facts, and verify anything important
AI will sometimes invent details, misquote numbers, or blend two true things into one false statement. Any time you see statistics, dates, legal claims, or “according to research,” treat it as untrusted until verified. In practice, I use AI to suggest what to check, then I confirm using my company’s real sources or the primary document. If verifying feels annoying, that is usually the signal that the claim is high-stakes enough to need it.
6. Put red lines in writing so everyone uses the same judgment
You do not need a big policy doc, but you do need shared defaults. If you are a manager, put a small set of red lines in your team wiki and reference them in onboarding.
Here is a simple version that works in most office environments:
No client PII in AI tools
No passwords, keys, or internal access details
No confidential financials unless approved
No legal, HR, or medical decisions by AI
No publishing without human review
This protects your team from “I thought it was fine” ambiguity.
7. Use AI to transform your work, not to replace your voice
The safest, most effective use is often “make this clearer” rather than “write it for me.” I will paste my messy notes and ask the AI to organize them into a logical outline, then I rewrite the final version in my own tone. This reduces the risk of accidental plagiarism, tone mismatch, or weird corporate-sounding language that nobody on your team would actually say. If your job depends on trust, your voice is part of the product.
8. Keep a human in the loop for anything customer-facing
Customers can tell when something is off, even if they cannot name why. If AI is drafting support replies, proposals, marketing copy, or executive communications, someone accountable should review it. I like a two-pass approach: first pass for correctness, second pass for tone and promises. The biggest hidden risk is not grammar. It is the AI casually overpromising, inventing a policy, or sounding certain when you are not.
9. Separate “thinking” from “doing” in your automations
This is a guardrail that matters once you start using Zapier, Make, Power Automate, or similar tools. Let AI draft, classify, summarize, or suggest. Do not let it automatically send emails, update CRM fields, create invoices, or close tickets without checks. My rule is: AI can propose actions, but automations should require a human approval step when money, customers, or public communication is involved. The extra click is worth it.
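Here is a minimal sketch of that approval gate in plain Python, independent of any specific automation platform. The action names and the queue_for_approval and execute functions are hypothetical placeholders for whatever your stack actually does; the pattern is what matters: risky actions wait, low-risk ones run.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "send_email", "update_crm", "create_invoice"
    payload: dict
    drafted_by_ai: bool

# Actions that touch money, customers, or public communication
# always wait for a human, no matter how confident the draft looks.
REQUIRES_APPROVAL = {"send_email", "create_invoice", "publish_post"}

def queue_for_approval(action: ProposedAction) -> None:
    # Placeholder: in a real workflow this might create a task,
    # a Slack message, or a draft in your ticketing system.
    print(f"Waiting for human approval: {action.kind} -> {action.payload}")

def execute(action: ProposedAction) -> None:
    # Placeholder for the actual side effect.
    print(f"Executing: {action.kind} -> {action.payload}")

def handle(action: ProposedAction) -> None:
    """AI can propose anything; only low-risk actions run automatically."""
    if action.drafted_by_ai and action.kind in REQUIRES_APPROVAL:
        queue_for_approval(action)
    else:
        execute(action)

handle(ProposedAction("send_email", {"to": "customer", "body": "AI-drafted reply"}, drafted_by_ai=True))
handle(ProposedAction("tag_ticket", {"ticket": 123, "tag": "billing"}, drafted_by_ai=True))
```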
10. Log what matters so you can explain decisions later
If you use AI in important workflows, keep lightweight records: the prompt template, what data you provided, what tool you used, and what the human reviewer changed. This does not need to be complicated. A simple note in the ticket or a saved prompt in your team doc goes a long way. The payoff is huge when someone asks, “How did we arrive at this?” and you can answer without guessing.
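If “lightweight records” sounds abstract, here is one way to do it: a single JSON line per AI-assisted step, appended to a small log file or pasted into the ticket. The field names are just suggestions, and the log should describe the input rather than copy it, so no raw customer data ends up in yet another place.

```python
import json
from datetime import datetime, timezone

def log_ai_step(tool: str, prompt_template: str, data_summary: str,
                reviewer: str, reviewer_changes: str, path: str = "ai_log.jsonl") -> None:
    """Append one JSON line describing an AI-assisted step. No raw customer data."""
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt_template": prompt_template,
        "data_summary": data_summary,      # describe the input, don't copy it
        "reviewer": reviewer,
        "reviewer_changes": reviewer_changes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_step(
    tool="ChatGPT",
    prompt_template="customer-status-update-v2",
    data_summary="anonymized incident timeline, 14 bullet points",
    reviewer="J. Smith",
    reviewer_changes="removed an overpromised ETA, softened tone",
)
```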
11. Set a weekly “AI hygiene” habit
This is the boring rule that prevents slow-motion chaos. Once a week, I scan my saved prompts, delete anything that contains sensitive details, and tighten the templates that keep producing mediocre output. I also review any automations that touch external communication to make sure nothing has drifted. Tools change, models change, and your business changes. A 15-minute weekly check beats discovering three months later that an automation has been quietly doing the wrong thing.
Final thoughts
If you want to implement this without turning it into a bureaucracy, start with Rules 1, 2, and 8. Those three alone prevent most real-world AI regret at work. Then add one habit: write your team’s red lines down and treat them like normal operating procedure. AI can absolutely save you time this week. The goal is to save time without borrowing risk from your future self.