If You Wouldn’t Post It on Social Media, Don’t Paste It Into AI | it.ie
AI Data Privacy & Security


Visualizing AI Data Risks
John Grennan

AI tools like ChatGPT, Gemini and others have quickly become the modern office assistant. They summarise emails, write proposals, draft policies and even help with HR wording. But there is a growing blind spot.

"A simple rule of thumb: If you wouldn’t post it on social media, you shouldn’t share it with an AI chatbot."

That does not mean “don’t use AI”. I’m a huge fan and couldn’t imagine my working life without it. It means using AI with the same care you already apply to email, messaging apps, and cloud sharing. The aim is to get the productivity benefits without accidentally exporting sensitive information outside your control.

The things you should never share with AI

Below are the most common examples we see in real environments. None of them are malicious. All of them create risk.

1. Personal Data

  • CVs and performance issues
  • Disciplinary notes
  • Customer contact details
  • Complaint emails

2. Financial Information

  • Invoices and payroll queries
  • Pricing structures
  • Margin calculations
  • Supplier costs

3. Legal Documents

  • NDAs
  • Supplier agreements
  • Leases
  • Service contracts

4. Security Info

  • Network diagrams
  • Firewall settings
  • Admin portal screenshots
  • Passwords/MFA codes

The Real Issue: Shadow AI

Businesses have worried about Shadow IT for years. We now have a new version: Shadow AI. This is when employees use public AI tools to perform company work outside company oversight. It is rarely done deliberately; it happens because the tools are helpful and no guidance exists.

Blocking AI rarely works. The practical solution is not a ban; it is giving employees a safe way to use AI.

Public AI vs Business AI

The conversation should not be about whether AI is good or bad. It should be about where your data goes. Public AI tools are designed for individuals. Microsoft Copilot for Microsoft 365 is designed for organisations. That difference matters.

Your data stays within your tenant

Copilot respects your existing permissions. If a user cannot open a file, Copilot cannot see it either.

Not used to train public models

Your prompts and documents remain within your Microsoft 365 security boundary. Microsoft does not use your data to train public models.

Before You Prompt Checklist

If you answer YES to any of these, stop and rethink your prompt or your tool.

1. Would I post this on social media?
2. Does this include client or customer info?
3. Does this include personal data about anyone?
4. Could this expose security or strategy details?
5. Am I pasting this into a personal or public AI tool rather than a workplace-secured one?

If any answer gives you pause, rewrite the prompt or switch tools.
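The checklist above can also be partly automated. As a minimal sketch (the pattern names and regexes below are illustrative assumptions, not a feature of any particular product), a short script can flag common sensitive-data markers before a prompt is pasted into a public AI tool:

```python
import re

# Hypothetical pre-prompt screener. Each pattern is a rough,
# illustrative heuristic -- real data-loss-prevention tooling
# uses far more sophisticated detection.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "possible credential": re.compile(r"(?i)\b(password|passwd|mfa code|api[_ ]?key)\b"),
    "IBAN-like string": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarise this complaint from jane.doe@example.com"
    hits = screen_prompt(draft)
    if hits:
        print("Stop and rethink - prompt contains:", ", ".join(hits))
```

A screen like this catches only the obvious cases; it is a prompt for the human to pause, not a substitute for judgement or for a workplace-secured AI tool.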

The Takeaway

AI is not the risk. Uncontrolled AI use is. Your staff are going to use AI; in many businesses they already are. The decision organisations face now is whether that use happens safely or invisibly.

Is your business ready for Copilot?

We help organisations introduce AI safely, alongside practical policies that protect sensitive data while still allowing staff to work efficiently.
