The AI You Didn't Know You Already Deployed
Recently I was reviewing the Microsoft 365 configuration for a professional services firm — small team, full SharePoint stack, client files organised by engagement. Standard setup.
I ran a permissions audit and found that Copilot could query every client folder in the tenant. Engagement documents, financial records, internal correspondence — all of it. No sensitivity labels. No DLP policies. Audit logging for AI interactions wasn't enabled.
The person managing IT didn't know Copilot was active. The partners thought they were "still evaluating AI."
They weren't evaluating. They'd already deployed it. They just hadn't noticed.
This Isn't an Isolated Case
Microsoft rolled Copilot features into the M365 suite that firms were already paying for. It indexes content across your entire tenant — SharePoint, Teams, Outlook, OneDrive. It doesn't distinguish between internal memos and client engagement files.
Most firms I've looked at have the same configuration gap. And the incidents backing this up aren't hypothetical:
- February 2026: Microsoft confirmed a Copilot bug (reference CW1226324) that allowed the tool to read and summarise confidential emails from users' draft and sent folders — without authorisation.
- Late 2025: Security researchers demonstrated that Copilot Studio could access files classified as "High Restricted" on SharePoint — without generating an audit log entry.
- Independent research (Metomic, Concentric AI) consistently shows that over 15% of business-critical files in enterprise M365 environments are accessible to users and tools that shouldn't have access.
- The Australian Federal Government ran its own Copilot trial and concluded: "poor data security and information management processes could lead to Copilot inappropriately accessing sensitive information."
- Queensland's Office of the Information Commissioner published specific privacy guidelines for organisations deploying Copilot.
For a technology startup, a data exposure incident is embarrassing. For a professional services firm holding client records under professional obligation — whether that's financial data, legal documents, or health information — it's a potential breach of professional conduct obligations. Industry regulators and professional bodies don't distinguish between accidental and intentional disclosure.
Where Does Your Firm Actually Sit?
Over the past two years, I've worked with enough professional services firms to notice a clear pattern. Most firms fall into one of four levels of AI maturity — and most don't realise which level they're at.
Level 0 — Unaware
"We don't use AI."
You do. Microsoft 365 E3/E5 licences include AI-powered features that may be active by default. Copilot can query SharePoint, summarise Teams meetings, and draft email responses. Your people may already be using it — or it may be indexing content in the background regardless.
At this level, nobody has asked what Copilot can access, whether permissions align with client confidentiality requirements, or whether AI interactions are being logged.
The risk: You have AI processing client data without governance, without oversight, and without anyone being accountable.
Most firms I encounter are here. The CTO or IT lead was busy with infrastructure, the partners assumed "Microsoft handles security," and nobody specifically asked about AI capabilities in the tenant configuration.
Level 1 — Ad Hoc
"Some people use ChatGPT. We told them to be careful."
Firms at this level know AI exists but treat it as an individual tool rather than an organisational capability. Some staff paste client information into ChatGPT or Claude. Others use Copilot's built-in features. There's no policy, no approved tooling, and no visibility into what data is leaving the firm's environment.
Staff on professional forums describe this exactly: "We were told to 'use AI responsibly' but nobody defined what that means." Meanwhile, IT managers in adjacent threads write: "People are pushing AI into workflows that process client data and we have no governance framework."
The risk: Data is flowing to external AI services without consent, without logging, and without any ability to demonstrate compliance during an audit.
Level 2 — Governed
"We've configured Copilot properly and have policies in place."
This is where most firms should be as a minimum. At Level 2, you've done the foundational work:
- SharePoint permissions reviewed — client folders locked to engagement teams only
- Sensitivity labels applied — "Client Confidential" content excluded from Copilot indexing
- Data Loss Prevention (DLP) policies active — preventing sensitive data from flowing to unapproved tools
- Audit logging enabled — every AI interaction recorded and reviewable
- An acceptable use policy — staff know what's approved and what isn't
Getting from Level 0 to Level 2 doesn't require new technology. It requires reviewing what you already have, closing the permission gaps, and documenting the decisions. For a firm under 50 people, this is days of focused work — not a six-month programme.
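The permissions review is the part most worth scripting. As a minimal sketch: assuming you can export a listing of which principals have access to each SharePoint site (the export format below is illustrative, as are the group names — adapt both to what your tenant report or Graph query actually produces), a few lines of Python can flag client sites that any tenant-wide group can read:

```python
# Sketch: flag SharePoint sites with overly broad access from a permissions export.
# The export format (site title -> list of principals granted access) and the
# group names are assumptions -- adapt them to your own tenant's report.

BROAD_PRINCIPALS = {
    # Tenant-wide groups that defeat per-engagement confidentiality if present
    # on a client site. Exact names vary by tenant; treat these as examples.
    "Everyone",
    "Everyone except external users",
    "All Company",
}

def flag_broad_access(permissions: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return {site: [broad principals]} for sites a tenant-wide group can read."""
    flagged = {}
    for site, principals in permissions.items():
        broad = [p for p in principals if p in BROAD_PRINCIPALS]
        if broad:
            flagged[site] = broad
    return flagged

# Hypothetical export: two client engagement sites and one internal site.
export = {
    "Client A - Engagement 2024": ["Engagement Team A", "Everyone except external users"],
    "Client B - Engagement 2024": ["Engagement Team B"],
    "Internal - Templates": ["Everyone"],
}

print(flag_broad_access(export))
```

Anything this flags on a client site is a folder Copilot can surface to people outside the engagement team — exactly the gap the Level 2 checklist closes.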
What you gain: Defensible compliance posture. If a regulator asks "how do you govern AI access to client data?", you have an answer.
What you still lack: Control over what the AI model actually does. Copilot is a general-purpose tool. You can restrict what it accesses, but you can't control how it processes information, what Microsoft logs and retains on its side, or how it responds to novel queries about sensitive content. You're configuring guardrails around someone else's AI.
Level 3 — Strategic
"We use AI that was built for our work, under our control."
At Level 3, AI isn't a bolt-on feature from your productivity vendor. It's a purpose-built capability designed for professional services work:
- Processes only the specific client files authorised for a given engagement
- Runs in your environment (or a dedicated, isolated cloud environment) — client data never leaves your control
- Every interaction is logged in your audit trail — not Microsoft's
- The model is configured for your firm's workflows — client file analysis, document drafting, compliance checking
- You can explain exactly what the AI does and doesn't do — to clients, to regulators, to your professional body
This isn't theoretical. The head of a small professional services firm recently shared how his team built internal AI tools that process client data entirely within their own infrastructure. No data sent to external services. Full audit trail. Staff productivity increased measurably — and they can demonstrate compliance to any regulator who asks.
The shift: You stop being a consumer of someone else's AI decisions and start being the architect of your own.
Why Most CTOs Get Stuck Between Level 0 and Level 2
If you're the person responsible for technology at your firm, you're facing a particular version of this problem. The partners want "AI" because they've read about it. The compliance team wants "no risk" because that's their job. And you're in the middle, trying to make a responsible recommendation without enough information.
Here's what I've observed: the firms that stay stuck are the ones that treat AI as a single binary decision — yes or no, all or nothing. The firms that make progress treat it as a maturity journey with concrete, low-risk steps.
The irony is that doing nothing is the highest-risk position. If Copilot is already active in your tenant and nobody has reviewed permissions, sensitivity labels, or audit logging — you have ungoverned AI accessing client data right now. That's not cautious. That's the opposite of cautious.
The Practical Path Forward
Step 1 (this week): Find out what's actually running. What Microsoft 365 licences does your firm have? What Copilot features are enabled? What can they access?
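The licence check in Step 1 can be scripted. One hedged sketch: the Microsoft Graph `subscribedSkus` endpoint returns each licence's service plans, and Copilot capabilities show up as plans within them. A real run would call `GET https://graph.microsoft.com/v1.0/subscribedSkus` with an admin token; the parsing below works on a sample response of the same shape, and the `"COPILOT"` substring match and plan names are assumptions — verify the actual names in your tenant:

```python
# Sketch: surface Copilot-related service plans from a subscribedSkus response.
# The HTTP call is omitted; this parses the documented response shape. The
# "COPILOT" substring heuristic and sample plan names are assumptions.

def copilot_plans(subscribed_skus: dict) -> list[str]:
    """Return service plan names containing 'COPILOT' from a subscribedSkus payload."""
    found = []
    for sku in subscribed_skus.get("value", []):
        for plan in sku.get("servicePlans", []):
            name = plan.get("servicePlanName", "")
            if "COPILOT" in name.upper():
                found.append(name)
    return sorted(set(found))

# Illustrative response shape (plan names are hypothetical, not authoritative):
sample = {
    "value": [
        {"skuPartNumber": "SPE_E3",
         "servicePlans": [{"servicePlanName": "SHAREPOINTENTERPRISE"},
                          {"servicePlanName": "M365_COPILOT_SHAREPOINT"}]},
    ]
}

print(copilot_plans(sample))
```

If this returns anything, Copilot capabilities are licensed in your tenant — which means Step 2's permissions and logging questions are live, whether or not anyone "decided" to adopt AI.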
Step 2 (this month): Close the obvious gaps. Review SharePoint permissions. Apply sensitivity labels to client folders. Enable audit logging for AI interactions. Document your decisions.
Step 3 (this quarter): Decide your strategy. Is a well-configured Copilot sufficient for your firm's needs and obligations? Or does the nature of your work — client confidentiality, regulatory requirements, professional duty — demand a tool you fully control?
Most firms discover that Step 1 and Step 2 are less painful than expected. The permissions review usually reveals a few SharePoint sites with overly broad access — fixable in hours. The governance documentation is a day's work. And the result is a defensible position that satisfies both the progressive partners and the cautious ones.
Step 3 is where the strategic conversation happens. It's the difference between renting AI from Microsoft and owning AI that works the way your firm needs it to.
One More Thing the Partner Meeting Won't Surface
There's a dynamic in professional services firms that rarely gets discussed openly. The cautious CTO who blocks AI adoption is often seen as the problem. But from where they sit, the risk is asymmetric: if they approve AI and something goes wrong with client data, it's their responsibility. If they block AI and the firm moves slower, that's abstract and diffuse.
The maturity framework gives that person something they rarely get: a structured way to say yes. Not "yes to everything" — but "yes to a governed, auditable, defensible approach that I can explain to the board, to regulators, and to clients."
That's not blocking progress. That's leading it responsibly.
Take the First Step
If any of this sounds familiar — or if you're not sure where your firm sits on the maturity framework — I'm happy to talk it through. No pitch, no proposal. Just a practical conversation about where you stand and what the next steps look like.
I help professional services firms figure out AI — from governance reviews to purpose-built systems that keep client data where it belongs. More about what I do →
Interested in AI governance for your firm?
Let's have a practical conversation about where you stand.
Get in Touch →