WA firms are discovering that the efficiency of AI comes with a hidden cost.
The traditional moat surrounding professional services – the absolute sanctity of client privilege – is being drained from the inside out in 2026, often by a simple copy and paste.
As businesses in Western Australia rush to adopt artificial intelligence, a shadow AI crisis has emerged whereby employees and executives, often under immense pressure to perform, are feeding sensitive client data, internal strategies and confidential advice into public AI models.
What feels like a privileged conversation with a super-intelligent assistant is, in reality, a permanent disclosure to third-party servers.
The issue has become so prevalent that, for the first time, leading cybersecurity firm CyberCX included it in its 2026 threat report, released in early February.
The company identified the unintentional release of sensitive, classified or confidential information, known as a data spill, as responsible for 3 per cent of the incidents it responded to during the year.
“In many of these instances, the organisation did not have enterprise licensing for the AI platform, data-loss prevention controls in place, or adequate network logging, often making it impossible to identify or quantify the data spillage,” the report said.
“[Last year] was the first year that CyberCX’s DFIR [digital forensics and incident response] team was engaged for these types of AI data-spill incidents, which likely reflects the continued surge in AI adoption by many organisations and individuals.
“It is vital to ensure that secure adoption of AI platforms is followed by enforcing strict data-handling controls, continuous monitoring, and implementing clear data-governance frameworks.”
And while data spill ranks low on the list of threat vectors in the firm’s report, digital transformation consultant Josh Horneman said shadow AI – employees and executives using public AI tools with sensitive data – had become a problem.
“The assumption remains that because AI felt like a private conversation, it was one,” said Mr Horneman, who is also co-founder of Perth-based AI consultancy Howll.
“But the same people serving us superintelligence in our pockets, for free, also created social media a decade ago, and have infiltrated every element of our personal lives.
“The last frontier of access to data was our businesses. And most people have just opened the door and let the Trojan horse in.”
In fact, issues with AI have become so pronounced that many companies and government departments are inserting ‘no AI’ clauses in contracts.
“The intent makes sense. These organisations or departments are trying to protect their data and meet their own compliance obligations,” Mr Horneman said.
“The problem is that a clause in a contract doesn’t stop AI being used. It just determines who is liable when it is.”
Mr Horneman gave the example of a company winning a contract, with a director signing a term sheet that includes a ‘no AI’ provision.
“Somewhere downstream a team member does something that has felt completely normal for two years. They open a public AI tool and paste in a document,” Mr Horneman told Business News.
“They’re not being malicious, it’s a reflex. And the organisation has no visibility that it happened.
“The ‘no AI’ clause doesn’t catch that behaviour. It just means the director who signed the contract is personally exposed if it ever surfaces.
“ChatGPT was allowing its shared chats to be indexed in Google searches a year or so ago. You could search the word ‘strategy’ and find confidential information about strategic plans, exports from SAP instances and much more.”
Mr Horneman said while a ‘no AI’ clause in a contract sounded like a protection, it was just a transfer of risk.
“If your team is using public AI and you have no way of knowing, you’re not compliant. You’re just unaware,” he said.
Legal professionals are among those most exposed to the risks of shadow AI and data spillage.
HHG Legal Group has moved to mitigate these internal risks by providing staff with secure, licensed alternatives to public chatbots.
“We were very much ahead of the game and put policies around it quite quickly,” chief executive Merrill MacNish said.
“We implemented Microsoft Copilot and a LexisNexis derivative called Prodigy, which are licensed and controlled within our own environment.
“This provides a safe environment for our lawyers to draft documents and ask questions, because the system is controlled and does not save or train on our data.”
And while Mr Horneman said insecure AI use in professional services firms had become an issue, there were also problems on the client side.
“I hear a lot of legal professionals using public AI to draft advice, summarise matters, cross-reference precedents,” he said.
“And now clients, on the other side, pasting the legal advice they receive straight into ChatGPT to interrogate it or seek a second opinion.
“Both ends of the relationship are now exposed, often without either party realising it.”
HHG Legal Group managing director Murray Thornhill said the reflex to use public tools had created a dangerous grey area for the sanctity of advice.
“Legal professional privilege can be waived by the conduct of the client or the lawyer, and treating advice as no longer confidential by releasing it to third parties is a serious risk,” he said.
“There has been no specific test of it yet (in Australia), but on established principles, uploading your lawyer’s advice to a large language model is a significant risk to maintaining privilege.”
The risk also extends beyond written documents, with recording and transcription software also causing headaches.
Mr Thornhill said the integration of AI platforms into meeting software could lead to legal strategy being processed on foreign servers without a firm’s consent.
“You can be in a Teams meeting and find out that it’s been recorded and analysed by an app that wasn’t agreed to upfront,” he said.
Ms MacNish said firms needed to actively manage how clients handled the work product they received.
“We have actually made a change to our cost agreements to bring to the client’s attention the risks of cutting and copying our advice into an LLM [large language model],” she said.
“We specifically advise clients about it and put it in our terms of engagement ... that clients are to respect the confidentiality and privilege of our communications with them and not to use them in that way on AI.”
While Australia is yet to see this play out in court, the US v Heppner ruling, handed down on February 10, hinted at how some of these issues may be handled in the future.

Haim Ozchakir (left) and Dunja Lewis are calling for an AI classification system to improve transparency.
In the case, Dallas-based financial services executive Bradley Heppner learned he was the target of a government investigation.
After retaining legal counsel, he used a publicly available AI platform to research legal issues related to the investigation.
Mr Heppner fed information he had learned from his attorney into the AI platform, running queries related to the investigation that generated 31 documents.
The documents were then sent to Mr Heppner’s counsel for discussion.
On the day of Mr Heppner’s arrest, the FBI seized several electronic devices containing the AI-generated documents.
Mr Heppner’s defence then asserted attorney-client privilege over the documents, a move the government challenged on February 6.
In a February 10 ruling, US District Court judge Jed Rakoff held that a defendant’s communications with a publicly available AI platform were not protected by attorney-client privilege.
It was one of the first rulings globally to address privilege claims involving generative AI and, while not directly applicable to cases in Australia, provides a guide to where the profession is heading internationally.
Within Australia, a joint statement issued by the Victorian Legal Services Board, the Law Society of NSW and the Legal Practice Board of WA said: “Lawyers cannot safely enter confidential, sensitive or privileged client information into public AI chatbots or any other public tools”.
It warned that, if practitioners did use generative AI tools, they must carefully review the contractual terms of the platform to ensure the information was kept secure.
And it’s not just the legal sector struggling with the issue.
Financial advisers and accountants operate under similar confidentiality expectations, and client financial data funnelled into public models carries the same fundamental problem.
“The common thread across all of them is the same: the liability handballs are starting, and the AI usage happening below them is largely invisible,” Mr Horneman said.
“This isn’t just a legal profession problem. Any industry built on confidentiality, professional privilege or regulated data handling is sitting on the same exposure.
“Most of them just haven’t had the incident that makes it real, yet.”
The legal boards behind the joint statement suggested several steps firms could take to use AI safely, including setting clear, risk-based policies and limiting use to low-risk undertakings.
Being transparent about the use of AI was another step the boards believed was important to maintaining the industry’s integrity.
That’s something Haim Ozchakir and the company he co-founded, AIUC, have been working on for years.
Mr Ozchakir has developed a world-first AI-usage classification system.
Much like the age classification system for entertainment products, AIUC defines a set standard for each classification, from AI-free and human-led through co-created content to AI-led and AI-generated.
Mr Ozchakir warned that, without these types of auditable governance standards, mid-tier WA firms risked becoming effectively uninsurable as traditional professional indemnity providers introduced exclusions for any claims arising from the use of public, non-governed generative AI.
Mr Ozchakir’s intention with the AIUC was not necessarily to protect proprietary data but rather to restore trust between professional services firms and clients about the origin of material.
Co-founder Dunja Lewis said the US case proved there were conversations that needed to happen.
“I doubt (Mr Heppner) knowingly went and breached confidentiality with his lawyer; it’s just that the standard wasn’t set to begin with,” she said.
“You can’t expect people, particularly in such a fast-moving space, to all be aware at the same time of something that they just didn’t understand.
“But as the old saying goes, ‘if you aren’t paying for the product, then you are the product’.
“If you have clients that you know are going to be using AI, have that conversation around the expectations. It’s ludicrous for a consultancy or firm to say, ‘here’s a report, but you can’t do whatever you want with it’.
“There has to be a conversation around what responsible use is, and how it should be used once it leaves the hands of the consultancy.”
Ms Lewis said any belief that businesses could completely stamp out AI usage was misguided.
“We’ve seen in the recent KPMG report that 56 per cent of people are using AI against company policy,” she said.
“That’s the entire point of AIUC, to shift the conversation away from ‘AI use is bad’. The moment you start to put blanket bans on its usage, you just drive its usage away from where you can see it. You ban it from a work laptop, someone will just copy and paste it using their phone.
“The only language we use when we talk about AI right now is either that it’s AI-free, like some of the contracts have tried to mandate, or AI-generated.
“Nobody talks about anything in between, and so that means we can’t plan for it properly and everyone is still hiding it.”
Ms MacNish said attempting to stop the use of AI among younger staff was a losing battle for modern firms.
“I think the demographic of staff that are coming through now, if we don’t give them the efficiency tools we will lose them, because they’re looking for the next best thing,” she said.
“They love tech and they’re moving at quite a rapid pace compared to how things were previously.
“I don’t think we’re going to be able to stop it; we need to embrace it and make it safer.”
While claims by providers that LLMs will not train on data submitted through enterprise or paid accounts may be dubious, the one thing that promise does provide is a ‘reasonable expectation of privacy’: an essential ingredient in any argument over privilege.


