Are we sleepwalking into an AI / Cyber nightmare?

Yes, unless sleep-running is a thing??

I've been saying for a while that I'm convinced we will see a major cyber-related event in the next 18 months or so: a scare or incident that causes the vast majority of us to take a breath in relation to "AI".

For the two or so years prior to the launch of ChatGPT, there was a noticeable shift towards focussing on cyber security and broader risks around data. None of the risks have gone away, but the shiny thing in the room has captured a lot of the mindshare while simultaneously making the landscape much more complicated.

The security paradox

Cyber security is, and always has been, a mix of reactive and proactive effort.

  • Reactive: response, intervention, patching, maintaining, monitoring, cleaning up.
  • Proactive: policy, process, controls, training.

But... you’re always one step behind the next vulnerability, exploit or user workaround. To re-use a phrase - “They only have to get it right once. We have to get it right every time.”

The single biggest issue in cyber security, though, is... us... humans! Unfortunately, a lot of robust cyber security measures can feel constraining, and with all the complex and competing imperatives we face, that is where it bites. On the other side of the coin, throw into the mix all that promise from the world of AI and you can see how this might become an issue.

Here are just a few real-world examples from conversations I’ve had:

  • A junior associate at a law firm was caught putting privileged legal documents into their personal ChatGPT account.
  • At another firm that had fully locked down LLM access, someone was caught taking phone photos of confidential data and uploading them to an online tool for analysis.
  • Another involved an HR associate who was using GenAI to draft letters and responses for disciplinary and grievance actions, sharing a wide range of internal and sensitive personal information.

The shadow AI problem

About 6 or 7 years ago I read a summary of a study into "Shadow IT" (SaaS applications, software and tools used in a business which are not 'sanctioned' by the organisation). Across FTSE 100 businesses, it found on average approximately 200 Shadow IT applications in use!

A 2024 Gartner study estimated that only 15% of enterprises have formally deployed AI tools in a meaningful, managed way. But the dirty (not so) secret, as described above, is Shadow IT: I can all but guarantee that the remaining 85% have employees somewhere in the business using LLMs or other AI tools.

The risks

I don't want this to seem like a cop-out, but we don't really know yet!!

Outside perhaps of financial theft, data loss / exploitation has been the biggest concern for years. That hasn't changed, but added to it is data leakage. Leakage has always existed, but the idea that it could come from using applications exactly as they were intended is relatively new; previously it came about more by accident or misuse.

There are all sorts of concerns around poor integration, immature technology, technical debt and a range of other factors which could lead to breaches / exploitation. Many of those will prove unfounded, or will be hardened before issues arise, but... the growth in use of generic "AI", along with new services and pivots by existing services, is outstripping the combined education, policy, process and tooling that has been designed to keep us safe "online". Without a similar growth in those measures, or corresponding "AI cyber tools" together with the business knowledge, process and policy around their use, we're potentially headed for some major breaches / leaks.

It's entirely possible it's already happened in a variety of forms:

In December 2024, a vulnerability was discovered in Meta's chatbot that allowed users to access private prompts and generated outputs from other users. It was patched in January 2025. That's a month of exposure after it was reported, and who knows how long before that. Meta AI reportedly has over 1 billion active users; just let that sink in.

Risks are emerging in other forms too. I'm sure amongst the readers of this are some who've used WeTransfer... They caused a stir this year after quietly updating their terms and conditions to allow AI model training on uploaded user files. After public backlash, they rolled it back, clarifying they weren't doing it "yet". They are not, and will not be, alone.

Then there's the lawsuit between OpenAI and a number of news organisations, in which OpenAI is the subject of a legal order forcing it to retain and potentially share chat transcripts, in a case relating to unauthorised use of copyrighted content for training AI models.

So... what do we do?

The genie is out of the bottle. AI isn't going away, nor should it, and locking everything down is not the answer!

So, if we can't prevent the first major incident and can't lock it down... what can we do?

Good question, and this is a conversation that is increasingly being had, but it needs to happen far more often.

A lot of the cyber fundamentals still apply: education, policy, process and tooling. For now, though, we're going to have to rely far more on the former until the tooling truly catches up, and those come down to culture. Businesses and leaders need to embrace the technology, understand what drives users to seek these tools out, and find ways to work together to use them responsibly, for the benefit but also the security of all.


What do you think? Are we doing enough? Or are we due for a wake-up call?

I'd love to hear your thoughts — reach out: [email protected]