Could Unsecured AI Be Your Next Million-Dollar Problem?

You’ve probably seen tonnes of AI headlines lately, and for good reason. AI is transforming businesses faster than anyone expected, with adoption rising across every sector.
As a Content Specialist, I use AI tools every day, and I’ve seen firsthand how powerful and risky they can be. When used without clear safeguards, they open the door to serious security and operational threats.

The Cost of a Data Breach Report 2025 from IBM and the Ponemon Institute found that AI is being deployed faster than it’s being governed. Without clear controls, even well-meaning AI use can leave businesses exposed.

Read on to discover what the report* reveals and learn how to stay ahead of the risks.

AI adoption is accelerating and so are the risks

Only 13% of organisations surveyed have experienced an AI-related security breach so far, but that number is expected to rise as usage grows. Another 8% of respondents weren’t sure if AI played a role in their breaches, highlighting how easily these threats can go unnoticed.

Of the companies that did face an AI-related breach, 97% admitted they didn’t have proper access controls in place, and those gaps were costly. Roughly a third of incidents involving authorised AI exposed sensitive data and disrupted operations. Others caused reputational damage that proved much harder to repair.

Unapproved tools are creating blind spots

Teams are eager to work smarter. This attitude is great, but when tools are adopted without proper oversight, they can introduce serious risk.

Shadow AI (tools adopted by employees without business approval) accounted for 20% of reported breaches, more than any other type of AI use. These tools often bypass IT controls, making them easy targets. In 62% of these cases, data was stored in external environments like public cloud platforms.

Breaches cost more when AI is involved

Attackers are now using AI to create more realistic phishing emails, fake images and even deepfake videos, making these threats harder to detect and easier to believe.

Attackers used AI in 16% of reported breaches, and each of those incidents cost an average of US$4.49 million. When the attack specifically targeted an organisation’s own AI systems, the average cost was nearly the same: US$4.46 million.

Shadow AI breaches were even more expensive. They added an average of US$200,000 to the cost of a breach and took around a week longer to contain, giving attackers more time to do damage.

In these incidents, personally identifiable information was the most frequently compromised, and the most expensive to recover. Each stolen record cost an average of US$166, above the global average.

Most organisations still lack basic AI controls

AI adoption is outpacing the ability to manage it. While 37% of businesses said they had some form of AI governance, more than 60% admitted they lacked clear policies.

Even among those with policies, few had tools in place to detect Shadow AI or apply consistent standards across teams.

Training, policies and approval processes are essential parts of AI governance, but most organisations haven’t built them into their AI strategy. Over three-quarters weren’t running stress tests on their models, and nearly two-thirds weren’t performing regular audits.

Without clear guardrails, AI systems can introduce legal and ethical risks, not to mention the long-term reputational damage that’s much harder to fix. But you don’t have to choose between speed and safety.

Now’s the time to close the gap

The IBM report revealed that while AI-related breaches are still relatively rare, the risk is growing, and fast.

Many businesses don’t recognise the risk until it’s too late. It only takes one weak point, and once data is exposed or customer trust is damaged, recovery can be slow and expensive.

But there’s still time to take control. With the right structures in place, you can scale AI securely, knowing it’s working for your business and not against it.

At Elixirr Digital, we help organisations close the gap through AI strategy development. That includes creating robust AI governance and delivering hands-on training to keep teams compliant.

Talk to our experts today about responsible AI governance.

*Source: Cost of a Data Breach Report 2025: The AI Oversight Gap, IBM, https://www.ibm.com/reports/data-breach

Authors

Annabelle Gardiner

Content Specialist

Annabelle’s love of creative writing led her to the digital marketing arena in pursuit of a career in copywriting. In 2022, Annabelle joined Elixirr Digital, where she spends her days crafting high-quality content for various digital marketing channels. From social and email copy to articles and ads, Annabelle’s way with words supports the digital marketing needs of clients spanning a multitude of sectors and industries. In her spare time, Annabelle enjoys baking – which means she’s especially skilled at creating content for our clients in the food and beverage industry!
