Half of All Organizations Hit by AI Security Flaws, EY Warns
A new report from EY reveals a troubling trend: half of all organizations have been negatively impacted by security vulnerabilities in their AI systems, exposing critical weaknesses in how enterprises deploy and secure artificial intelligence.
Even more concerning, only 14% of CEOs believe their AI systems adequately protect sensitive data. As businesses rush to adopt AI-driven tools, they are layering new risk onto an already fragmented defense landscape: organizations manage an average of 47 different security solutions, EY found.
AI: The New Frontline in Cyber Warfare
EY’s report paints a clear picture of how AI is reshaping the attack surface. While AI enhances productivity and decision-making, it also lowers the barrier to entry for cybercriminals.
“AI lowers the bar required for cybercriminals to carry out sophisticated attacks,” said Rick Hemsley, cybersecurity leader for EY in the U.K. and Ireland. “Skills that once took years to develop are now easily accessible to anyone online — often for free.”
AI-powered automation is helping attackers conduct faster, more coordinated, and more deceptive intrusions. Social engineering tactics such as voice phishing (vishing) surged 442% in the second half of 2024, according to CrowdStrike. Meanwhile, attackers' breakout time, the interval between initial compromise and lateral movement within a network, has plummeted from one hour in 2023 to just 18 minutes by mid-2025, according to data from ReliaQuest.
“Accelerating breakout times are dangerous,” EY warns. “Once attackers establish a foothold in a network, they can gain deeper control and are much harder to extract.”
Human Error Meets AI Risk
Beyond external threats, internal governance gaps are also widening the attack surface. EY found that 68% of organizations allow employees to develop or deploy AI agents without high-level approval, and only 60% have clear guidance in place for doing so.
These figures highlight a dangerous mix of enthusiasm and oversight gaps. Without structured governance, companies risk exposing sensitive information, or worse, unintentionally letting AI systems train on personally identifiable data.
Securing the AI Ecosystem
EY recommends several key measures to mitigate these emerging threats:
- Embed security from design to deployment: Make security an integral part of every AI development stage.
- Protect data integrity: Ensure data used for AI training and operations is monitored, sanitized, and compliant with privacy laws (see the sanitization sketch after this list).
- Tighten AI supply chain oversight: Validate the security of third-party AI tools and models before integration (see the verification sketch below).
- Reinvent threat detection: Update monitoring systems to spot and stop AI-driven abuse faster (see the detection sketch below).
- Train the workforce: Develop continuous awareness programs that help employees recognize AI-related risks.
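To make the data-integrity recommendation concrete, here is a minimal Python sketch of a sanitization pass run before records reach an AI training pipeline. The regex patterns, placeholder labels, and function names are illustrative assumptions, not anything prescribed by EY's report; a production system would pair this with a dedicated PII classifier or DLP tooling.

```python
import re

# Illustrative PII patterns; regexes alone are not sufficient in production.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, int]:
    """Replace matched PII with a typed placeholder; return text and hit count."""
    hits = 0
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        hits += n
    return text, hits

def sanitize_corpus(records: list[str]) -> list[str]:
    """Sanitize every record before it is handed to a training job."""
    clean = []
    for record in records:
        redacted, hits = redact_pii(record)
        if hits:
            print(f"redacted {hits} PII span(s) in one record")
        clean.append(redacted)
    return clean

if __name__ == "__main__":
    sample = ["Contact Jane at jane.doe@example.com or 555-123-4567."]
    print(sanitize_corpus(sample))
```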
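The supply-chain item can be enforced mechanically by refusing to load any third-party model artifact whose cryptographic digest has not been pinned in advance. The sketch below assumes a vendor publishes SHA-256 digests out-of-band; the file name and the all-zeros digest are placeholders, not real values.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping artifact names to vendor-published
# SHA-256 digests; the digest below is a placeholder.
PINNED_DIGESTS = {
    "vendor-model-v1.bin": "0" * 64,
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse any third-party model whose digest is unknown or wrong."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name}: no pinned digest on the allowlist")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name}: digest mismatch, refusing to load")

# verify_artifact(Path("vendor-model-v1.bin"))  # run before deserializing
```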
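Finally, the threat-detection item connects back to the shrinking breakout times cited above: one practical signal is a single account fanning out across multiple internal hosts within minutes. This sketch is a simplified sliding-window detector; the event schema, the 18-minute window, and the host threshold are illustrative assumptions rather than any vendor's detection logic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative auth-log schema: (timestamp, account, destination_host).
WINDOW = timedelta(minutes=18)  # echoes the breakout time cited above
HOST_THRESHOLD = 3              # distinct hosts in the window triggers an alert

def detect_rapid_lateral_movement(events):
    """Yield (account, hosts) when one account fans out across hosts quickly."""
    by_account = defaultdict(list)
    for ts, account, host in sorted(events):
        by_account[account].append((ts, host))
        # Keep only events still inside the sliding window.
        by_account[account] = [(t, h) for t, h in by_account[account]
                               if ts - t <= WINDOW]
        hosts = {h for _, h in by_account[account]}
        if len(hosts) >= HOST_THRESHOLD:
            yield account, sorted(hosts)

if __name__ == "__main__":
    t0 = datetime(2025, 1, 1, 12, 0)
    sample = [
        (t0, "svc-backup", "db-01"),
        (t0 + timedelta(minutes=4), "svc-backup", "db-02"),
        (t0 + timedelta(minutes=9), "svc-backup", "file-01"),
    ]
    for account, hosts in detect_rapid_lateral_movement(sample):
        print(f"ALERT: {account} reached {hosts} within {WINDOW}")
```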
The Bottom Line
As AI adoption accelerates, cybersecurity maturity is lagging behind. The same technologies that empower innovation are also enabling threat actors to move faster, strike deeper, and cause more damage.
Organizations that fail to build AI security into their DNA risk more than just data breaches — they risk the integrity of their entire digital ecosystem.