SANS Surge 2026: Featured Keynote - Securing Your AI Transformation

February 24, 2026 | 6:30 PM PST

Enterprises racing to embed AI into core operations now face rapidly expanding attack surfaces, where a single compromised model can cascade into a multimillion-dollar breach or systemic failure.

Key takeaways

  • AI-related vulnerabilities were the fastest-growing cyber risk of 2025, with 87% of leaders identifying them as critical amid rapid enterprise adoption and agentic AI introducing autonomous risk surfaces.
  • Regulatory deadlines loom in 2026, including EU AI Act high-risk system rules by August and U.S. state laws like California's frontier AI transparency requirements effective January, forcing compliance amid federal-state tensions.
  • Failure to secure AI transformations risks not just data leaks and IP theft but operational disruption, with AI-powered attacks already scaling phishing, malware, and adversarial manipulations faster than traditional defenses adapt.

AI Security at a Tipping Point

Artificial intelligence has shifted from experimental tool to foundational infrastructure across industries, with enterprises investing billions to automate processes, enhance decision-making, and drive efficiency. Yet this transformation has dramatically expanded cyber risk. What changed recently is the maturation and proliferation of agentic AI systems—autonomous agents with broad permissions—that create novel vulnerabilities like entitlement sprawl, misconfigurations, and overprivileged non-human identities in SaaS environments.

Real-world impacts hit hardest in sectors reliant on data integrity and operational continuity. Financial institutions face amplified fraud risks from AI-crafted deepfakes and phishing that mimic executive communications. Healthcare and manufacturing contend with poisoned models leading to flawed diagnostics or supply-chain sabotage. High-profile incidents in 2025 already demonstrated AI-enabled cyberattacks by state actors and criminals, scaling reconnaissance, malware development, and social engineering.

Stakes are concrete and escalating. Average U.S. data breach costs hovered around $10 million in 2025, with identity attacks comprising 30% of intrusions—figures set to rise as AI accelerates attack velocity. Regulatory non-compliance carries fines under the EU AI Act for high-risk systems post-August 2026, while California's S.B. 53 mandates safety frameworks for frontier models from January 2026. Inaction invites not only direct losses but eroded trust, legal liability, and competitive disadvantage as adversaries exploit unsecured AI faster than organizations can govern it.

Non-obvious tensions abound. Defensive AI tools improve detection for 96% of users, yet the same technology empowers attackers, creating an asymmetric arms race in which innovation outpaces governance. U.S. federal efforts push minimal regulation to preserve dominance, clashing with state-level rules and the EU's stricter risk-based regime, leaving multinationals to navigate patchwork compliance. A majority of organizations now assess AI security before deployment (up to 64% in 2026 surveys), but gaps persist across the lifecycle, from training data to deployment monitoring, where voluntary frameworks still dominate over mandatory standards.
