Digital4Security Taster Workshops - Workshop 2: ML and DL for cybersecurity: overview of methods and applications

February 25, 2026 | 5:00 PM CET | Past event

Cybercriminals now deploy AI to craft adaptive malware and hyper-personalized attacks that evade traditional defenses, forcing organizations worldwide to integrate machine learning and deep learning or risk massive breaches costing millions.

Key takeaways

  • In 2026, AI-driven threats have surged, with attackers using machine learning to mutate code in real time and scale operations, rendering signature-based detection obsolete and enabling zero-day exploits at unprecedented speed.
  • Defenders counter with ML and DL for anomaly detection and predictive response, cutting average breach costs by as much as $1.9 million and detecting threats up to 1,000 times faster than legacy methods.
  • The EU AI Act's high-risk system rules activate in August 2026, imposing strict transparency and oversight on AI in cybersecurity applications, creating compliance tensions amid rapid innovation and transatlantic regulatory divergence.

AI Arms Race in Cybersecurity

The cybersecurity landscape in 2026 has tilted decisively toward artificial intelligence as both weapon and shield. Adversaries have fully embraced AI, shifting from occasional use to standard practice: machine learning now powers malware that adapts on the fly, evades sandboxes, and identifies defensive patterns, while deep learning processes vast datasets to uncover subtle attack vectors humans miss. This evolution has amplified threats, from AI-orchestrated phishing with synthetic media to autonomous agents probing networks at scale, making traditional rule-based security inadequate against the volume and sophistication of modern attacks.

Organizations face mounting financial and operational pressure. Breaches leveraging these techniques have inflicted damages in the billions annually, with average costs per incident remaining stubbornly high despite defenses. The shift to AI-enhanced detection offers relief—systems now analyze billions of events daily, flag anomalies in real time, and automate responses, delivering faster mitigation and fewer false positives. Yet adoption lags in many sectors, leaving critical infrastructure, supply chains, and small enterprises exposed.

Non-obvious tensions abound. While ML and DL promise proactive defense, they expand the attack surface: adversaries exploit prompt injection in AI models or poison training data to undermine the very tools meant to protect. Regulatory landscapes add complexity. The EU AI Act's phased rollout reaches high-risk AI systems by August 2026, demanding rigorous assessments and transparency for cybersecurity applications that fall under that classification, potentially slowing deployment in Europe. Meanwhile, U.S. policy pushes deregulation to spur innovation, creating transatlantic friction for global firms. Skills shortages persist, as teams struggle to implement and oversee these technologies effectively.
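The training-data poisoning mentioned above can be illustrated with a toy label-flipping attack: an adversary with write access to the training pipeline relabels malicious samples as benign, so the deployed detector learns to wave attacks through. The dataset, model, and 60% flip rate below are purely illustrative assumptions.

```python
# Minimal sketch of training-data poisoning via label flipping.
# Synthetic data stands in for a malware/benign feature set (class 1 = malicious).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Baseline detector trained on clean labels
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker relabels 60% of malicious training samples as benign,
# biasing the retrained detector toward missing real attacks
rng = np.random.default_rng(1)
mal_idx = np.where(y_tr == 1)[0]
flip = rng.choice(mal_idx, size=int(0.6 * len(mal_idx)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.2f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.2f}")
```

The poisoned model's test accuracy drops because it systematically under-predicts the malicious class, which is exactly the failure mode the Act's oversight requirements for high-risk AI systems are meant to surface.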

The result is an accelerating cycle: faster threats demand faster defenses, but integration brings new vulnerabilities and compliance burdens. Stakeholders must balance speed of adoption against risks of over-reliance on black-box models or regulatory missteps.
