You can’t firewall a conversation: how AI red-teaming became mission-critical

AI adoption demands red-teaming as traditional security fails against attacks

by · TechRadar

Opinion By Donnchadh Casey published 7 May 2026

(Image credit: Shutterstock)

Share this article 0 Join the conversation Follow us Add us as a preferred source on Google Newsletter Subscribe to our newsletter

The explosion of AI usage since 2023 is unprecedented. In terms of adoption, AI is moving faster than cloud, faster than mobile, and certainly faster than the internet did. Research group Gartner predicts that 80% of enterprises will deploy AI tools this year.

Donnchadh Casey

VP for AI Security at F5.

When we classify a company’s journey through AI adoption, we see maturity falling into four categories:

  • Category 1 is general purpose AI and productivity – think employees using ChatGPT, Gemini, CoPilot, etc
  • Category 2 is when organizations have internal use cases, building custom chatbots for HR or IT, for example
  • Category 3 includes external use cases like building public-facing GenAI applications, like customer service chatbots
  • Category 4 is agentic workflows which are made up of complex systems that take actions autonomously on behalf of users

These categories often run in parallel rather than in sequence, but it is in the last three categories that security becomes critical. That’s because organizations are building complex software on top of non-deterministic AI models, creating vulnerabilities that traditional firewalls simply cannot see.

Article continues below

Security is always a priority for business but, with AI, the concern is different – it’s a blind spot.

Security leaders have spent 20 years deploying and configuring firewalls and web application firewalls (WAFs) to protect the network, but those tools look at network traffic and usage, whereas AI attacks use natural language – and you can’t firewall a conversation.

That’s why 75% of CISOs are reporting AI security incidents, because their existing shields simply aren’t designed to catch these threats; why 91% have already detected attempted attacks on their AI infrastructure; and that is exactly why a whopping 94% are now prioritizing testing of their AI systems.

New categories of cognitive attacks

There are plenty of real-world examples of how AI is changing the threat model. A breach at Asana last summer stemmed from a tenant-isolation logic flaw in the MCP server that allowed cross-organization data exposure.

Are you a pro? Subscribe to our newsletter

Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!

Contact me with news and offers from other Future brandsReceive email from us on behalf of our trusted partners or sponsors