With AI and Security, ‘Everyone is Still Learning’
For many organizations, the biggest potential risk that arises from the use of AI tools is the leakage or mishandling of sensitive data.

October 30, 2025 | 3 min read

The rush to deploy AI tools in every corner of the enterprise has become a major concern for CISOs, and while those tools can provide efficiency and productivity gains, security leaders are concerned that adoption is rapidly outpacing their teams’ ability to monitor and control the use of AI.
The use of AI tools began just a couple of short years ago as something of a novelty, but it has quickly become an integral, even mandatory, part of many workers' everyday activities. Writing reports, creating presentations, analyzing data: these are all tasks that people routinely hand over to AI tools, hopefully with the approval of their employers. The problem arises when people use new tools without the knowledge or approval of their security teams, particularly when sensitive data is involved.
“The problem that we see is it’s just hard to keep up. People are always telling us about this or that new tool. We want to do the right thing but also want to make ourselves more efficient. We want to ensure our people are still innovating. The moment security or IT block something, they find a way around and that’s when you end up with shadow AI. For us it’s finding a way to collaborate,” Mark Hillick, CISO of financial services firm Brex, said during a roundtable discussion on AI risks this week.
For people of A Certain Age, the risk of shadow AI may bring back memories of the late 1900s and early 2000s when WiFi was a thing but not yet the thing. Many IT departments were slow to adopt the idea of sending packets flying through the air, so employees hungry for the ability to work on spreadsheets in the cafeteria downstairs went to Circuit City or Fry’s, bought their own WiFi access points and dropped them into the network.
And thus was born (or reborn for the internet age) shadow IT.
The security risks of rogue WiFi routers in a network are somewhat different from those stemming from the usage of unapproved or unexamined AI tools, but the broader challenge is the same: enabling greater productivity while keeping the organization safe. The speed with which new AI tools are emerging and finding their way into enterprises is making this even more difficult. In a new report out today, 1Password found that 73 percent of knowledge workers are encouraged to use AI tools in their work, but 27 percent say they use tools that haven’t been officially approved.
“Employees are constantly being dangled new AI tools and it’s a really hard and nuanced message to say we want you to be AI first in how you think about your work, but don’t be too much so,” said Susan Chiang, CISO of Headway.
“Something about AI is that it’s death by a thousand low to medium risks. What are the risks we can address with education and enablement programs versus saying, hey that’s a medium risk and we’re focused on high risks and then realizing we don’t have the bandwidth to get to those.”
For many organizations, the biggest potential risk that arises from the use of AI tools is the leakage or mishandling of sensitive data.
“The last thing anyone wants to do is give any advantage to the competition. Our people are pretty careful about where they’re putting sensitive data. In some respects the controls aren’t really keeping up with the technology. We’re still very much in the early days. As network defenders we’re always slightly behind the curve on these things,” said Mark Hazelton, CISO of Oracle Red Bull Racing.
“You have to find ways to say yes or ways to say no that sound like yes.”
The alternative, as we’ve seen many times over, is that people will find a way to use the new tools, regardless of whether they’re approved.
“Everyone is still learning. If anyone tells you that they’ve solved this, they’re lying,” Hillick said.
Dennis Fisher is an award-winning journalist and author. He is one of the co-founders of Decipher and Threatpost and has been writing about cybersecurity since 2000. Dennis enjoys finding the stories behind the headlines and digging into the motivations and thinking of both defenders and attackers. He is the author of 2.5 novels and once met Shaq. Contact: dennis at decipher.sc.