AI for cybersecurity shines with promise, but challenges abound

Companies are rapidly adopting cybersecurity products and systems that incorporate artificial intelligence (AI) and machine learning, but the technology presents significant challenges and cannot replace human analysts, experts say.

In a Wakefield Research survey published this week, for example, nearly half of IT security professionals (46%) said their AI-based systems create too many false positives to handle, 44% complained that critical events are not properly reported and 41% don’t know what to do with AI outputs. A total of 89% of companies reported issues with cybersecurity solutions that claimed to have AI capabilities.

Nearly 90% of businesses struggle with AI-based cybersecurity solutions.
Source: Devo

Not all AI-based projects are created equal, as some technologies are more mature, says Gunter Ollmann, chief security officer at Devo, which sponsored the survey.

“When they talk about deploying AI for cybersecurity…these are the projects that usually fail,” he says. “In particular, NLP [natural language processing] is touted as a technology that can do some neat things for your security and as a way to manage your attack surface, but it hasn’t really been successful.”

The promised benefits of AI for cybersecurity
Integrating machine learning and artificial intelligence capabilities into cybersecurity products has been widely hailed as a way for companies to gain more visibility into attacks against their data and infrastructure, and as a means of responding faster. In December, for example, the business consulting firm Deloitte called the trend toward more AI in cybersecurity a way to tame the complexity of today's business infrastructure, which includes more devices, more cloud services, and more employees working from home.

“Cyber AI can be a force multiplier that allows organizations not only to react faster than attackers can move, but also to anticipate those moves and act on them in advance,” the consulting firm said.

The most mature uses of machine learning in cybersecurity products are endpoint threat detection and user and entity behavior analytics (UEBA) to identify at-risk users and compromised devices, says Allie Mellen, a security and risk analyst at Forrester Research.

AI security challenges abound
While finding and classifying IT assets topped the list of uses for AI-based cybersecurity (79% of companies use the technology for this application), it also topped the list of challenges, with 53% of respondents calling it a problematic use case, according to the Wakefield Research survey of 200 IT security professionals.

About a third of companies had difficulty deploying systems correctly, and a quarter criticized vendors for not updating the product enough to be useful.

There are certainly differences of opinion between business executives, who largely see AI as a silver bullet, and security analysts in the field, who have to deal with day-to-day reality, says Devo's Ollmann.

“In the trenches, the AI part is not living up to expectations and hopes for better triage, and in the meantime, the AI that's used to detect threats works almost too well,” he says. “We see that the net volume of alerts and incidents coming into the hands of SOC analysts continues to increase, while the ability to investigate and close these cases has remained static.”

The ongoing challenges that come with AI capabilities mean companies have yet to properly calibrate their trust in the technology. A majority of respondents (57%) said their companies rely more, or a lot more, on AI functionality than they should, compared with just 14% who said their companies don't use AI enough.

Additionally, few security teams have enabled automated response, partly because of this lack of trust, but also because automated response requires tighter integration between products that just doesn't exist yet, Ollmann explains.

Augmentation, not replacement
Another dimension of the AI challenge is the lack of clarity about its role relative to security personnel. While AI applications can provide real benefits, it's important to understand that they augment the human analysts in the security operations center (SOC) rather than replace them, Forrester's Mellen points out. It's a conclusion that many companies probably don't want to hear, given the trouble many have hiring skilled cybersecurity professionals.

“The most important resource we have is the people working in the SOC today,” Mellen says. “AI, of course, helps with detection…but we have to recognize that it's not going to be a complete game-changer when it comes to security. The way we use machine learning today isn't even at the point where it could take on most of the responsibilities of a human analyst.”

Devo's Ollmann also cautions that organizations need to have realistic expectations about what AI can and cannot do.

“I think there's the sci-fi view that AI will be a superhuman that runs 1,000 times faster than the best living human; leave that for the sci-fi books,” he says. “Augmentation is about making the human faster and more efficient at what they do, and also filling in the skill gaps they have.”

The challenges all suggest that trained human analysts with expertise in machine learning and AI are needed to get the most out of AI-enhanced cybersecurity products. And AI systems that can explain their findings will be needed in the future to augment human analysts and maintain trust, Ollmann says.
