AI for Cybersecurity: Superhero or Sidekick?

We all know that complexity is the enemy of effective cybersecurity. Yet across the globe, organizations are transforming their operations to deliver new digital experiences, and that transformation is adding risk to already complex environments. We have found that incident volumes increased by 13% in 2023, rising to 16% among enterprises, where scale and operational complexity are greatest.

It’s no surprise that in this fast-evolving landscape, improving security and reducing risk is a top priority for both IT and business leaders, often ranking above revenue growth and customer experience. The good news is that AI is already playing a major role in transforming enterprise IT operations, and it could do the same for security posture. The CIO’s role will be to ensure that adopting AI doesn’t create more problems than it solves.

A useful sidekick

Our research shows that 71% of organizations are looking to expand investments in AI and machine learning over the coming year. In particular, generative AI (GenAI) has caught the eye of many IT leaders after its breakout in 2023. From a cybersecurity perspective, it could be deployed in a variety of use cases, from training employees and security teams to testing vulnerability scanners and prioritizing security updates. But one of its most obvious uses is enhancing the productivity of incident responders.

GenAI’s ability to rapidly summarize vast quantities of information and provide answers from an established data set is fast making it a hit with incident response teams. It allows teams to reduce the time they spend coordinating and processing information during incident management – for example, talking to stakeholders and customers. That means they can spend more time resolving incidents – which is critical in a world where customer experience can have a major impact on revenue and brand reputation.

In this context, GenAI is more of a sidekick than a superhero. It can support security teams and incident responders with their work. But give large language models (LLMs) too much license to “think” and resolve incidents independently, and CIOs may run the risk of dangerous hallucinations. In those circumstances, the risk outweighs any expected rewards.

GenAI and beyond

Fortunately, security teams don’t only have GenAI at their disposal. Before incidents even arise, they can leverage event-driven automation to hand most of the heavy lifting to machines. Consider incident volumes: multiple alerts for the same underlying issue can, at best, be annoying. In a worst-case scenario, they can significantly impact a security team’s ability to respond to an incident. But AI and automation can group alerts for related problems into the same incident, turning down the noise so responders can concentrate properly. Similarly, intelligent tooling can suppress low-priority events so that only the most important ones are surfaced to responders.
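As a rough illustration, here is a minimal Python sketch of that kind of alert grouping. The alert fields and the grouping key are assumptions made for the example, not any particular product’s schema.

```python
# Minimal sketch: collapsing related alerts into a single incident.
# The field names ("service", "check") are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Alert:
    service: str
    check: str
    message: str


@dataclass
class Incident:
    key: tuple
    alerts: list = field(default_factory=list)


def group_alerts(alerts):
    """Group alerts that share a service/check pair into one incident."""
    incidents = {}
    for alert in alerts:
        key = (alert.service, alert.check)
        incidents.setdefault(key, Incident(key=key)).alerts.append(alert)
    return list(incidents.values())


if __name__ == "__main__":
    raw = [
        Alert("checkout", "high_latency", "p99 latency above threshold"),
        Alert("checkout", "high_latency", "p99 latency above threshold"),
        Alert("payments", "error_rate", "5xx rate above 2%"),
    ]
    for incident in group_alerts(raw):
        print(incident.key, f"({len(incident.alerts)} alerts)")
```

Three raw alerts become two incidents: the duplicated latency alerts are rolled into one, so responders see the problem once instead of once per alert.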

During triage, machine learning and automated diagnostics can be deployed to surface useful context such as probable incident origin, how past incidents were resolved, and whether any other teams are experiencing the same problem. This will accelerate response by eliminating the need for manual information gathering.
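To make the idea concrete, the sketch below scores a new incident against a small history of resolved ones using simple word overlap. Real triage tooling would rely on much richer signals (embeddings, service topology, ownership data), so treat the field names and the scoring as illustrative assumptions.

```python
# Hypothetical sketch: surfacing similar past incidents during triage.
def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the words in two incident summaries."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def similar_past_incidents(new_summary: str, history: list[dict], top_n: int = 3):
    """Return the most similar resolved incidents, including how they were fixed."""
    scored = [(similarity(new_summary, past["summary"]), past) for past in history]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [past for score, past in scored[:top_n] if score > 0]


history = [
    {"summary": "checkout latency spike after deploy", "resolution": "rolled back release"},
    {"summary": "database connection pool exhausted", "resolution": "increased pool size"},
]
for past in similar_past_incidents("latency spike on checkout service", history):
    print(past["summary"], "->", past["resolution"])
```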

Next comes incident resolution. Here, GenAI and automation can be deployed as a kind of incident response co-pilot to answer critical questions and streamline workflows. In doing so, the technology can help responders investigate probable causes via natural language interactions and suggest paths for remediation, accelerating mean time to repair. It will also help to automate manual, time-consuming tasks like creating communication channels and drafting updates, further improving responder productivity and resolution times.
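As a hedged sketch of the co-pilot idea, the snippet below assembles a prompt for drafting a stakeholder update. `call_llm` is a placeholder for whichever model API you actually use, and the incident fields are illustrative assumptions.

```python
# Hypothetical sketch: drafting a stakeholder update with an LLM co-pilot.
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call via your provider's SDK."""
    raise NotImplementedError("wire this up to your model provider")


def draft_status_update(incident: dict) -> str:
    """Build a constrained prompt from confirmed incident facts only."""
    prompt = (
        "Draft a brief, plain-language status update for customers.\n"
        f"Incident: {incident['title']}\n"
        f"Impact: {incident['impact']}\n"
        f"Current status: {incident['status']}\n"
        "Do not speculate about root cause; state only what is confirmed."
    )
    return call_llm(prompt)
```

Keeping the prompt limited to confirmed facts, and having a human review the draft before it is sent, is one simple way to keep the sidekick from hallucinating its way into customer communications.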

If there’s one thing GenAI is great at, it’s communicating. So having a tool that takes the time and effort out of sharing automated updates with key stakeholders and customers can add tremendous value for incident responders. It ultimately helps build trust internally and enhance the customer experience.

Front and center

GenAI can be a tremendously useful tool for responding to and resolving security breaches and incidents, saving teams valuable time on tasks like communicating with stakeholders while suggesting different ways to approach difficult problems. But it’s not a panacea; it works best when used to augment rather than replace the work of humans.

It’s also not the only game in town. Machine learning and other AI tools can also be deployed to good effect to help reduce alert overload and enhance triage. But whatever flavor of AI CIOs choose to assist their teams, humans will remain front and center of incident response, ably supported by their GenAI sidekicks. 



