Is AI a Magic Wand for Security?

By Ofer Elzam


Artificial intelligence (AI) carries the connotation that it's poised to do many magical things, and improving enterprise security posture is one of them. The truth, however, is more pragmatic: AI will still need the human touch to enable automation that ensures robust protection against threats.

The basic promise of AI is that it will augment, or even replace, teams of people: operating at a scale beyond human abilities or available resources, seeing what people can't see, and reducing the effort, burnout and churn rates common to today's security teams.

But AI is an overloaded term. It isn't magic, and it still needs people in order to be effectively designed and applied. In the simplest terms, AI is a computing function that performs a cognitive operation typically done by humans. Applied to cybersecurity, it decides what needs to be done based on information that does not perfectly match any previous criteria, situations, or pre-existing rules.

Further, you can't talk about AI without mentioning machine learning and deep learning. Machine learning is a technique used to train a computer program to identify patterns in large sets of data without an exact match to existing data; deep learning is a technique computer systems use to make deductions based on multiple abstracted, semantic concepts, typically derived by machine learning.
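To make the machine learning piece concrete, here is a minimal sketch, not taken from any particular product, of an anomaly detector that flags activity without an exact match to anything it has seen before. It uses scikit-learn's IsolationForest, and the network-flow features are invented stand-ins for real telemetry:

    # A minimal, illustrative sketch: train an anomaly detector on network-flow
    # features so it can flag traffic that doesn't exactly match prior data.
    from sklearn.ensemble import IsolationForest

    # Each row: [bytes_sent, bytes_received, duration_seconds, distinct_ports]
    normal_flows = [
        [1200, 3400, 2.1, 1],
        [900, 2800, 1.7, 1],
        [1500, 4100, 2.8, 2],
        [1100, 3000, 2.0, 1],
    ]

    # Fit on traffic assumed to be benign; no signatures or exact matches needed.
    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(normal_flows)

    # A flow resembling data exfiltration: huge outbound volume, many ports.
    suspect_flow = [[250000, 400, 95.0, 40]]
    print(model.predict(suspect_flow))  # -1 means the model flags it as anomalous

The point isn't the specific algorithm; it's that the model generalizes from patterns in the data rather than matching exact prior examples.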

AI can't just be bought off the shelf and installed. Like any security solution, it needs to be properly configured, and while AI can enhance the automation of security rules and tasks, it isn't required for that automation.


There’s a Person Behind the AI Curtain

Many organizations are looking at AI to bolster their cybersecurity posture. According to a Capgemini survey released earlier this year, nearly two thirds of respondents think AI will help identify critical threats, while 69 percent believe AI will be necessary to respond to cyberattacks. The pace of adopting AI in cybersecurity is also picking up, according to the survey: nearly one in five organizations reported using AI before 2019, and almost two out of three plan to employ AI by 2020.

However, applying AI to improve cybersecurity isn't as simple as flipping a switch. Specialized AI capabilities are a combination of accumulated knowledge, human training by a vendor, adversarial computer-versus-computer training, and layered security correlation. All of this is necessary for AI to be effectively applied within functional security areas.

The available data also determines the efficacy of AI when applied to security, whether it's the diversity of sources, the scale of the data or biases within it, among other factors. Applying AI to payload data is a lot different from applying it to configuration and policy data, and let's not forget that a human security expert plays a critical role in how well AI can be used to enhance security.

Payload data comes from a wide range of sources, including email content, website content, application data and network traffic data. It's a generous stream of attack data that's easily identified and understood, and patterns of attack can be derived by looking at it. The Capgemini survey noted that more than one third of executives make extensive use of AI for predicting cyber threats, scanning through vast amounts of data of various types to make predictions based on how the system has been trained. Preemptive actions can then be taken to avoid attacks.

But there are also patterns of defense: all the configurations and policies already in place. Just as important as understanding attack patterns is analyzing these configurations and policies and understanding whether they are in fact secure. If so, can they be applied elsewhere? If not, how do we correct the affected user or server?
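To ground the payload side of that, here's a toy, purely illustrative sketch of learning attack patterns from message content. The emails, labels and model choice are all invented for the example:

    # A toy sketch of deriving attack patterns from payload data: a tiny
    # spam classifier trained on a handful of invented email bodies.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "Your invoice is attached, please review before Friday",
        "Meeting moved to 3pm, agenda unchanged",
        "URGENT: verify your account now or it will be suspended",
        "You have won a prize, click here to claim your reward",
    ]
    labels = ["ham", "ham", "spam", "spam"]

    # Turn raw payload text into word-count features the model can learn from.
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(emails)

    classifier = MultinomialNB()
    classifier.fit(features, labels)

    # Score a new message the system has never seen an exact match for.
    new_message = ["Click here now to verify your suspended account"]
    print(classifier.predict(vectorizer.transform(new_message)))  # likely ['spam']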

While anti-virus and anti-spam tools are comparing what they're seeing now to previous attacks, when it comes to policy you need a safe configuration for baseline comparison so the right update recommendations can be made based on new data. Compliance requirements can help, as they identify areas that are over-exposed and present a significant threat surface; you should always have the least exposure you can afford without hampering business operations. And as a data source, compliance plays a critical role in automatically bolstering your guardrails and applying your rules whenever the environment changes.

Applying policy rules, and automatically updating them based on new and existing data when the environment changes, is automation, but it's not necessarily AI.
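A minimal sketch of that kind of rule-based guardrail makes the distinction plain. The rule objects and the hard-coded check below are entirely hypothetical; there's no learning involved, just logic a person wrote:

    # A minimal sketch of rule-based policy automation: a guardrail written by
    # a person is reapplied whenever the environment changes. The rule
    # structure and the guardrail itself are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        name: str
        source: str  # CIDR of allowed sources
        port: int

    # Guardrail defined by a human expert: management ports must never be
    # exposed to the whole internet.
    MANAGEMENT_PORTS = {22, 3389}

    def audit(rules):
        """Return recommended fixes; plain if/then logic, no AI involved."""
        findings = []
        for rule in rules:
            if rule.port in MANAGEMENT_PORTS and rule.source == "0.0.0.0/0":
                findings.append(
                    f"Restrict '{rule.name}': port {rule.port} is open to the internet"
                )
        return findings

    # Re-run the audit every time a configuration change lands.
    current_rules = [
        FirewallRule("allow-web", "0.0.0.0/0", 443),
        FirewallRule("allow-ssh", "0.0.0.0/0", 22),
    ]
    for finding in audit(current_rules):
        print(finding)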


AI if Necessary, but Not Necessarily AI

Just because a decision was made automatically doesn't mean AI, machine learning or deep learning was involved.

Automation is essential for supporting quick, nimble decisions that update configurations, but these decisions are still rooted in human intelligence—security experts who see new threat surfaces and reconfigure security solutions and update policies accordingly. These manual changes might guide automation in the future, but there’s always a need for people to be involved. Automation that doesn’t even fall under the umbrella of AI is augmenting the work of humans, who remain best suited to manage large, complex environments with multiple security solutions and firewalls from multiple vendors.

Many capabilities within solutions depend on automation, but while machines may be doing things on their own, that's not necessarily AI, and not all technology currently available can respond autonomously to changing situations. Human knowledge in the form of a security expert is still critical alongside automation to provide the necessary risk analysis, understanding of compliance pressures, rule recommendations and policy cleanup.


People Must Power Automation

A system that can make changes independently could be considered a flavor of AI, but security automation still requires decisions based on rules created and optimized by people.

In the long term, AI has a lot of potential to automate enterprise security in ways that complement the work people still need to do, but remember:

  • You can’t effectively apply AI to cybersecurity without skilled people
  • AI also requires machine learning and deep learning if it’s to be applied to security
  • Automation can still do a lot to improve security posture without AI

Like any emerging technology, AI is not a magic wand for solving all cybersecurity challenges. Ultimately, it’s another option in the toolbox that complements other tools and the security professionals who use them.

About the Author

Ofer Elzam is responsible for the continued development of FireMon GPC, the industry's first and only solution to deliver persistent policy enforcement for complex, hybrid network environments. Before joining FireMon, Elzam was VP of product at Dome9 Security. Under his leadership, Dome9 became the leader in securing multi-cloud deployments, which led to its acquisition by Check Point Software. Prior to Dome9, Elzam was the director of Sophos' network security product line, where he led the company's transition to the next-generation XG Firewall platform. Earlier, Elzam worked at Cisco, serving as both a strategic architect of security technologies and executive director of product management, where he led ScanSafe, which was acquired by Cisco in December 2009. Elzam also spent 10 years serving in a variety of product leadership positions, including as CTO at Gemalto.

Photo by fotografierende from Pexels