Artificial Intelligence vs. Threat Intelligence

I caught up with David Dufour, Vice President of Engineering at Webroot, at RSA 2019. We shared the same impression: last year’s hype about Artificial Intelligence (AI) and Machine Learning (ML) in cybersecurity had faded quite significantly. “People got tired of hearing about AI,” Dufour said. “Many vendors were adding it to the messaging but not following through very well.”

David Dufour, VP of Engineering at Webroot

He’s right, and his insight goes way beyond the tendency of tech businesses to promote the latest exciting idea, even if their product doesn’t really offer it. The difficulty with AI as a cybersecurity countermeasure flows from some inherent challenges in making machines think and differentiate real threats from false positives and meaningless noise. The industry quickly got wise to the fluff.

AI is hard. To prove my point, I present the accidental but incredibly prescient attack on AI in the 1980 movie “Airplane!” This is in keeping with my global theory that the vast depths of the human experience can be easily explained by classic comedy films.

The hilarious dialogue actually mirrors many of the difficulties computer scientists have faced while teaching machines to think.

Dr. Rumack: Captain, how soon can you land?

Capt. Oveur: I can’t tell.

Dr. Rumack: You can tell me, I’m a doctor.

Capt. Oveur: No, I mean I’m just not sure.

Dr. Rumack: Well, can’t you take a guess?

Capt. Oveur: Well, not for another two hours.

Dr. Rumack: You can’t take a guess for another two hours?

“You can tell me. I’m a doctor.” is funny because Dr. Rumack appears to be so stupid (while maintaining the societally superior, authoritative role of a doctor) that he blindly follows a rule (you can tell a doctor anything) while missing the obvious intent of the Captain’s comment. Following a rule and missing context… sound familiar? This is the essence of weak AI. It follows the rules, but it couldn’t hit the truth if it were the broad side of a barn.
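To make the rule-versus-context point concrete, here is a deliberately naive sketch of a purely rule-following “detector.” It is a hypothetical example, not anything Webroot ships: like Dr. Rumack, it applies its rule literally and ignores context, so a real phishing lure and a harmless note from a coworker look exactly the same to it.

```python
# A deliberately naive, rule-following "detector" (hypothetical example).
# It applies its keyword rule literally and has no notion of context.

SUSPICIOUS_KEYWORDS = {"password", "invoice", "urgent", "verify"}

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any 'suspicious' keyword."""
    words = {w.strip(".,!?:").lower() for w in message.split()}
    return bool(words & SUSPICIOUS_KEYWORDS)

# A real phishing lure and a harmless internal reminder both trip the rule:
print(naive_flag("URGENT: verify your password at http://evil.example"))    # True
print(naive_flag("Reminder: the invoice template is on the shared drive"))  # True (false positive)
```

That second result is the “broad side of a barn” problem in miniature: the rule fires, but the truth is nowhere nearby.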

Let’s face facts.

Just because you have an artificially intelligent machine does not mean you can easily make that machine deliver good threat intelligence. While Artificial Intelligence and Threat Intelligence share the word “intelligence,” they use it in tellingly different ways. In AI, the “I” suggests the classic meaning of intelligence, i.e., the mind’s faculty for comprehending the truth. Threat intelligence, in contrast, comes from the espionage definition of the word: it is about arriving at a useful, accurate truth about adversaries.

Webroot has brought the two concepts of intelligence together. They’ve done the hard work of turning the intelligence in “Artificial Intelligence” into effective threat intel. “You have to move past the ‘check the box’ notion of AI in cybersecurity,” Dufour noted. You can’t, in other words, blindly follow a rule like “You can tell a doctor anything.”

The Webroot solution uses machine learning to classify billions of IP addresses and URLs, among other factors, to detect serious threats in near real time. The toolset is then able to block malicious traffic, stop phishing attacks, distinguish good files from bad and so forth. Without this kind of deep, continuous analysis, you might as well be flying on instruments…
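To give a sense of what that kind of classification looks like in miniature, here is a minimal sketch assuming scikit-learn, with made-up URLs and labels. It illustrates ML-based URL scoring in general; it is not a description of Webroot’s actual pipeline or data.

```python
# Minimal, illustrative sketch of ML-based URL classification (not Webroot's system).
# It learns character n-gram patterns that tend to separate benign URLs from
# malicious ones in the toy training set below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_urls = [
    "https://www.example.com/products",          # benign
    "https://docs.example.org/getting-started",  # benign
    "http://login-example.com.verify-acct.ru/",  # malicious
    "http://198.51.100.7/paypal/secure/update",  # malicious
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

# Character n-grams pick up tell-tale substrings: raw IPs, odd TLDs, "verify", etc.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(train_urls, labels)

score = model.predict_proba(["http://secure-verify-acct.example.ru/login"])[0, 1]
print(f"probability malicious: {score:.2f}")
```

A real deployment differs from this toy in scale and feedback, not just in features: billions of labeled observations, continuous retraining, and reputation context around each IP and URL are what turn a classifier like this into usable threat intelligence.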