Using Machine Learning to Secure Networks: A Conversation with Kris Lovejoy

Hugh Taylor:                       Why don’t you tell me a little about yourself, your background in security, and in the company?

Kris Lovejoy:                      I have been in the security field since pretty much the beginning. I started out as a network engineer and was brought up on security principles within that context. I was an ethical hacker at TruSecure and went on to be CTO of Consul Risk Management, a Dutch cyber security company in the mainframe space, where we built one of the first security information and event management technologies. After we sold Consul to IBM, I served as IBM’s vice president of information technology risk and global chief security officer, and then built and ran its security services division. Most recently, I joined Northrop Grumman, specifically to spin out a subsidiary that had spent more than eight years designing an AI-based technology that acts as a sense-and-respond system for an enterprise. The technology sits within the network, identifies network-based threats, and then contains those threats using an orchestration layer. That’s BluVector.

Hugh Taylor:                       Okay, very impressive. Can you tell me more about the BluVector Solution?

Kris Lovejoy:                      BluVector is a network security solution that leverages AI to sense and respond to threats in real time. BluVector can work retrospectively, meaning you can feed it packet captures or network recordings for replay. Alternatively, you can run it actively on the network. It uses a set of machine learning algorithms to inspect the network traffic that’s passing through the platform. Based on the results the machine learning engines produce, a probability engine scores the event. If the event has a high probability score, a high likelihood of being malicious, then the technology will contain the threat at the endpoint layer, at a firewall, or someplace else in the network. Alternatively, it will enrich the event with network data, third-party intelligence, and additional secondary analysis that’s been performed on that event.

The system gathers all of that detail into a threat battle card, or threat dossier, which an analyst can use to very quickly determine what to do, because it tells you what this thing (intrusion/malware/threat) is, why it’s bad, and how it works, so that you can shut it down pretty quickly.
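The pipeline described above (several ML engines score an event, a probability engine combines the scores, and the result is either contained or enriched into an analyst-facing dossier) can be sketched roughly as follows. This is a minimal illustration, not BluVector’s actual implementation; every name, the averaging scheme, and the threshold are assumptions made for the example.

```python
# Illustrative sketch of a score-then-respond pipeline: each ML engine
# emits a score in [0, 1], a probability engine combines them, and
# high-probability events are contained while the rest are enriched
# into a "threat dossier" for an analyst. All names are hypothetical.

from statistics import mean

CONTAIN_THRESHOLD = 0.8  # assumed cutoff for "high likelihood of malicious"

def probability_engine(engine_scores):
    """Combine per-engine scores into a single probability (here, a mean)."""
    return mean(engine_scores)

def handle_event(event, engine_scores):
    """Contain a high-probability event, or enrich it for an analyst."""
    score = probability_engine(engine_scores)
    if score >= CONTAIN_THRESHOLD:
        return {"action": "contain", "score": score, "event": event}
    # Lower-probability events get context an analyst can act on quickly.
    return {
        "action": "enrich",
        "score": score,
        "dossier": {
            "event": event,
            "network_context": None,     # packet/flow detail would go here
            "third_party_intel": None,   # external threat-intel lookups
            "secondary_analysis": None,  # e.g. sandbox or static analysis
        },
    }
```

In a real system the combining function would be far more sophisticated than a mean, but the control flow — score, threshold, contain or enrich — matches the description above.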

Hugh Taylor:                       I’m interested in learning about security thought leaders and their impression of embedded threats, like malware in firmware, malicious code that’s put into appliances and other places, where it may even be installed at the manufacturing level and activated later. Do you consider this an issue that should be taken seriously? What’s your view on it?

Kris Lovejoy:                      This is a huge issue, and in fact I’ve seen this in practice. One of my roles at IBM was to manage the global incident response team. When I moved over from Global CSO to the services division, I took that team with me. We provided incident response services for the IBM Corporation, which is a massive network, as well as all of our strategic outsourcing customers. We also sold those services to other companies. We had oversight of something like 5,000 companies on a daily basis around the world with regard to security incidents. I was involved in a number of incidents associated with embedded device security.

One of the funnier examples comes from ATMs in South America. There was a scam where they would assemble the ATM and then drive it from the manufacturing floor to the bank. In between, in the van, people were unscrewing the back and deploying new software or updating the firmware on these ATMs so that they could reroute deposits.

Hugh Taylor:                       Do you feel that network hardware is at risk for this kind of threat?

Kris Lovejoy:                      I think all embedded devices are at risk. Essentially, these embedded systems have three layers. At the very base, you’ve got the specialized computer chips. They need to be really cheap and operational. Typically, the companies that make the chips use a bunch of open source software and a mix of proprietary components and drivers. They’ll create the operating system on the chip and will do about as little engineering as they possibly can in the design, development, delivery, and update of those devices, because the profit margin on the chips is really slim. There’s absolutely no financial incentive for them to update their board support package.

The second layer is the ODMs, the original design manufacturers. They are hired by a third party, like a medical device manufacturer, to build to specification. The ODM will pick the chip that meets the feature requirements at the cheapest possible price. They’ll try not to do a lot of engineering that would impact their margin.

Last, the ODM hands it over to the brand name company with their name on the box, who sticks on an application – the third layer. It’s basically a user interface. They might add a couple of new features, make sure everything works, and then be done.

The problem with this process is that no entity in any part of that supply chain has any incentive, any expertise, or any ability to patch or manage the whole. These devices have been built and deployed by people who want to keep their margins as high as possible. It’s not in their financial interest to layer on a lot of security capabilities, and maintaining these older things is not a priority, especially when these devices are selling for pennies or a couple of bucks apiece. It’s a perfect storm — components that are very old, some customized, some open source. They can’t be upgraded, they can’t be patched.

Then you hand these devices off to a third-party end user, like a home router user who’s not skilled. They don’t know how to manually download and install patches, and they don’t get alerts about security updates from the chip manufacturer. These things are just rife with vulnerabilities that can be exploited.

And yes, it is possible for the bad guys to slip code – like logic bombs – into the firmware on the manufacturing line which will find its way into the embedded devices.

Hugh Taylor:                       Can your solution help detect anomalies associated with this kind of malware?

Kris Lovejoy:                      BluVector was specifically designed to protect against this kind of threat. It came out of the intel and defense community. If you think about intel and defense – they’ve got a lot of things like unmanned vehicles and drones; even a Bradley tank is really a computer on tracks. Our technology was designed to enable the systematic detection of and response to advanced threats for those kinds of embedded devices operating over a network. That said, BluVector’s technology wasn’t designed for deployment in the home by an individual consumer. We’re not in a position to protect your personal refrigerator from becoming a bot, like what happened in 2016 when a bunch of refrigerators were weaponized to attack Brian Krebs’ website.

Hugh Taylor:                       It seems like there’s a lot of potential for chaos if you combine the device level attack with knowledge of who is who, where they are and what device they are using – taken from data stolen in the Equifax hack, the breach of the Office of Personnel Management and so forth. You have kind of a combined attack capability where the malicious actors can map organizations and impersonate people.

Kris Lovejoy:                      Oh absolutely. Think about the use of robotics in manufacturing. This is one of the things that keeps me up at night. Today, there are around two million robots being used for manufacturing labor. Over the next few years, we expect the share of work done by robots to jump from 20 percent to 80 percent. Now, think about a world where the manufacturing engine of the United States is dependent on industrial robots. What would happen if a form of malicious code could steal or erase data associated with the operational management of the manufacturing line? Those processes, all of those routines, could be stolen and used by a third party. That could create economic hardship. What would be worse is if there were malicious code embedded in those devices that acted like a kill switch. An example is Stuxnet, which impacted the Iranian uranium enrichment program.

Hugh Taylor:                       Let’s say the President of the United States calls and says, “Listen, I need your help. I want you to drop everything and help make the United States more secure from a cyber security standpoint.” What would you do?

Kris Lovejoy:                      It would involve looking back at what is already in the field and then looking at the future. You have to separate the two. When it comes to what is already in the field, I diverge from my peers. They would argue that we have to force the manufacturers, the suppliers, whomever it is that has their name on the box, to actually patch and maintain those devices. I just don’t think that’s possible at this point.

The technology is what it is. With the trillions of devices we have in the field today, it’s not possible, outside of certain categories like medical devices, to force the maintenance of those devices through massive patching and upgrading. I think we have to mandate that there be some level of detection and response capability so that the risk can be minimized. My technology and others could be deployed very effectively to help organizations monitor these OEM devices and ensure that if malicious code were deployed and anomalous activity were emanating to or from those devices, there could be some kind of containment.
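One common way the kind of monitoring described above is framed, when OEM devices can’t be patched, is to baseline each device’s normal network behavior and flag sharp deviations for containment. A toy sketch under that assumption; the function names, the bytes-per-minute metric, and the three-sigma threshold are all illustrative, not any vendor’s actual method.

```python
# Toy baseline-and-flag monitor for unpatchable embedded devices:
# learn a device's typical traffic volume, then flag observations
# that sit far above the baseline so they can be contained.

from statistics import mean, stdev

def build_baseline(samples):
    """Baseline a device from observed traffic samples (e.g. bytes/minute)."""
    return mean(samples), stdev(samples)

def is_anomalous(observed, baseline, sigma=3.0):
    """Flag traffic more than `sigma` standard deviations above the mean."""
    mu, sd = baseline
    return observed > mu + sigma * sd
```

A real deployment would model far more than volume (destinations, protocols, timing), but the principle is the same: because the device itself can’t be fixed, the network around it watches for behavior the device has never exhibited before.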

Going forward, it is important to begin to adopt some standards for how we build and deploy these devices and what the responsibility is for managing those devices on an ongoing basis.

Hugh Taylor:                       One final question: Do you think that the government or some authority, a higher authority than exists now, should get involved in really enforcing standards of this kind?

Kris Lovejoy:                      Here is the sad reality of the way cybersecurity works. There are three reasons why people will ever buy or exercise risk reduction activities. The number one reason is compliance, the number two reason is crisis, and the number three reason is … my CEO woke up on Christmas morning, got a new iPad, and wants to plug it into the network. Those are the three reasons you think about security. It has been proven time and again that highly regulated industries have a better overall security posture. I do think the answer is for there to be some regulatory authority that deploys reasonable, not prescriptive, controls that require organizations to implement security mechanisms in and around the way they build, deploy, and manage their software, as well as the way in which they continually monitor that software and contain threats on an ongoing basis.