Securing Critical Infrastructure: A Conversation with David Dingwall
Hugh Taylor: Tell us a little bit about yourself and your background and how you came to be doing this kind of work.
David Dingwall: I was quite lucky to be a student at a university where Unix was used on my degree course, in the very, very early days of Berkeley Unix when it was provided free to universities. After that I worked in client-server computing in the City of London, then for Siemens, which was using Unix systems for controlling and protecting nuclear power stations and rail and transportation systems worldwide. And all of that was pretty much with a security focus. Around 2000 I moved into the information security business with startups in the PKI space, and I’ve been going along that track to date.
Hugh Taylor: Do you feel that there are threats hidden and embedded in devices, whether it’s a network router or a server or a smartphone?
David Dingwall: Either at the manufacturing stage, or someone intervening in the logistics supply chain to make alterations, so I’m assuming both.
Hugh Taylor: Can you elaborate on how you think someone could interfere in the supply chain?
David Dingwall: Okay, there are some good examples of that around the world, and the most publicly quoted ones are some of the policy and logistical changes that have been made to networking equipment by the “Five Eyes” countries: the US, the UK, Australia, New Zealand and Canada. There has been evidence of networking equipment being shipped to logistics centers in Singapore and the Middle East where the firmware and parts of the motherboard were swapped out and changed before those devices were supplied to various markets in Asia. And that’s public information that’s been verified.
It’s not necessarily just, for example, the expectation that a no-name mobile phone coming from someplace in China may have suspicious firmware dripping information back to a vast data center, a data warehouse, somewhere in China. There are various countries operating these programs around the world.
If you’re a multinational organization, there’s no guarantee that the firmware in your switch or router or infrastructure is consistent; the same physical equipment delivered in Canada would not necessarily carry the same firmware as equipment delivered to some infrastructure in Indonesia. There could be a number of parties with a plan to adapt the firmware for whatever reason, whatever policy and security reasons, and having been in this business for so many years, I have an expectation that that kind of stuff just happens.
Hugh Taylor: Who would do this and why?
David Dingwall: Well, there’s government sponsorship. And who they do it to is based on who they either see as bad actors or regard as somehow gray and unknown, and about whom they want more information. On the other side are private organizations, many of whom have links to governments or government agencies. Russia and China are the ones people talk about the most, where it seems to be individual companies doing this work, trying to penetrate devices and plant malware or modified firmware. And the association between the organization and a government agency is tenuous to the point of being unprovable, certainly from a public perspective.
Hugh Taylor: Right. So for example, last month the United States Government put pressure on AT&T to cancel an agreement with Huawei for their cellphones. Congress said this company has ties to the Chinese intelligence services, and they worry that the phones would be used for spying. But my feeling is, and I’m curious to get your take on this: aren’t most major electronics companies in China associated with the government in some way, or is that just kind of a myth?
David Dingwall: Well, without having the security clearances, and I certainly don’t have those clearances anymore, it’s a bit difficult to comment with any real justification, beyond a working assumption that it’s probably true. As an information security professional, you have to have a kind of working assumption that this kind of behavior goes on all the time.
Hugh Taylor: Let’s say you’re advising a client and they ask what they can do to detect the presence of these kinds of malicious elements.
David Dingwall: That’s kind of tricky, because there’s been a major change in expectations about updating devices in general. As you know, there’s been a fair amount of scandal about poorly configured security definitions and policy in all kinds of devices. The new NIST regulations and expectations hold that any device that has a security policy needs to have a process by which that policy can be updated remotely. There are actually very good procedural reasons why that’s a good thing, but that whole process could be subverted under government mandate saying this firmware needs to be changed to something else. And the end user won’t know anything different; they’ll just see it as part of the standard security update coming from the software vendor.

That’s almost impossible to detect. Doing things like comparisons between a known version of a ROM and the version that’s actually on a device, that’s a pretty mechanical exercise. But the world is much, much more fluid than that these days, especially with the assumption that every device needs a way to update its security and its firmware. It’s very tricky. A North American audience would probably be pretty shocked that some of this is allegedly being sponsored by their own governments. You wouldn’t expect that kind of behavior, but in a grownup, interconnected world, every government has security policies, they have targets, and they all want to find ways to get intelligence.
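A minimal sketch of that mechanical comparison might look like the following: hash a firmware image dumped from a device and compare it against a known-good digest. The release name and hash value below are illustrative placeholders; in practice the known-good digests would come from the vendor or from a reference device you trust.

```python
import hashlib
import sys

# Hypothetical table of known-good digests, e.g. published by the vendor
# for each firmware release. These values are illustrative placeholders.
KNOWN_GOOD_SHA256 = {
    "router-fw-12.4.1.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_firmware(path: str, release_name: str) -> bool:
    """Hash a dumped firmware image and compare it to the reference digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    actual = h.hexdigest()
    expected = KNOWN_GOOD_SHA256.get(release_name)
    if expected is None:
        print(f"No known-good hash recorded for {release_name}")
        return False
    if actual != expected:
        print(f"MISMATCH: {path} hashes to {actual}, expected {expected}")
        return False
    print(f"OK: {path} matches the reference digest")
    return True

if __name__ == "__main__":
    verify_firmware(sys.argv[1], sys.argv[2])
```

The catch, as the answer above suggests, is that a legitimately signed update can replace the image and the published digest together, so a pure hash comparison only detects tampering relative to whatever reference you chose to trust.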
Hugh Taylor: Is there a remedy for this problem? Let’s say, for the sake of argument, that there’s a problem in the United States: we have a lot of devices used in corporate and government systems that may be compromised, and we may have unauthorized eavesdropping going on, or potential disruptions, that kind of thing.
David Dingwall: It’s not the technology; that would be my first reaction. A good example of that is happening in North America, but also in Continental Europe: wider awareness of the implications of running critical infrastructure, so that everybody’s water and wastewater systems are now classed as critical national infrastructure. In North America there’s this massive program defined by NIST about how to maintain, support and manage the risk of either connected or disconnected devices and how that risk should be mitigated and controlled. And within that program there’s a massive push for controlled, authenticated and managed updates of information security policy and, at the very basic level, of the machine code and the authorization and trust of what actually runs on a device.
But there’s also stupidity, and/or lack of security awareness. A great example of that was utilities, both water and power utilities, in disconnected locations with devices and systems that hadn’t been updated in decades. There were no network ports on them. So everybody started buying serial-to-LAN adapter dongles that came from various markets; most of them actually came from China, though in this case that’s kind of incidental. The problem was that each network device shipped with the same SSH key, so tens of thousands or hundreds of thousands of devices around the world, connected to critical national infrastructure running your water and sewage systems, could theoretically be scanned for and entered with the same SSH key.
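A defender can sweep their own estate for exactly this condition. The sketch below, assuming OpenSSH’s ssh-keyscan is installed and the host list comes from an asset inventory (the addresses here are placeholders), groups hosts by SSH host key fingerprint and flags any key that appears on more than one device.

```python
import base64
import hashlib
import subprocess
from collections import defaultdict
from typing import Optional

# Hypothetical address list; in practice this would come from an asset inventory.
HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def host_key_fingerprint(host: str) -> Optional[str]:
    """Fetch a host's SSH public key via ssh-keyscan; return its SHA-256 fingerprint."""
    try:
        out = subprocess.run(
            ["ssh-keyscan", "-T", "3", host],
            capture_output=True, text=True, timeout=10,
        ).stdout
    except subprocess.TimeoutExpired:
        return None
    for line in out.splitlines():
        parts = line.split()
        # Key lines look like: "<host> <keytype> <base64-key>"; take the first.
        if len(parts) >= 3 and not line.startswith("#"):
            key_blob = base64.b64decode(parts[2])
            digest = hashlib.sha256(key_blob).digest()
            # OpenSSH-style fingerprint: unpadded base64 of the SHA-256 digest.
            return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
    return None

# Group hosts by fingerprint; any group larger than one shares a key pair.
by_fingerprint = defaultdict(list)
for host in HOSTS:
    fp = host_key_fingerprint(host)
    if fp:
        by_fingerprint[fp].append(host)

for fp, hosts in by_fingerprint.items():
    if len(hosts) > 1:
        print(f"Shared host key {fp}: {', '.join(hosts)}")
```

Any group larger than one means a shared private key, which in the scenario described above meant that a single leaked or factored key could open every dongle in the fleet.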
So there’s a fair amount of activity around scanning and monitoring the basic bits of hardware that plug infrastructure together. Looking at USB dongles, that lesson has been learned, but there’s a logistics problem: if you’ve got a widely dispersed infrastructure with 15 to 20 thousand of these dongles spread across your second or third country of operation, and you’re only going to get to do maintenance on that environment once a year, then it’s a massive change project to swap out the dongles, update the security information, and then update your security policy. That’s not malicious per se, but stupidity has as much impact as anything else.
Hugh Taylor: Do you think it might be necessary for, for lack of a better term, the government or entities with enforcement power to step in and raise the bar on policy?
David Dingwall: Again, it’s a people problem. Any kind of regulation can be set up; the question is how enforceable it is. Take the people running operations in, say, a municipal electricity plant with a very small municipal grid: those organizations may have no more than 50 staff. If one of the critical staff goes off ill, say he or she gets cancer and is out of the business for 18 months to two years, how do they backfill that experience with someone else who understands the implications? A large organization, say AT&T or a regional or continent-wide infrastructure provider, will have those people and processes in place.
North America is a great example, where things like power, water and wastewater utilities are often very, very small organizations with very limited security experience. They have exactly the same compliance and reporting requirements as the big boys and girls, and have to do their FERC and NERC reporting just like everybody else. However, they just don’t have the headcount, and they certainly don’t have available headcount with the level of experience they need.
I’m quite cynical about utilities in particular. In Europe, everybody’s kind of waiting for a major shoe to drop. Sadly, we actually need something to go badly wrong before the Boards of these organizations will take seriously the hammer hanging over their heads. But it’s a commercial decision; whether to accept that risk is a Board-level decision. It’s not a technical one, and it’s certainly not a government one, even if governments set the regulations. Unfortunately I think we need to see someone failing quite badly, and then obviously there will be yet another revision of, say, the NIST regulations or the critical infrastructure regulations for wastewater, and we’d go through another cycle.
People learn from mistakes. Weighing prevention against general commercial payback, where prevention always just seems to be a cost on the business, is sadly a losing game. Probably the only people who can’t afford to lose are the people looking after nuclear installations; everybody there is petrified, and therefore those organizations don’t and can’t take those risks. But what’s the worst that can happen if someone blocks a pump in a wastewater utility? Some polluted water goes into the rivers. Someone will get fined, the organization itself might be fined, but not everybody in the organization is going to be fired, because they need to keep the organization operating.
Hugh Taylor: It seems a lot of people, including a lot of people in management, don’t really understand the scope of technology, how pervasive it is, and what it does for almost everything in their lives. So they don’t really understand the security consequences.
David Dingwall: Yup. It’s certainly not taught to directors. It’s certainly not taught as part of an MBA program. They hire specialists, and those specialists are either external advisors on contract or they’re the CTO or the CISO. And again, sadly, the lifespan of a C-level exec on the technical or technical security side is probably no more than 24 months before they move on; that’s market data. So they’re incentivized to think about the things the analysts tell them they’re going to be measured on. They’re measured against their peers, pretty much like a trading company on Wall Street: I’m doing everything everybody else is doing in almost exactly the same way, and my compensation is set up pretty much along the same lines. The rules and regulations they operate against, the metrics, might change over time, but the challenge, if you’re a C-level exec on the technical or security operations side, is not to stand out too far above everybody else in your peer group.
And again, how do you change their expectations? It tends to be that something has to happen. But it’s kind of worse for the C-level execs, because they have no long-term commitment to an organization; they’re incentivized to become superstars, reduce costs, mitigate risk, whatever the change programs they put in place, and those programs really do need to be showing benefits before they move on to their next job. So if something happens with a piece of infrastructure that was created by a CISO five or six years ago, that’s three rotations of that role. Who’s ultimately responsible for that?