Cyber Security Articles

What This Cyber Security Articles Page Is About

The goal of Journal of Cyber Policy is to provide commentary and stimulate conversations about important cyber security topics. Our parallel goal is to discuss cyber issues in plain English, liberating this critical subject from the exclusive realm of specialized engineers and hackers. Throughout, we try to talk about cyber security and related issues from the perspectives of public policy, national security, corporate policy and compliance.

 

Why Articles about Cyber Security Matter

We are living in an era where digital technology dominates so much of our lives. Digital risk naturally accompanies this reality. Smartphones, the IoT, the Internet and so forth make our lives easier, but they also expose us to threats. Some of these threats come from nation state actors. We believe Americans could be better-informed about these risks. And, while there’s certainly no lack of content online about cyberthreats, room still exists for cyber security articles that integrate the subject’s diverse themes of technology, politics and business.

For example, Russian disinformation and Chinese espionage are not new, but today’s digital landscape makes these familiar tactics deadly, in political terms. The Cold War was largely analog in nature, with offensive campaigns quite limited in scope and impact. While Cold War dynamics may survive today, they are having a radically different effect on American society and politics than anything that came before.

It can be tricky to tease out the differences between today and a generation ago. American politics and governance have always been messy, dishonest and idiotic, but there were at least some fact-based controls on it. This is no longer the case. Our enemies are exploiting this new reality. In some cases, they’ve created this new reality.

We see the impacts of these new measures, but leaders across the government and business sectors generally fail to understand the transformative nature of technology, e.g. Amazon is not just a bigger mail order store; the iPhone is not just a phone with fancy features, and so forth. These cognitive gaps lead to deficiencies in the perception of risk. They enable our leaders to underestimate our enemies and how they can win without firing a shot. We also tend to overestimate our defenses and resiliency.

The digitization of society, commerce and politics renders America defenseless in ways that we are only beginning to understand. Digital transformation is a double-edged sword. America’s rush to digitize its economy and society produces as much risk as it does benefit. For example, we have to manage the tensions between mobility and surveillance, between big data and privacy and so on.

The Topics We Cover in These Articles

We deal with a wide range of cyber security topics in these articles. Some discuss cyber election interference. Others look at geopolitical cyber risks, such as our recent series on Russian disinformation and “Active Measures.” We will frequently check in on the state of enterprise architecture and cloud computing, seeking expert insights into the best practices and new security technologies that are influencing security policies in these areas of information technology. We cover the gamut of security subjects: malware, phishing, identity and access management (IAM), privileged access management (PAM), zero trust, data security, application security, secure DevOps (DevSecOps), red-blue teaming, automation, Security Orchestration, Automation and Response (SOAR), threat monitoring, incident response, intrusion detection, encryption, key management and on and on. Our cyber security articles also look at compliance and government cybersecurity frameworks like the NIST CSF, GDPR, CCPA and more.

Crying Foul on the Antivirus Industry

As a cinephile (and former TV script executive), I can’t help myself. When CEO Steve Subar told me that his company, Comodo Cybersecurity, is based in Clifton, New Jersey, my reflexive reaction was to think, oh, right, just like Rupert Pupkin, the hapless protagonist of Martin Scorsese’s classic 1982 film, “The King of Comedy.” Played by Robert De Niro, Pupkin declares, “I was born in Clifton, New Jersey… which was not at that time a federal offense.”

Steve Subar, CEO of Comodo Cybersecurity, with his Clifton, New Jersey doppelganger, Robert De Niro playing Rupert Pupkin in the 1982 film “The King of Comedy.”

Pupkin took a big chance to get famous. (And hilarity ensued. No spoilers here.) Subar, for his part, while not deluded like Pupkin, is also displaying a lot of Clifton, New Jersey moxie in making a bold statement about one of the cyber security industry’s not-so-pretty realities. Comodo published an article at Black Hat 2018 criticizing anti-virus software makers who take unfair advantage of the cooperative spirit of Google’s VirusTotal program.

 

Taking Unfair Advantage of Crowdsourcing

The crowdsourced VirusTotal virus scanning project lets anyone upload suspected virus files and URLs. As part of the crowdsourced model, VirusTotal shares the results with participants, including 70 commercial partners. This enables participants to improve the depth and accuracy of their respective virus signature libraries.
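For readers who want to see the crowdsourced model in action, here is a minimal sketch of a file-hash lookup against VirusTotal’s public API. The v2 endpoint, parameter names and response fields shown here are based on the commonly documented public API of that era; treat them as assumptions and check the current documentation before relying on them.

```python
# Minimal sketch: look up a file hash against VirusTotal's public API (v2-era endpoint).
# Endpoint URL, parameter names and response fields are assumptions based on the commonly
# documented public API; verify against VirusTotal's current documentation.
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # hypothetical placeholder
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"  # example: the EICAR test file's MD5

def lookup_hash(file_hash: str) -> dict:
    """Ask VirusTotal how many participating engines flag this hash."""
    resp = requests.get(
        "https://www.virustotal.com/vtapi/v2/file/report",
        params={"apikey": API_KEY, "resource": file_hash},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    report = lookup_hash(FILE_HASH)
    # 'positives' / 'total' are the detection counts reported by the v2 API.
    print(f"{report.get('positives', 0)} of {report.get('total', 0)} engines flagged this file")
```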

To attract commercial partners, VirusTotal’s terms of service mandate that commercial partners not “use the Service in any way which could infringe the rights or interests of VirusTotal, the Community or any third party, including for example, to prove or disprove a concept or discredit, or bait any actor in the anti-malware space.”

According to Comodo, some of the project’s commercial participants are abusing the privilege. In effect, as Subar puts it, they are integrating their competitors’ virus research into their own products for free. This is not the intent of VirusTotal, nor is it in the spirit of the project. And, while VirusTotal officially banned this sort of free-riding in 2016, Subar claims it is still going on.

Subar commented, “While Google’s VirusTotal performs a valuable service to its vendor-members, those members use VirusTotal to perpetrate a great disservice upon IT end-users: using the reputation of VirusTotal, AV vendors co-opt Google’s service and promote the myth that detection constitutes protection. When users and third-parties discover that Carbon Black, Crowdstrike, Cylance, McAfee, Symantec et al. do not disclose their failure to identify known malware, it becomes obvious that those vendors are hiding behind the Terms of Service.”

 

The Deficiency of “Detect-Remediate”

His broader point revolves around the assertion that virus detection is not the same as virus protection. “The Detect-Remediate paradigm is inherently flawed,” he explained. In reality, it is impossible for anti-virus vendors to keep their virus registries 100% current. Furthermore, AI-powered anti-virus algorithms cannot reliably distinguish between malicious and benign code all the time.

Comodo suggests a change of paradigm. In its case, Advanced Endpoint Protection (AEP) combines a default-deny mode of operation with auto-containment and instant usability. Comodo Cybersecurity AEP automatically isolates and contains incoming unknown files while letting users remain productive. The toolset uses a sandbox approach to deny “write access” to malware.
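To make the paradigm shift concrete, here is a highly simplified sketch of default-deny logic. It is an illustration of the general idea, not Comodo’s implementation: anything not on an explicit allow list is treated as unknown and routed to containment rather than executed.

```python
# Illustrative sketch of a default-deny / auto-containment decision, NOT Comodo's code.
# A small allow list of known-good file hashes stands in for a real verdict service.
import hashlib
from pathlib import Path

KNOWN_GOOD_HASHES = {
    # hypothetical entries; a real product would consult a trusted verdict service
    "5d41402abc4b2a76b9719d911017c592",
}

def file_md5(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

def decide(path: Path) -> str:
    """Default deny: only explicitly trusted files run normally; everything else is contained."""
    if file_md5(path) in KNOWN_GOOD_HASHES:
        return "run"      # known good -> execute normally
    return "contain"      # unknown -> run isolated, deny write access to the real system

if __name__ == "__main__":
    sample = Path("incoming/unknown_installer.exe")  # hypothetical incoming file
    if sample.exists():
        print(decide(sample))
```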

 

A Bigger Tension Revealed

There’s an even bigger picture takeaway from Comodo’s crying foul on abuse of VirusTotal. The cyber security industry comprises an uneasy mix of private enterprise and open communitarianism. Usually, it’s possible to keep a fair balance. Private companies engage with open, crowdsourced community projects and offer much in return. As this episode reveals, however, participants sometimes elect not to play nicely. When this happens, whether it’s anti-virus titans or Jerry Lewis, someone from Clifton, New Jersey will be on the case.

Secure System Engineering and The Torah

I attended the session, “Open Sesame: Picking Locks with Cortana” at Black Hat 2018, in which presenters Tal Be’ery, Amichai Shulman, Ron Marcovich and Yuval Ron revealed several different ways to access private information on a locked PC using the Cortana voice assistant. First, they demonstrated what they called the “Voice of Esau” (VoE), in which an attacker could verbally command a locked machine to show images and previews of text files. If you have a file called “password,” then an attacker can see your passwords even if the machine is locked.

The presenters, all cyber security experts from Israel, then showed a far more dangerous threat, which they called “Open Sesame.” With Open Sesame, Be’ery, Shulman, Marcovich and Ron were able to instruct a locked machine to access malicious code by ordering it to open a compromised URL. Or, using voice commands to Cortana, an attacker could invoke a cloud-based application or open a Microsoft document containing exploit code.

This was quite eye-opening, but the presenters also offered an inadvertent lesson in Torah and its relevance to the field of cyber security. As a religious Jew, this subject is near and dear to my heart. It was pleasant to hear a Torah reference in the presentation, even though it was, in my view, slightly incorrect.

Isaac blessing Jacob by Govert Flinck (1615-1660)

Referring to the exploit as the “Voice of Esau” was most likely a reference to Genesis 27:22, which tells the story of the elderly, blind patriarch Isaac being tricked into giving the all-important blessing of the firstborn to his younger son, Jacob, who has dressed himself up as his older brother Esau. (Esau is supposed to get the blessing, but Jacob substitutes himself before his father, who can only feel Jacob’s hands, covered in goat skins to make them seem like the hairy hands of his brother.) At that point, Isaac recites one of the more famous lines in the Torah, “The voice is the voice of Jacob, yet the hands are the hands of Esau.”

So, I think that Be’ery, Shulman, Marcovich and Ron actually meant the “Voice of Jacob” when they revealed a vulnerability based on voice trickery. Jacob was able to trick his father even though he couldn’t disguise his voice. He hacked his father, so to speak, to steal the blessing, much like Be’ery, Shulman, Marcovich and Ron verbally persuaded the PC into revealing secrets from behind a locked screen.

The broader interpretation of the line, “The voice is the voice of Jacob, yet the hands are the hands of Esau,” is also instructive for cyber security. In moral terms, the phrase is thought to be a reminder not to be two-faced, to “speak in the voice of Jacob” but act “with the hands of Esau.” Yet this is an apt description of a hacker as well. A hacker, perhaps one who engages in social engineering, is very much acting with the hands of Esau while speaking in the voice of Jacob.

The challenge in cyber defense is to spot system users who are acting in this dualistic way. Artificial Intelligence and Machine Learning can help identify potentially malicious actors on a network. As Be’ery, Shulman, Marcovich and Ron suggested, though, there is an even more basic countermeasure to mitigate this sort of risk: Secure system engineering.

The security weakness in Cortana was accidentally designed into the Windows Operating System. Windows 10 includes several openings to an otherwise locked machine. They are there for convenience and user friendliness, but they are unsafe. The presenters cautioned against being too quick to open new entry points into an interface. They invite exploits. Secure system engineering is a discipline that gauges the security impact of a feature at the design stage with the idea of avoiding a Voice of Esau or Open Sesame type of vulnerability.
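One practical mitigation that circulated after the talk is simply closing that entry point: Windows exposes a Group Policy setting that stops Cortana from responding above the lock screen. The sketch below writes the commonly cited registry value behind that policy; the key path and value name are stated as assumptions, so confirm them against Microsoft’s documentation before using this.

```python
# Sketch (Windows only, run as Administrator): disable Cortana above the lock screen.
# The policy key/value names below (AllowCortanaAboveLock under the Windows Search policy key)
# are assumptions based on commonly cited Group Policy documentation -- verify before use.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Windows Search"

def disable_cortana_above_lock() -> None:
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
    try:
        # 0 = do not allow Cortana to respond while the device is locked
        winreg.SetValueEx(key, "AllowCortanaAboveLock", 0, winreg.REG_DWORD, 0)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_cortana_above_lock()
    print("Policy value written; a reboot or gpupdate may be required.")
```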

The Torah also discusses secure system engineering, in an indirect way, but that’s for another blog post.


Sophos Releases In-Depth Report on Atypical SamSam Ransomware

Sophos announced the publication of a detailed report on the notorious SamSam ransomware threat at Black Hat 2018. The 47-page report covers how the attacks began in 2016. It explores how SamSam targets victims in ways unlike any previous ransomware attack. It goes into depth on SamSam (so named in its first known instance) and its minimalist, manual approach to victim targeting and compromise.

The report discusses how to defend against SamSam. This is essential, as the ransomware is continuing to evolve and become more sophisticated. The attacker appears to select targets carefully. Indeed, the ransom charged to victims is increasing dramatically while the tempo of attacks is also picking up.

The report reveals that the SamSam attacker uses a variety of built-in Windows tools to escalate their own administrative privileges. They scan the network for valuable targets. Their goal is to obtain credentials whose privileges will let them copy their ransomware payload to every machine, including servers and endpoints.

The attacker proceeds to spread the “payload” laterally across the network after the initial penetration. At this stage, SamSam is essentially a sleeper cell. It waits for the instruction to begin encrypting. Then, like a predator, the attacker goes on the offensive at night.

SamSam first encrypts a prioritized list of files and directories. Then, it moves on to encrypt everything else. Unlike other ransomware threats, SamSam encrypts not only document files and work data but also the configuration and data files required to run applications like Microsoft Office. This strategy presents a difficult challenge in business continuity terms. Victims who only back up documents and data will have to re-image machines to recover them.

The entire SamSam attack process is manual, which is highly unusual. Unlike virtually every other ransomware scenario, SamSam does not start with a phishing attack. Instead, the SamSam attacker breaks into network endpoints using remote login tools like Remote Desktop Protocol or by exploiting operating system vulnerabilities; weak or easily guessed passwords are a particular vulnerability.
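Because exposed Remote Desktop services and weak passwords are the entry point here, a basic hygiene step is simply knowing which of your hosts answer on the RDP port at all. The sketch below is a minimal, illustrative check of a list of internal hosts for TCP 3389 (the standard RDP port); it is an inventory aid under that assumption, not a SamSam detector.

```python
# Minimal sketch: flag hosts on your own network that expose RDP (TCP 3389).
# Illustrative inventory aid only -- scan only systems you are authorized to test.
import socket

HOSTS = ["10.0.0.5", "10.0.0.6", "10.0.0.7"]  # hypothetical internal addresses
RDP_PORT = 3389

def rdp_open(host: str, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in HOSTS:
        if rdp_open(host):
            print(f"{host}: RDP reachable -- confirm it is firewalled and uses strong credentials")
```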

Key Findings of the report:

  • SamSam has earned over US $5.9 Million for its creator(s) since late 2015.
  • 74% of the known SamSam victims are US-based. Other regions affected include the UK, Canada and the Middle East.
  • $64,000 is the largest ransom paid by an individual SamSam victim.
  • Targets include medium- to large public sector organizations in healthcare, education, and government. However, these only account for about 50% of the total number of identified victims. The rest are private sector entities that have been largely silent about the attacks.

To download the report, visit https://www.sophos.com/en-us/medialibrary/PDFs/technical-papers/SamSam-The-Almost-Six-Million-Dollar-Ransomware.pdf?la=en

Photo Credit: wuestenigel Flickr via Compfight cc

The New (Inverted) Everest of Cybercrime

Gert Fröbe as Auric Goldfinger in “Goldfinger”

Auric Goldfinger, the classic Bond villain, once complained, “Man has climbed Mount Everest, split the atom. Achieved miracles in every field of human endeavor… except crime!” If only he could see today’s “Dark Web.” It’s a veritable Everest of cybercrime.

Well, at the very least, it’s the great cybercrime e-shopping experience of our era. John Shier, Senior Security Advisor at Sophos, offered me some insights into the Dark Web at Black Hat 2018. It was a topic worth exploring, in his view, because defenders need to understand how their attackers are actually functioning.

Misconceptions about hackers can distort our thinking. It’s tempting to think of every hacker as some sort of evil genius who fiendishly plots his or her way into your network. The more likely scenario, per Shier, is that the hacker bought access to your environment on an open market on the Dark Web.

The Dark Web is a massive bazaar for a range of illegal items and services, including drugs and weapons as well as a huge variety of cybercrime “products.” “You can buy user credentials for servers,” Shier shared. Or, for example, it’s possible to buy “Ransomware as a service” on the dark web. In this scenario, the hacker gets free software and tools to conduct ransomware attacks, but gives the Dark Web seller a cut of the ill-gotten gains.

The Dark Web’s cyber aisles also contain ready-made threats, malware kits and “spam services.” “You can see offers to target hundreds of thousands or even millions of email addresses for your scam or phishing attack, with guaranteed inbox reach,” said Shier.

The Dark Web explains some of the scale and pervasiveness of cybercrime. It leverages the talents of malware writers and others into a much larger phenomenon. It allows less-able attackers to buy what they need instead of making their own attack tools.

Knowing how the Dark Web works also informs assessments of cyber incidents. “We often see attacks where nothing seems to have happened,” Shier said. “The attackers enter your network and then go away. Is that bad or good? Well, if you understand that the initial break-in was for the purpose of stealing—and then selling—your credentials, you will hopefully be prepared for the real attack that’s on its way.”

John Shier of Sophos

What can you do about the Dark Web? Law enforcement is on the job, but in the meantime, and it may be a very long meantime, it’s critical to be rigorous on patching and security basics. Also, as Shier noted, you can get solutions that isolate potential attack vectors from excessive, risky access.

As an example, Shier used the case of an HR department that receives and stores thousands of resume files. With Microsoft Office documents now commonly used to execute attacks, a resume repository could be full of malware. “Why not use a single, quarantined VM to handle inbound resumes,” Shier suggested. “That’s more secure than letting all the documents stay in an open-access network volume or on a work PC.” Sophos offers tools to implement these types of controls.

 

Black Hat 2018 Keynote: Coming Together to Tackle Root Causes of Cyber Vulnerability

Parisa Tabriz, Director of Engineering at Google, ascended a round stage at Black Hat 2018 that had been covered until moments earlier with a projection of the moon’s surface. The whole celestially themed warm-up to the speech, with copious smoke effects and spinning spotlights, seemed a tad overproduced. The moon-like stage sat against a backdrop of shooting stars and floating galaxies.

The presentation, though, was definitely rooted in earthly reality. This included a moment when I had to wonder what kind of industry we have where a woman who is a highly impressive person, a talented engineer and a capable manager is expected to deliver her keynote wearing high heels. I’d like to see Satya Nadella keynote the Inspire show in six-inch stilettos. Just sayin’. But, I digress.

The cosmic-themed pre-show

Seriously, though, this conference is taking place in an atmosphere of heightened alarm in cyber security circles. Election hacking looms ahead of the fall elections. Russian hackers are inside American nuclear power plants. Indeed, as Tabriz shared, “The security of computers is now the security of the world.” Despite all of these ominous signs, though, the tone of the keynote, as well as the introduction by Jeff Moss, the founder of the Black Hat Conference, was optimistic.

Her talk focused on how and why companies in the tech industry need to come together and do the hard work of attacking root causes of security vulnerability. In her view, with the right approach, it will be possible to make computers more secure.

Tabriz led off by confessing that, as a child, she had cheated at the arcade game Whack-a-Mole by enlisting her brothers to whack moles on either side of her – all the better to amass prize tickets. (She explained she didn’t feel the need to apologize for this transgression in front of 5,000 hackers.) And, although she didn’t realize it at the time, cheating at Whack-a-Mole was great preparation for a career in cyber security.

For the uninitiated, Whack-a-Mole involves hitting plastic moles (sort of like mice) that pop out of holes in a board. The more moles you whack back down into their holes, the more points you get. Playing the game is a lot like chasing down the latest malware. You’re never really finished. A new one pops up before you’ve bashed the last one on the head.

Parisa Tabriz delivering the keynote at Black Hat 2018

“We have to stop playing Whack-a-Mole with threats,” Tabriz said. “We need to be more collaborative and strategic,” she added. “We have to make things better. It is up to us.”

To this end, she suggested three steps that vendors in the tech industry should take:

  • Identify and tackle root causes. Don’t be satisfied with isolated fixes.
  • Be more intentional in long arc defenses. Identify milestones and celebrate progress to stay motivated.
  • Invest in bold, proactive projects. Build a coalition of champions and builders outside of security.

On this last point, Tabriz stressed that all tech vendors, not just those in security, ought to step up and at least try to address the root causes of security problems. Indeed, as she has seen, what looks like a vulnerability may actually be an architectural or coding error much deeper down the stack.

Tabriz discussed how she has borrowed the root cause analysis concept from the auto industry, where it was pioneered by Japanese car makers. Thus, in the same way that it’s possible to determine that a crankshaft tends to crack more when it’s made of steel from a defective smelter, tech vendors can ask “why” repeatedly until they uncover the true cause of a security problem.

For example, if there’s a remote code execution vulnerability, she suggested asking the “Five Whys” – Why did this bug lead to remote code execution? Why didn’t we discover this bug earlier? Why didn’t anyone write tests that cover this? Why does it take so long to update? This approach can highlight structural and organizational drivers of problems.
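As a lightweight illustration of the technique (not anything Google publishes), a Five Whys analysis can be captured as nothing more than an ordered chain of question-and-answer pairs attached to an incident record:

```python
# Illustrative sketch: recording a Five Whys chain for a vulnerability post-mortem.
# The structure and the example answers are hypothetical, not a Google or Project Zero artifact.
from dataclasses import dataclass, field

@dataclass
class FiveWhys:
    incident: str
    chain: list[tuple[str, str]] = field(default_factory=list)  # (why question, answer)

    def ask(self, question: str, answer: str) -> None:
        self.chain.append((question, answer))

    def root_cause(self) -> str:
        return self.chain[-1][1] if self.chain else "unknown"

analysis = FiveWhys("Remote code execution in parser")
analysis.ask("Why did this bug lead to remote code execution?", "Untrusted input reached a memory-unsafe routine.")
analysis.ask("Why didn't we discover this bug earlier?", "No fuzzing coverage for that code path.")
analysis.ask("Why didn't anyone write tests that cover this?", "The module has no owning team.")
analysis.ask("Why does it take so long to update?", "Patches ship only with quarterly releases.")
print("Root cause candidate:", analysis.root_cause())
```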

Google’s Project Zero, which she oversees, has the mandate of reducing the harmful impact of Zero Day attacks. Her team is not aligned with any specific product or service at Google. They have reported over 1400 vulnerabilities since 2014, found in operating systems, browsers, apps, firmware and hardware.

Their goal is to have an offensive strategy and to do more than one-offs; they want to build a pipeline of vulnerability reports. This sounds good, but as she has learned, it can be difficult to get other vendors to go along. There are power imbalances between security researchers and large companies, for example. Just because someone discovers a vulnerability does not mean that a giant company will act on it. The solution has been to impose a 90-day disclosure policy. The vendor has 90 days to fix the problem before it is disclosed publicly by the Project Zero team.

Parisa Tabriz

This works. One vendor has improved patch response times by 40%. Another doubled security updates annually. Yet another now sees 98% of issues reported by Project Zero fixed within the 90-day period.

Still, change is hard. Root causes, by their nature, tend to emerge from culture and organizational structures, neither of which is easy to change. Success requires commitment to collaboration and change management. Transparency helps, as does celebrating success.

According to Tabriz, this is the moment to embrace this change. Systems are growing more complex and vulnerabilities are getting more serious. Collaboration on root cause analysis is essential for strong security going forward.


The Recent Salesforce.com Incident and the Potential of Customer-Controlled Encryption

By Anthony James

Salesforce privately announced that data in its Marketing Cloud may have been accessed by third parties or inadvertently corrupted. The reason? An error involving the Salesforce application programming interface (API) in the Marketing Cloud, which is designed to let third-party systems connect with Salesforce Marketing Cloud. The issue could affect many thousands of customers across finance, healthcare and many other industries, and depending on the breach notification requirements that apply to those customers, the exposure may result in significant notification costs and more. Given how many high-profile SaaS and cloud providers have had to report possible data breaches publicly, it is concerning that this one is being handled via email to individual customers.

This is not the first time Salesforce has been challenged with data exposure. Earlier, in 2007, a Salesforce.com employee fell victim to a targeted phishing scam and was tricked into providing credentials to the perpetrators, which resulted in a breach of the Salesforce customer information that was accessible to that employee. Later on, customers whose data was stolen started receiving communications designed, in turn, to acquire more sensitive information about them. Salesforce said that online criminals had been sending customers fake invoices, viruses, and keylogging software. The emails were sent using information that was illegally obtained from Salesforce.com via the initial breach.

Salesforce is just the latest in a recent string of cloud breaches, but given the size of Salesforce and the scope of its customer base, the incident could affect thousands of customers and potentially expose data pertaining to millions of individuals. One of the core concerns about this possible breach is that, in the letter it distributed, Salesforce acknowledges having little visibility into the extent of any data theft or tampering.

Many companies have accidentally left confidential and sensitive data stored in cloud services exposed to the internet, unprotected. Recent examples include the Pentagon, which exposed 1.8 billion intelligence data objects through an AWS database misconfiguration, and FedEx, which in February 2018 exposed the personal information of tens of thousands of users.

In February 2016, the Top Threats Working Group of the Cloud Security Alliance published a comprehensive report on the top threats in cloud computing. In that assessment, the risks posed by APIs were well documented. Once an attacker has access to the API, all of your data is vulnerable; even data encrypted at rest in the database can be easily accessed. In the case of Salesforce, its encryption solution, which is designed to encrypt data held in the database, cannot protect against this type of data breach.

The solution? Customer-controlled encryption before the data is delivered to Salesforce or other clouds. End-to-end encryption for the Salesforce cloud can protect data at all points in its lifecycle, including in this most recent report of an API exposure. End-to-end encryption, also called zero trust encryption, protects data at rest (in the database), in motion (anywhere in the network, API or middleware) and in use. In the event of the API data exposure announced today, or any of the other misconfiguration scenarios noted above, if your Salesforce data were encrypted end to end there would be no breach to report.
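As a rough illustration of the idea (not CipherCloud’s product or Salesforce’s API), customer-controlled encryption simply means that sensitive field values are encrypted with keys you hold before they ever reach the SaaS platform, so an API-level exposure yields only ciphertext. The record and field names below are hypothetical.

```python
# Sketch of customer-controlled (client-side) field encryption before data leaves your
# environment for a SaaS platform. Illustrative only; not CipherCloud's or Salesforce's code.
from cryptography.fernet import Fernet  # pip install cryptography

# The key never leaves the customer's environment (in practice: an HSM or key manager).
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

record = {"contact_name": "Jane Doe", "ssn": "123-45-6789"}  # hypothetical record

# Encrypt sensitive fields before the API call; only ciphertext is sent to the cloud.
outbound = {
    "contact_name": record["contact_name"],
    "ssn": cipher.encrypt(record["ssn"].encode()).decode(),
}
print("Payload sent to the SaaS API:", outbound)

# Decryption happens only on the customer side, with the customer-held key.
print("Recovered locally:", cipher.decrypt(outbound["ssn"].encode()).decode())
```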

To add insult to injury, Salesforce seemed unable to provide logging to show exactly who, if anyone, accessed the data and when. Not only was there a potential data exposure failure, but perhaps also a compliance failure, depending on what data was potentially exposed. This incident also exposed a weakness in Salesforce’s engineering process in letting such a critical vulnerability pass through its checks.

Anthony James is CMO of CipherCloud

The New National Risk Management Center: What’s Good. What’s Still Needed

Homeland Security secretary Kirstjen Nielsen announced the formation of a new National Risk Management Center last week. Wired and other publications covered the news. The Center will focus on risk management across sectors, defending US critical infrastructure against hacking by evaluating and sharing threats. Initially, the Center will focus on the energy, finance and telecommunications sectors.

According to Nielsen, the Center will serve as a focal point for cybersecurity within the Federal government. As she said, “We are reorganizing ourselves for a new fight.” The timing of the announcement is auspicious. The same week, senior intelligence and homeland defense officials warned of “pervasive” Russian efforts to disrupt the 2018 elections. Legislation intending to further bolster cyber defense is also in the works in Congress. Finally, it seems, Washington is taking concrete steps to improve the nation’s overall cyber security posture.

These moves resonate with arguments made in two books that every serious student of American cyber vulnerability should read. As David Sanger astutely notes in The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, the United States suffers from a serious imbalance between its offensive and defensive cyber capabilities. While the US possesses what are probably the most powerful cyber weapons in the world, according to Sanger we are at the same time too big and too vulnerable to defend. The Center is a step towards correcting this dangerous imbalance.

Ted Koppel, in Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath, brings specificity to Sanger’s perspective. His book explores the frightening risk exposure in the nation’s power grid. The proposed National Risk Management Center is another incremental move aimed at remediating this potentially devastating problem.

Homeland Security secretary Kirstjen Nielsen

The cyber security industry is rising to the occasion as well. From the point of view of Katherine Gronberg, VP of Government Affairs at ForeScout, “The National Risk Center should be an improvement over the current model of sharing threat intelligence amongst government agencies and infrastructure providers. Threat sharing was a good start, but we need to do more, to be more proactive. The new Center should be a good vehicle for change in this regard.”

Katherine Gronberg

Like a number of firms that deal with security for critical infrastructure, ForeScout engages in dialogues with Federal cyber security policy makers. In ForeScout’s case, the government seeks their input due to the company’s expertise in security for the Internet of Things (IoT), particularly solutions for Operational Technology (OT) and the networked devices found in SCADA systems.

The challenge, as Gronberg sees it, is for operators of industrial control systems (ICS) in critical infrastructure, like power utilities, to make the best use of the threat data they will get from the National Risk Management Center. “Getting from point A, where you have good threat information and a collective interest in improving security, to point B, where you actually have stronger security, is a multi-layered process,” said Gronberg. “You have technology, public policy, corporate security policies and of course, you have money,” she explained.

As Koppel pointed out in “Lights Out,” a reluctance to invest (or the inability to invest) is one of the major obstacles preventing power companies from upgrading security. This is not an insurmountable problem, however, as Gronberg sees it. “Of course, it’s easy for me, as a vendor, to say that power companies should spend money,” she shared. “However, as we work with critical infrastructure providers, we can often find ways for them to beef up their defenses without the kind of large-scale investment they might have imagined was necessary.”

“A lot of companies that operate in the electrical grid have been told they have to replace much, if not all, of their security infrastructure—that they must do hard installs of huge numbers of devices to mitigate threats against ICS,” Gronberg said. “This can look prohibitive. Now, though, there are new, less heavyweight options.”

Companies like ForeScout are introducing innovations in countermeasures to protect ICS with relative economy. Using a more passive approach, the agentless ForeScout solution offers visibility into all networked assets without having to scan them. ForeScout, like its peers, is aligned with the new NIST standards, which recommend continuous monitoring of critical infrastructure. “If you identify an issue with a device,” Gronberg said, “you can know right away what its type is, who made it and whether it’s deviating from its baseline. You can take action quickly. You don’t want a server that you don’t recognize on your network, for example.”
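A toy version of that baseline idea, offered here purely as an illustration and not as ForeScout’s method, looks like this: keep an inventory of expected devices and flag anything observed on the network that is missing from it or reporting unexpected attributes. The inventory data is hypothetical.

```python
# Illustrative sketch of baseline deviation checking for networked ICS/OT assets.
# Not ForeScout's implementation; inventory data shown is hypothetical.
EXPECTED = {
    "10.1.1.10": {"type": "PLC", "vendor": "VendorA"},
    "10.1.1.11": {"type": "HMI", "vendor": "VendorB"},
}

OBSERVED = {
    "10.1.1.10": {"type": "PLC", "vendor": "VendorA"},
    "10.1.1.99": {"type": "server", "vendor": "unknown"},   # not in the baseline
}

def deviations(expected: dict, observed: dict) -> list[str]:
    alerts = []
    for ip, attrs in observed.items():
        if ip not in expected:
            alerts.append(f"{ip}: unrecognized device ({attrs['type']}) on the network")
        elif attrs != expected[ip]:
            alerts.append(f"{ip}: attributes deviate from baseline")
    return alerts

for alert in deviations(EXPECTED, OBSERVED):
    print(alert)
```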

Such fast response is essential for utilities engaged in “just in time” delivery of power. The Koppel book highlights the risks inherent in the just-in-time mode of the power grid, expressing concern that the US electrical system could be vulnerable to a Stuxnet-type attack in which power transmission capacity is overloaded by malicious actors while monitoring systems are simultaneously blinded to what is going on. Solutions like ForeScout’s put in place monitoring that can flag suspicious activities in ICS before they cause real harm to equipment and people.

It’s important to have an informed, balanced perspective on the risks, though, according to Gronberg. “Is it really possible to ‘take down’ or ‘infect’ all of the grid at once? The simple answer is that this would be very, very difficult for any adversary to do today, and I don’t believe it is possible today. The electric grid of the continental United States is serviced by three regional interconnections (Western, Eastern and Texas). Within the regions there is interconnection from the standpoint of power transmission; however, the individual IT networks of power companies are not interconnected.”

Thus, she reasoned, “An ‘infection’ (i.e. malware) cannot spread like ‘wildfire.’” She added, “The key thing to focus on is that our adversaries appear to be targeting control systems, and we know that they are persistent. So, whereas it is not possible today for them to create an attack that would infect wide swaths of our power infrastructure in one fell swoop, they may be intent on penetrating them methodically over time.  As we discussed, we need to be concerned that our adversaries seek not just the ability to interrupt power, but the ability to destroy infrastructure with the (presumed) intent of being able to disrupt parts or all of American society.”

Finally, she cautioned, “Right now, the physical systems that comprise our power grid do not have the redundancy to withstand destructive attacks (localized or widespread) and this is one reason why, as I mentioned, DHS has announced a greater focus on identifying and managing risk to the critical sectors — not only the grid, but also financial and telecommunications and, eventually, the others.”


All Ahead Full… Bureaucracy: The DoD’s New “Do Not Buy” Software List

Another day, another head-scratcher from the DoD regarding cyber security policies. BleepingComputer.com reported on July 30 that the Department of Defense (DoD) has been quietly developing a “Do Not Buy” list of companies known to use Chinese and Russian software in their products. According to Under Secretary of Defense for Acquisition, Technology and Logistics and former CEO of Textron, Ellen Lord, the Pentagon plans to work with three defense industry trade associations—the Aerospace Industries Association, the National Defense Industrial Association, and the Professional Services Council—to alert contractors about problematic products that the Pentagon sees as potential threats.

The program will be voluntary. Lord is quoted as saying, “The Department shared the list with DOD agencies but have not enforced or made it obligatory.” In other words, the threats are serious enough to warrant a “Do Not Buy” list, but the DoD will not actually require its contractors to avoid buying products that may threaten the lives of US service men and women. Got it?

I reached out to the DoD and the Undersecretary’s office for comment, but got no response. Other experts have weighed in on the matter, however. Terry Ray, CTO of attack analytics vendor Imperva, explained, “This really isn’t new. For years all software running in sensitive Federal departments underwent technical scrutiny.  It is common for the US government to scan software used in their environments for backdoors and other imbedded code or configurations that may allow hidden or previously unidentified connections inbound or outbound to the technology.”

So, perhaps the problem is not as severe as it sounds, given that existing procedures mitigate the risks of malware in defense-related code. As Ray shared, “There was a case 15+ years ago against an Israeli security company that prevented that company from selling within certain branches of the US government. That company had failed to document an available connection point within the software sometimes used for support. This connection was picked up through the Federal inspection process and the vendor was effectively prevented from selling into whole departments, primarily in defense.”

Ray then added, “At the moment, I have not seen details on any new inspection processes, which makes me think the technical review will utilize existing techniques. However, it’s important to note that other well-developed countries operate similarly and prefer to purchase and implement, in country, political-ally or open source technology in lieu of off-the-shelf products offered by the US or its allies.”

Johnathan Azaria, security researcher at Imperva noted, “This is not surprising when considering that some software manufactured in China was shipped with out-of-the-box malware. The possible threat from such software ranges from unintentional security issues that simply weren’t patched properly, to a hard coded backdoor that will grant access to the highest bidder. We hope that the news of this list will urge manufacturers to put a larger emphasis on product security.”

 

Why Are Defense Contractors Vulnerable to Cyber Attack?

Recent incidents have demonstrated how vulnerable defense contractors can be to cyberattack. Jeff Buss, Captain (USN, Retired), offered a perspective, saying, “Sophisticated actors often hunt like a lion or pack of wolves, targeting the weakest gazelle as their prey. In the case of cyber security, the weakest gazelle is often a small subcontractor who can’t afford to put the extensive cyber security controls in place that a large company can. The concept of Lowest Price Technically Acceptable (LPTA) contracting exacerbates this issue by having companies do the minimum necessary to be technically acceptable. Verifying cyber security controls across the entire logistics/supply chain is needed but is costly and takes a significant amount of time, hence the issue.”

 

Fighting the “Wolf Pack” of Hackers

Noam Erez, CEO and co-founder of XM Cyber, provided some guidance. He said, “Even when a military contractor has deployed and configured modern security controls, applied patches and refined policies, it should still ask, ‘Are my most important assets really secure?’ This question is crucial because there are many ways that hackers can infiltrate a network and compromise critical assets. Contractors must get ahead of the hackers and shore up their networks in advance to prevent any attempted attacks. The most effective method is to take the hacker’s point of view and test security defenses using every possible attack scenario and path.  This will expose all the security holes and blind spots that hackers might leverage, enabling the contractors to shore up their defenses and protect their crown jewels.”


Photo Credit: Michel_Rathwell Flickr via Compfight cc

Preventing IoT-Based Domestic Violence, Abuse and Stalking

Before the Internet of Things (IoT) boom, technology futurists breathlessly made predictions like, “Imagine your refrigerator can tell you when you’re out of milk!” That’s a good thing, right? Maybe. Maybe not. As we have started to live with “smart” devices like Alexa and life-simplifying apps on our phones, the ugly potential for these technologies is now emerging.

Domestic abusers and stalkers are finding new avenues to terrorize their victims using devices and software that were supposed to provide convenience and, ironically, greater safety. As the New York Times reported recently, IoT products like smart thermostats and security cameras are becoming vectors of control and abuse. The problem is challenging to address and a good deal more complex than it appears at the outset. Still, solutions are on the horizon.

IoT Tools of Abuse and Control

Domestic violence treatment professionals and advocates are now seeing misuse of IoT devices for the purposes of controlling and tormenting another person. Examples include abusers tracking the locations of their victims using (sometimes hidden) GPS apps, monitoring behavior remotely using security cameras and “gaslighting” through unpredictable changes in home temperature and the like.

It’s a frightening prospect for the survivor of the abuse, but not a particularly new scenario, according to several experts. “Domestic abuse is about controlling the other person,” said Rachel Gibson of the National Network to End Domestic Violence (NNEDV). “It used to be, the abuser would check your odometer and grill you about where you’d been in the car. Now, they look at your GPS. It’s the same behavior, just updated with modern technology.” To help survivors become more aware of the risks of technology, NNEDV publishes the website techsafety.org.

“It used to be, the abuser would check your odometer and grill you about where you’d been in the car. Now, they look at your GPS. It’s the same behavior, just updated with modern technology.”

Old or new, it’s still problematic. As Ruth M. Glenn, President of the National Coalition Against Domestic Violence (NCADV) explained, “Those that need to control will often go to extremes.” In her view, technology makes it easier to get to an extreme of abuse very quickly.

Publicized incidents are re-traumatizing survivors, too. Leslie Morgan Steiner, author of the book Crazy Love, which discusses why domestic violence victims stay in abusive situations, commented, “The recent growth of devices that allow people to listen and monitor their homes remotely is a big concern to me as a domestic violence survivor and advocate. I’ve been hearing stories from current victims about this technology increasingly being used to instill fear in loved ones, to make false accusations about what they do at home in their free time, and to dominate and control them psychologically.”

 

To Steiner, the technology also fuels abusers’ paranoia and drives them to give in to their obsessive and unrealistic desire to control and judge every aspect of a partner’s behavior. She added, “What disturbs me most as a former victim myself is that home is where you should feel safe and relaxed, and these monitoring tools instead make victims feel anxious and terrorized, and even more afraid of making a safety plan to end the abusive relationship.”

 

The Risk of Stalking through the IoT

Remote stalking by a stranger is another disturbing and all too real consequence of the proliferation of IoT devices in the home. Yotam Gutman, VP of Marketing of SecuriThings, a company whose technology prevents abuse of IoT devices, described two basic IoT stalking scenarios. In one case, there is a semi-stranger, perhaps a work acquaintance who hacks into home devices in order to spy on an individual. This behavior may be part of a psychological fixation (e.g. erotomania) where the stalker imagines he or she is having a relationship with the other person—who in all likelihood has no idea of what is going on.

The other (unfortunately) common situation is for a complete stranger, like a security company technician, to use IoT technology to eavesdrop, watch and potentially manipulate a victim. In this case, the victim could be hundreds of miles away and of course, completely unaware of the illicit surveillance.

Dealing with the IoT Abuse and Stalking Threats

There are a number of ways to detect and prevent the misuse of IoT devices for the purposes of abuse and stalking. Their efficacy is uneven and somewhat dependent on the individual’s level of commitment to solving the problem.

Awareness is a key first step. This is critical in the experience of Susan Moen, Executive Director of the Jackson County, Washington, Sexual Assault Response Team (SART). “It comes up a lot, but people don’t want to believe this is happening to them,” she explained. “Plus, they may not understand the technology very well, and to be honest, who does?”

Moen and her team counsel victims of domestic violence, stalking and sexual assault to keep track of their technological exposure. “For example,” she said, “Are you experiencing what we call ‘social leakage’? Are you telling your sister you’re going to a party, which she posts on Facebook and, in the process, accidentally invites your abuser? It’s not just devices. It’s the complete social media and technology fabric of your life. You’re exposed and you need to figure out where you’re giving information to your abuser.”

“Are you experiencing what we call ‘social leakage’? Are you telling your sister you’re going to a party, which she posts on Facebook and, in the process, accidentally invites your abuser?”

Ruth Glenn has had a similar experience counseling people in technology-driven abuse situations. “When you start to disentangle yourself from a life partner, in tech terms, it’s mind boggling how many different accounts and devices you have to deal with,” she observed. “You have credit card accounts, phone contracts, cable TV, Internet, Wi-Fi, home devices, home security systems, financial accounts and on and on — each one of these can become a way of controlling and abusing another person.”

Leslie Morgan Steiner struck a hopeful note in this context, saying, “There is an upside to the technology, in that in-home monitoring can also be used to record abuse, which in the long run can be used to hold abusers responsible for their emotional and physical violence. Too often, abuse is challenging to prosecute because it is misinterpreted as a he said/she said crime. This technology in effect can be used as a witness to the violence, thus aiding police and judges trying to hold an abuser responsible.”

“Too often, abuse is challenging to prosecute because it is misinterpreted as a he said/she said crime. This technology in effect can be used as a witness to the violence, thus aiding police and judges trying to hold an abuser responsible.”

The legal remedies may not be as sound as one might imagine, however. Paul Gelb, Esq., a Los Angeles-based attorney who specializes in data privacy law, highlighted the complexities and challenges involved in making a legal case against an abuser who uses IoT devices. In Gelb’s view, though there are statutes that help victims, they can be tricky to apply. Revenge porn laws are useful, but limited. Or, for example, misuse of a listening device might constitute a violation of the Federal Wiretapping Act, since a person has a “reasonable expectation of privacy” in his or her home. However, as Gelb pointed out, the federal courts have ruled that federal wiretapping law seldom, if ever, applies to domestic conflicts. The government is reluctant to make spousal recordings made without consent into a federal crime.

One issue that comes up in pursuing such cases, as Gelb noted, is the definition of “consent.” For instance, if a victim does not change the password to a device that the abuser has access to, does this imply some sort of consent for the abuser to use it? Alternatively, it is possible to argue successfully that changing a password is effectively denying consent for the abuser to access the device.

Leveraging Artificial Intelligence to Detect IoT Abuse

One of the most basic problems in dealing with IoT abuse comes from the sheer scale of the installed base. There are millions of listening devices, cameras, sensors and actuators in people’s homes today, and the number is only going up. And, according to Gutman of SecuriThings, most of these products were not built with security in mind. They’re easy to hack.

SecuriThings uses Artificial Intelligence (AI) and machine learning to analyze IoT device usage and behavior across very large deployments. For example, they can monitor access logs to a million security cameras and detect anomalies that might indicate abuse. A human observer would never be able to correlate the activities that signal abusive behavior. Only a machine can do it. The process is based on a software “agent” they install on each device. It tracks use and reports back to the SecuriThings database in the cloud.

They can monitor access logs to a million security cameras and detect anomalies that might indicate abuse.

They found, in one case, a set of cameras that were being accessed from the same remote location dozens of times. It turned out that an employee of a service provider was improperly using the cameras to watch people in their homes. They can also find malware and other malicious misuse of devices.
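The kind of pattern described above, one remote address touching an unusual number of distinct cameras, can be illustrated with a very small log-analysis sketch. This is a toy example with hypothetical log entries and an arbitrary threshold, not SecuriThings’ model, which presumably weighs many more behavioral signals.

```python
# Toy sketch: flag source IPs that access an unusual number of distinct cameras.
# Illustrative only; log entries and the threshold below are hypothetical.
from collections import defaultdict

access_log = [  # hypothetical (source_ip, camera_id) access events
    ("203.0.113.7", "cam-001"), ("203.0.113.7", "cam-002"), ("203.0.113.7", "cam-003"),
    ("198.51.100.4", "cam-001"),
]

CAMERAS_PER_SOURCE_THRESHOLD = 2  # arbitrary example threshold

cameras_by_source = defaultdict(set)
for source_ip, camera_id in access_log:
    cameras_by_source[source_ip].add(camera_id)

for source_ip, cameras in cameras_by_source.items():
    if len(cameras) > CAMERAS_PER_SOURCE_THRESHOLD:
        print(f"Review {source_ip}: accessed {len(cameras)} distinct cameras")
```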

 

It appears that the world is only in the early stages of confronting the risks of IoT misuse. Everyone must now catch up: the law, advocates and counselors and, of course, technology. The risks may increase in scale, too, as immense volumes of private data from home devices accumulate in the cloud and on other less-than-secure platforms. SecuriThings and its peers in the cyber security industry are already exploring solutions to these challenges.


Photo Credit: Greens MPs Flickr via Compfight cc

Photo Credit: Green Energy Futures Flickr via Compfight cc

 

DFARS, NIST 800-171 and the Chinese Hack of American Submarine Technology

The Washington Post reported last month that Chinese military hackers had stolen over 600 GB of sensitive information from a contractor working for the US Navy’s Naval Undersea Warfare Center. The data in question was stored, apparently improperly, on an unclassified network. This storage method made the data more vulnerable to breach than it would have been had it been managed in accordance with the Defense Federal Acquisition Regulation Supplement (DFARS) cybersecurity standards that govern American defense contractors. DFARS is based on the National Institute of Standards and Technology (NIST) Special Publication 800-171 and its framework of controls.

The breach, which has serious implications for US national security, is only the latest in a long series of such compromises of American defense intellectual property and military secrets by the Chinese. It raises—or should raise—major concerns about the security of sensitive data in defense contractors’ networks. We asked a number of experts in the industry for their views on how well the DFARS standard was working and what could be done to avoid future episodes of this kind.


How could this happen in the first place?

What happened here? Of course, we don’t know the details, but a few elements of the hack seem evident. “Due to the fact that the data was not stored in a classified network, there were too many risks involved from the get-go,” says Eitan Bremler, VP of Product at Safe-T, a provider of software-defined access controls. “While this may have been done originally to simplify access and sharing, it left the data quite easy to steal. Instead, the contractor should have stored the data in a classified network and used secure data access and usage technologies.”

The asymmetry of the attack is also revealing. As Mike Fleck, VP of Security at automated data classification provider Covata, puts it, “I think we’ll find that this breach was either the result of gross negligence or the contractor was doing everything right and they were simply ‘outgunned.’” Brian NeSmith, CEO and Co-Founder of Arctic Wolf Networks, offers a similar perspective, noting, “This sort of event provides a wake-up call across the contractor community: they’re in the crosshairs of nation-state actors. I expect everyone is re-evaluating and looking to how they might use new tools or processes to reduce risk and ensure compliance with DFARS and NIST 800-171.” Arctic Wolf Networks offers a Security Operations Center (SOC) “as a service.”

I think we’ll find that this breach was either the result of gross negligence or the contractor was doing everything right and they were simply ‘outgunned.’

According to Pravin Kothari, Founder and CEO of the cloud access security broker (CASB) company CipherCloud, “The sophisticated advanced persistent threats (APTs) driven by nation states are very tough challenges for the very best legacy cyber defense. These nation state, government-funded black hat hackers are facing off against standard off-the-shelf commercial products and perimeter defenses.” To Kothari, it is unlikely that any military contractor can keep such attackers out of its networks. He adds, “The assumption must be they will get into your networks, so you must answer the more relevant questions: How do you detect them rapidly inside of your networks? How can you stop them from using stolen data? How can you stop them and shut down their attack?”

Getting onto an access system isn’t hard.  What the adversary does once on the system is the main point.

The volume of data stolen is itself revealing as to the methods of the attack. “To offload 600+ gigabytes of data would likely have tripped most DLP systems,” observes Stan Engelbrecht, Director of the Cybersecurity Practice at D3 Security. “Again conjecture, but likely the attackers were in the network for a long time and slowly offloaded it.” This is not a hard task to pull off, says Sherban Naum, SVP, Corporate Strategy and Technology at the threat prevention vendor Bromium. He explains, “Getting onto an access system isn’t hard. What the adversary does once on the system is the main point.” Naum is also struck by the sheer size of the exfiltration, believing the hackers’ ability to walk off with so much data reveals extensive deficiencies in the contractor’s security controls—NIST or no NIST. He adds, “Data encryption, fine-grained access controls, compartmentalizing both users on programs and their respective data, network segmentation, as well as protecting the applications and access to the High Value Assets may not have been implemented.”
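Engelbrecht’s point about DLP thresholds can be made concrete with a toy egress monitor: total up outbound bytes per internal host over a window and alert when a host quietly moves far more data than expected. This is an illustration only, with hypothetical flow records and an arbitrary threshold, not a description of any particular DLP product.

```python
# Toy sketch: flag hosts whose outbound data volume over a window far exceeds a threshold.
# Flow records and the threshold are hypothetical; real DLP/flow monitoring is far richer.
from collections import defaultdict

flows = [  # (internal_host, outbound_bytes) aggregated over some monitoring window
    ("10.2.0.14", 120_000_000),
    ("10.2.0.15", 650_000_000_000),   # roughly 650 GB leaving one host
    ("10.2.0.16", 80_000_000),
]

EGRESS_ALERT_BYTES = 50_000_000_000  # 50 GB per window, an arbitrary example threshold

totals = defaultdict(int)
for host, sent in flows:
    totals[host] += sent

for host, sent in totals.items():
    if sent > EGRESS_ALERT_BYTES:
        print(f"ALERT {host}: {sent / 1e9:.1f} GB outbound this window -- investigate")
```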

 

Does this breach reflect a deficiency in DFARS and NIST 800-171?

NIST 800-171 states that all contractors should comply by deploying adequate security, and report when an incident occurs. In this context, “Adequate security” means “protective measures that are commensurate with the consequences and probability of loss, misuse, or unauthorized access to, or modification of information.”

Experts cite both the nature of the standard itself as well as the quality of its implementation in this type of breach. According to Salvatore Stolfo, co-founder and CTO of data loss prevention vendor Allure Security, “Obviously, the contractor in this case failed to provide adequate security. Indeed, the NIST 800-171 basic security requirements are too weak.”

Aaron Turner, CEO and Founder of HotShot, a maker of compliance messaging tools, remarks, “As a general rule of thumb, when information security leadership within an organization ask me if NIST standards will help them, I can only tell them that compliance with NIST standards is an excellent way to protect an organization from yesterday’s threats and vulnerabilities.  Due to the very nature of the way that NIST standards are created, edited and published, there will always be a gap between attackers’ capabilities and the protections that NIST promotes through their publications and standards.”

When information security leadership within an organization asks me if NIST standards will help them, I can only tell them that compliance with NIST standards is an excellent way to protect an organization from yesterday’s threats and vulnerabilities.

Steven Sprague, CEO of private key storage company Rivetz, believes the network security models relied upon by defense contractors (and everyone else) are failing. He comments, “Our practice has been built around known users on unknown computers and a reliance on network security to observe any leakage from the network.” In Sprague’s view, this approach is limited in its effectiveness.

Not everyone is so quick to criticize the standard, however. “This isn’t about how NIST’s recommendations are deficient, quite the contrary. This is about the contractor and the lack of controls and application of the NIST recommendations themselves. NIST can only lead the horse to water,” says Sherban Naum. Stan Engelbrecht agrees: “I do not believe there is a deficiency within the framework itself. The controls within are very clear, and if they are implemented correctly, CUI data should be protected.” Pravin Kothari further notes, “There is no deficiency in NIST 800-171 as it stands. NIST 800-171 defines 14 sets of controls that provide good guidance for contractors.”

Brian NeSmith believes, “The breach doesn’t reflect a deficiency in the security standards, but it shows that a reliance on meeting security standards does not keep you safe from the bad guys.” He underscores this point of view by explaining, “Unfortunately the good guys have to get their defense right every time, while the bad guys need to get their attack right just once.”

The breach doesn’t reflect a deficiency in the security standards, but it shows that a reliance on meeting security standards does not keep you safe from the bad guys.

Scott Petry, CEO and Co-Founder of Authentic8, the company that makes the Silo cloud browser, concurs: “Not at all. In fact, it reflects a deficiency of the security practices, whether technical or human, in the contractor’s organization. It might reflect a wrong perspective in the organization when embracing the standards. The intention behind compliance frameworks like NIST and acquisition guidelines like DFARS is to get organizations to internalize and implement security practices as they apply to their business. Instead, they are used as often-mindless checklists in order to achieve a particular status: ‘we are XY compliant.’ Guidelines can’t deliver security. Organizational implementation of guidelines can help them trend toward security, but the approach needs to be different.”

Ultimately, as Mike Fleck explains, “No security program is 100% – in fact, far from it. Compliance regimes are for enforcing a baseline level of security due diligence based on a generalized level of risk tolerance.”

How big a risk is NIST 800-171’s “Self-attestation” policy?

The standard relies on defense contractors to “self-attest” to their compliance. Given the national security importance of the data these companies handle, this policy may appear naïve. Experts in the industry differ on how much of an issue it really is, however. Scott Petry says, “If you’re looking at NIST 800-171 with the objective of achieving security compliance, then yes, there is a connection [between the recent attacks and the ‘self-attestation’ policy]. If you’re looking at it as a framework to improve your security posture, with a set of criteria to abide by, then less so. Just because you’ve implemented the letter of the spec doesn’t make you secure. As we see here.”

Pravin Kothari believes there is some connection between self-attestation and the risk exposure seen in this case. He observes, “it still makes no sense to allow anyone to self-attest on the issue of files, either classified or unclassified.” Aaron Turner warns, “The ephemeral nature of information security makes it difficult to achieve objectives even when controls are successfully audited and tested by objective third parties. Whenever an information security leader asks me if a ‘self-compliance’ program will be effective, I caution them that without outside input into the compliance process, it will be nearly impossible to assure that controls are actually deployed in ways that successfully reduce the impact of cyberattacks.”

Aligning with this view, Mike Fleck argues, “Compliance assessments are subjective but I believe that third party audits result in a higher burden of proof than self-attestation.” He then notes, “Relative to risk exposure, maybe the biggest issue is that NIST 800-171 compliance is based on a medium risk baseline and it needs to assume the high baseline for some information.”

Stan Engelbrecht puts the problem in perspective. “Any time ‘self-attestation’ is involved it opens the door for customized results, like numbers used to fit any form of statistics wanted. Again, I don’t think this is a failure of the framework but rather the controls around the attestation,” he states, before asking, “How much follow up did the government agency do? Were the results of the attestation vetted to in fact hold true? These are questions we currently don’t have answers to.”

Steven Sprague does not find fault with the standard, commenting, “This is not the issue – the fundamental framework of 1990s LAN architecture trying to secure ports and links is at the heart of the problem.”

Do defense contractors have influence over the details of NIST 800-171?

The NIST standards are developed through an open process that Sherban Naum describes as “hugely collaborative.” The experts voice some concerns about how that process plays out, but generally don’t feel it harms national security. “As with any government standard, there are intense lobbying pressures that come to bear with the creation of the NIST cyber security publications and standards and the DISA STIGs. While independent security researchers generally provide input into the early stages of these standards, by the time they are about mid-flight in their development process, the big vendors are the only groups with the resources to dedicate technical expertise,” explains Aaron Turner.

Mike Fleck finds security standards to be intellectually honest, in contrast to standards involving interoperability protocols, which are, in his view, “much more likely to be influenced by large vendors.” He shares, “In the case of NIST 800-171, it’s in the best interests of the large Defense Industrial Base companies for their subcontractors to be held to a standard. It doesn’t make sense for them to weaken the standard since they are ultimately responsible for the security of the supply chain associated with their prime contracts.”

There are influences on many sides, not just corporate ones.

Fleck’s view may not be accurate, however. It’s far from clear that defense contractors are held responsible for security lapses that occur on their watch. Defense contractors do influence the standards, though some experts, like Stan Engelbrecht, are not overly concerned. He points out, “There are influences on many sides, not just corporate ones. Like open source software, from what I see, these standards would be difficult to ‘weaken’ or make easier to comply with due to the public nature of the process.”

One problem with the standards, according to several experts, is their tendency to be backward-looking. “They’re written more in the context of what happened previously (to prevent it from happening again). But we know that security vulnerabilities are always evolving, so by definition a static spec is a rear view mirror,” says Scott Petry. He adds, “That’s why these processes should be less of a compliance checklist and more focused on socialization of security issues and intent of the security practices.”

How can this be avoided in the future?

Assuming it is not an isolated incident, this breach reveals serious problems with military contractor cyber security. What can be done about this? Some simply advocate for more rigorous adherence to existing standards, e.g. protecting data at rest. This would be a good start, but there are other steps contractors could take to improve their cyber defense, according to industry experts.

Steven Sprague remarks, “It’s not a contractor problem or a laziness problem. It is, in the end, an architectural problem. We need known devices in a known condition running known services to enable secure information-sharing, with provable controls in place.” He adds, “It is time to move to a data security model where the data is secured in transit and at rest and advanced rights management is used to assure controlled access to the information. The challenge is that this requires a foundation of known devices to enable information sharing – from strong, tamper-resistant identity, to BIOS integrity, to trusted execution of policies for rights management.” Why is this not happening, in his view? He answers, “The current NIST framework touches on these decades-old technologies but does not require or incent a shift to a new architecture of data security.”

It’s not a contractor problem or a laziness problem. It is, in the end, an architectural problem.
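
Sprague’s data-centric model can be sketched in a few lines: the document exists only in encrypted form, and the key is released only to a device whose identity is already registered. This is a deliberately naive illustration using Python’s third-party cryptography package; the device registry, its identifiers, and the key-release policy are all assumptions, and it is not a description of Rivetz’s product or of any specific NIST control. In a real architecture the key and the device identity would live in tamper-resistant hardware rather than in program memory.

```python
# Toy sketch of a data-centric model: the file is useless at rest, and the key is
# released only to a device on an allow-list. All names here are assumptions.
from cryptography.fernet import Fernet   # requires the third-party 'cryptography' package

# Hypothetical registry of known, attested devices.
TRUSTED_DEVICES = {"ws-eng-0042", "ws-eng-0107"}

key = Fernet.generate_key()
vault = Fernet(key)

# The sensitive document is only ever stored encrypted.
ciphertext = vault.encrypt(b"controlled unclassified technical data")

def read_document(device_id: str) -> bytes:
    """Release plaintext only to a registered device; refuse everyone else."""
    if device_id not in TRUSTED_DEVICES:
        raise PermissionError(f"unknown device: {device_id}")
    return vault.decrypt(ciphertext)

print(read_document("ws-eng-0042"))       # a known device gets the plaintext
try:
    read_document("attacker-laptop")      # an unknown device gets nothing useful
except PermissionError as err:
    print("denied:", err)
```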

Salvatore Stolfo advises that contractors be required to deploy new technologies that track data and documents. “You want to raise the bar against the level of ‘protective security measures’ that obviously failed in this case,” he says, adding, “Detection is far more important, and likely would have noticed the failure to protect far earlier than the entirety of the lost 613 gigabytes of data.”
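
Stolfo’s emphasis on detection and on tracking documents can be illustrated with a toy “beacon token” scheme: every released copy of a document carries a unique marker, and any sighting of that marker somewhere unexpected becomes an alert. Everything below is a generic assumption for illustration, not Allure Security’s implementation; in particular, how the token is embedded in a document and how sightings are collected is left out.

```python
# Illustrative sketch of document "beaconing" for detection (an assumed design).
# Each released copy carries a unique token; a sighting outside the expected
# environment identifies which copy leaked and raises an alert.
import uuid

token_registry = {}   # token -> (document, authorized recipient)

def tag_document(document: str, recipient: str) -> str:
    """Issue a unique token for this copy and remember who received it."""
    token = uuid.uuid4().hex
    token_registry[token] = (document, recipient)
    return token

def token_sighted(token: str, observed_at: str) -> None:
    """Called when a token shows up in proxy logs, callbacks, or public dumps."""
    if token in token_registry:
        document, recipient = token_registry[token]
        print(f"ALERT: copy of '{document}' issued to {recipient} surfaced at {observed_at}")

t = tag_document("program-design-notes.pdf", "subcontractor-7")
token_sighted(t, "203.0.113.50 (outside the contractor network)")
```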

Isaac Kohen, Founder of user behavior analysis company Teramind, feels that organizations need to take the steps necessary to conduct a thorough investigation of where their sensitive data resides and who has access to it. As he shares, “This includes third parties. Once that’s understood, organizations can place progressive mitigation technologies like user analytics, DLP and security forensics into their security infrastructure to detect a breach quicker and stop data from falling into the wrong hands.”
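
Kohen’s first step, knowing where sensitive data lives and how exposed it is, can be prototyped crudely before any tooling is purchased. The sketch below is a minimal, assumed example: the content markers, the share path, and the 64 KB sampling are illustrative placeholders, and a real inventory would apply the organization’s own classification rules and identity directory rather than raw file-owner IDs.

```python
# Minimal sketch of a sensitive-data inventory pass (illustrative only):
# walk a file share, flag files whose contents match assumed "CUI-like" markers,
# and record who owns them and whether they are world-readable.
import os
import re
import stat

# Assumed markers; real rules would come from the organization's data classification policy.
MARKERS = re.compile(rb"(CUI|EXPORT CONTROLLED|NOFORN)", re.IGNORECASE)

def inventory(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    sample = f.read(64 * 1024)   # sample the first 64 KB of each file
            except OSError:
                continue
            if MARKERS.search(sample):
                st = os.stat(path)
                findings.append({
                    "path": path,
                    "owner_uid": st.st_uid,
                    "world_readable": bool(st.st_mode & stat.S_IROTH),
                })
    return findings

if __name__ == "__main__":
    for item in inventory("/srv/projects"):      # assumed share path
        print(item)
```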

Photo Credit: U.S. Pacific Fleet Flickr via Compfight cc
