By David Richardson
Military leaders got an unpleasant surprise in January 2018 when maps of secret government installations around the world, including forward bases in Iraq and Afghanistan, appeared online.
But this dangerous security breach wasn’t the work of a hostile state actor or whistleblower. The maps in question were visualizations of user location data published by a fitness tracking app, Strava. Servicemen and women who used Strava’s app to track their runs and bike rides on base unwittingly sent a stream of extremely sensitive information back to the company’s servers. Though the information on the map itself was anonymized, later investigations by The Guardian found that the full names of individual service members were discoverable through Strava’s website.
Strava is an example of a risky but non-malicious app. Unlike malicious apps, which are designed to steal data and exploit security vulnerabilities, risky apps are not built to do harm. But they may still collect sensitive data without being transparent about how it will be used, or accidentally introduce security vulnerabilities that open a device to attacks from others.
Apps like these pose a challenge for CISOs committed to securing mobility at their organizations. Unlike with a malicious app, the use of a risky app might be appropriate in some situations, but not in others. For instance, location data is not as sensitive for a government analyst working in a downtown office building as it is for a soldier at a forward base in Iraq.
Some organizations ban most apps from their fleet of devices, limiting employees to a short list of known secure options. However, this type of restriction is often a pain point for employees and ultimately defeats the purpose of mobility, which is to give people the flexibility to work when and where they choose using the apps they love. If enterprises, governments, and other organizations want to embrace the full advantages of mobile technology, they’ll need to adopt a more flexible and nuanced solution.
How did we get here?
There is one major reason we are seeing an influx of risky apps into the market: the barrier to entry for new app developers has never been lower than it is today. Thanks to the availability of open-source code libraries and other third-party tools, successful apps no longer have to be coded from scratch. Almost anyone with a computer can write an app or game that goes viral and is downloaded by millions of people around the globe. As of last year, there were 2.2 million apps available in the iOS App Store and 2.8 million in the Google Play store, according to Statista.
That accessibility is great for the pace of innovation, but it has serious implications for security and privacy. It makes mobile development a fast-paced race to market, where developers want to build a minimum viable product (MVP) as quickly as possible with as little code as possible. There’s little incentive to include security measures before an app even has users, so many new apps come to market without even basic security protections. Such gaps don’t necessarily get filled even after an app becomes successful. This January, researchers found that popular dating app Tinder—which has been on the market since 2012—was still sending users’ photos over an unencrypted HTTP connection.
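A gap like the Tinder one can be caught with even a crude check: scanning the endpoints an app contacts for plaintext `http://` schemes instead of TLS-protected `https://` ones. A minimal sketch in Python, where the endpoint list and domains are hypothetical examples rather than real captured traffic:

```python
from urllib.parse import urlparse

def find_plaintext_endpoints(endpoints):
    """Return endpoints that use unencrypted HTTP rather than HTTPS."""
    return [url for url in endpoints if urlparse(url).scheme == "http"]

# Hypothetical endpoints observed in a traffic capture of a mobile app.
observed = [
    "https://api.example-app.com/v1/login",
    "http://images.example-app.com/profile/12345.jpg",  # plaintext: images exposed
    "https://telemetry.example-app.com/events",
]

print(find_plaintext_endpoints(observed))
# → ['http://images.example-app.com/profile/12345.jpg']
```

Real app-vetting tools inspect far more than URL schemes, of course, but even this trivial filter would have flagged photos traveling over an unencrypted connection.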
What’s more, those same open-source libraries and third-party tools that make app development fast and easy may also introduce vulnerabilities and unintended behaviors into apps. For instance, many libraries contain tracking code designed to tell the library developers who is using their library and where—meaning that they may be transmitting sensitive information about end users without the app developer’s knowledge. According to a 2017 Department of Homeland Security (DHS) report on mobile device security, if a code library is compromised, “its use can potentially affect thousands of apps and millions of users.”
Another reason for the proliferation of risky apps: today's tech business models often value companies more for the data they collect than for the products they offer. This incentivizes companies to collect enormous amounts of data, whether or not it is relevant to the function of their app. See, for example, MoviePass' surprising announcement that its popular movie-ticketing app tracks users' locations on their way to and from the movies, or recent reports that Facebook's app scrapes text and voice call metadata from Android phones. As the Strava example shows, it doesn't take a security breach to make this kind of data collection dangerous; combined with vulnerabilities like the ones described above, it can make an app riskier still. In a best-case scenario, a company will only use such data internally or share it with advertisers. But many startups also make selling data to third parties a key part of their monetization model, and those third parties could include fronts for foreign intelligence services or ethically questionable data firms like Cambridge Analytica.
A flexible solution for risky apps
So how can enterprises protect themselves from risky non-malicious apps? Educating users about privacy and security concerns is an obvious first step, but it will never be sufficient. Even if 95 percent of users follow best practices, the 5 percent who don't can still compromise their organization's security. Moreover, many risky behaviors aren't visible to the end user in the first place. When a user gives a weather app access to her location data, for instance, she has no way of knowing whether the app will continuously send that data to a remote server or only access her location when she asks whether it will rain soon in her zip code.
To address the problem of risky apps, CISOs need flexible solutions that do not rely exclusively on end users' diligence. First, they need visibility into what the apps on their fleet of devices actually do: are they harvesting contact lists? Collecting call and text metadata? Continuously sending location data to a server halfway around the world? Security firms (including my own) can provide the answers to these questions and more; CISOs just need to ask.
Armed with this information, CISOs can start crafting nuanced policies to govern the use of risky apps, rather than issuing blanket bans or reviewing individual apps. For example, salespeople might be prohibited from using apps that harvest contact lists, since their address books may contain sensitive client information; engineers may not need the same restrictions. In neither case would a CISO or other security professional need to vet individual apps.
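A role-aware policy like the one described can be thought of as a simple rules table mapping observed app behaviors to per-role restrictions, so that apps are judged by what they do rather than vetted one by one. A minimal sketch, where the behavior flags and role names are hypothetical placeholders rather than any real product's schema:

```python
# Hypothetical per-role bans on observed app behaviors.
POLICY = {
    "sales": {"harvests_contacts"},          # protect client address books
    "field_agent": {"continuous_location"},  # protect personnel locations
    "engineering": set(),                    # no extra restrictions
}

def is_allowed(role, app_behaviors):
    """An app is allowed unless it exhibits a behavior banned for this role."""
    banned = POLICY.get(role, set())
    return not (banned & set(app_behaviors))

# A fitness app observed continuously reporting location:
fitness_app = ["continuous_location", "analytics_tracking"]
print(is_allowed("engineering", fitness_app))  # True
print(is_allowed("field_agent", fitness_app))  # False
```

The point of the sketch is the shape of the decision: once a vendor supplies the behavior profile for each app, the policy itself stays small, and adding a new app requires no manual review.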
With such a system in place, a CISO can ensure that a Strava-like incident won’t happen to their organization. And they can do so without changing the way their employees already do business—or altering their workout routines.
David Richardson is senior director of product management at Lookout. He has been building software to help individuals and enterprises secure mobile devices since 2009. He holds 45 issued patents related to mobile security and is a frequent speaker at security conferences on the topic of iOS and Android security.