Your devices' latest feature? They can spy on your every move

This article originally appeared in The Conversation. A version also appeared in The Raw Story.

Phone or Spy? Poster Boy, CC-BY-2.0

We now have dozens of smart devices in our houses and even on our bodies. They improve our lives in so many ways – from lowering energy consumption in our homes to egging us on to be active.

But these smart devices respond to whatever commands they are given: security experts have demonstrated how cars can be hijacked remotely, and how medical devices in your body can be hacked and turned into lethal weapons. These risks are now well-recognized by technology developers, and a great deal of excellent work is going into avoiding them.

But there are other dangers, getting less attention, that we should be more concerned about. Your gadgets could be providing a window through which any hacker could spy on you.

Your stuff is surveilling you

Your laptop has a video camera built into it. When it’s recording, a little green light blinks on so you’re aware you’re being recorded. But it can be instructed to videotape your activities without the green camera light being on. And this is not just a hypothetical danger demonstrated in a laboratory; it has actually been done, by over-eager school officials and by peeping Toms.

At least you can close your laptop: when it is shut, the camera can see only “the other side” of the laptop. But this quick fix doesn’t apply to sound recording devices, like microphones. For example, your phone could listen to conversations in the room even when it appears to be off. So could your TV, or other smart appliances in your home. Some gadgets – such as Amazon’s Echo – are explicitly designed to be voice activated and constantly at the ready to act on your spoken commands.

It’s not just audio and video recording we need to be concerned about. Your smart home monitor knows how many people are in your house and in which rooms at what times. Your smart water meter knows every time a toilet is flushed in your home. Your alarm clock knows what time you woke up each day last month. Your refrigerator knows every time you filled a glass of cold water. Your cellphone has a GPS built into it that can track your location, and hence record your movements. Yes, you can turn off location tracking, but does that mean the phone isn’t keeping track of your location? And do you really know for sure your GPS is off simply because your phone’s screen says it is? At the very least, your service provider knows where you are based on the cellphone towers your phone is communicating with.

We all love our smart gadgets. But beyond the convenience factor, the fact that our devices are networked means they can communicate in ways we don’t want them to, in addition to all the ways that we do.

Is this thing on? Amazon.com, Inc

Next generation wiretapping

A bad actor could figure out how to take control of any of these technologies to learn private information about you. But maybe even more worryingly, could your technology provider become, voluntarily or under compulsion, a party to a scheme through which you unwittingly reveal your secrets?

The recent battle between Apple and the FBI revolved around the feds' request that Apple develop a custom insecure version of iOS, the operating system of the iPhone, to facilitate their hacking into a terrorist’s cell phone. Is breaking into a locked phone just the next step beyond a traditional wiretap in which the government asks an Apple or a Samsung to use its technology to bug the conversations of a suspected terrorist?

But modern phones can be used to do a lot more than listen in on conversations. Could companies be asked to keep location tracking on while indicating to the suspect that it is really off? It seems hard to draw a line between these cases. No wonder some Apple engineers came out as conscientious objectors in the Apple-FBI matter. The case was dropped before Apple could be compelled to do anything, so there’s no legal precedent to guide us on how these next-step examples would play out in court.

It is, of course, valuable for law enforcement to monitor criminal suspects, to investigate ongoing criminal behavior and to collect evidence to prosecute. This is the motive behind wiretap laws that allow law enforcement to listen to your phone conversations with no notice to you.

Wiretaps actually got their start in the 1800s as tools of corporate espionage. In 1928, the U.S. Supreme Court ruled in Olmstead v. U.S. that it was constitutional for law enforcement to use wiretaps, and that warrants weren’t required. That decision was superseded only in 1967, by Katz v. U.S., which established a citizen’s right to privacy and required law enforcement to obtain a warrant before bugging a phone conversation – long after Congress had carefully restricted wiretapping in the Communications Act of 1934.

In the early days of wiretapping, there was a physical “tap” – a side connection – that could be applied to a real wire carrying the conversation. Newer technologies eventually permitted the telephone company to encode and multiplex many telephone calls on the same physical wire.

Technology has moved on, but the law isn’t clear yet. Gawler History, CC BY-SA

In the United States, the Communications Assistance for Law Enforcement Act (CALEA) was passed by Congress in 1994, due to worries about law enforcement’s ability to keep up with new communications technologies. It requires communication companies to provide a way for law enforcement to place a wiretap even on newer communication technologies.

The law explicitly exempted information services, such as email. This legal differentiation between communications technologies and information services means companies are obliged to help the government listen in on your phone calls (with a warrant) but are not obliged to help it read your email messages (at least on account of this specific law).

In 2004, the Federal Communications Commission ruled that services such as voice over IP (think Skype) were communications services covered by CALEA, not exempt information services.

Some have since wanted to broaden this law further, and the Apple-FBI dispute doubtless brings this issue to the forefront again. Law enforcement will presumably push for greater surveillance powers, and civil liberties advocates will resist.

Nothing To Hide

Perhaps you don’t care about the privacy of criminals. But note that surveillance is not just of known bad actors, but also of suspected bad actors.

History teaches us that lists of suspects can sometimes be drawn far too broadly. You may remember the McCarthy era and J. Edgar Hoover’s reign at the FBI, which infamously included bugging Martin Luther King Jr.’s bedroom. Even today, Britain’s Government Communications Headquarters has attempted to monitor everyone who visited the WikiLeaks website, even just to browse. Some laws don’t make sense or aren’t fair, so even some “criminals” may still deserve privacy.

And it’s not just law enforcement overreach we have to worry about. Technologies like FinSpy are commercially available today to install malware on your computer or phone and “recruit” it to spy on you. Such technologies could be used by anyone, including the “bad actors,” without the cooperation of your device manufacturer or service provider.

Wiretap laws, such as CALEA, apply to explicit communication actions taken by someone, such as actually making a phone call. Wiretaps do not track your movements in the house, they do not listen to your conversations when you are not on the phone, they do not videotape you in your bathroom – but these are all actions our various devices are now capable of performing. With the proliferation of devices in our lives, it is certainly possible to use them for surveillance purposes. There’s no question that by doing so, authorities will catch many bad actors. But there will also be a huge price to pay in terms of privacy and possibly wrongful arrests.

Finally, this may feel futuristic, but I assure you it is not. The FBI was already using a cellphone microphone to eavesdrop on organized crime as long as a decade ago. Commercial interests are not far behind, doing much the same in order to better target their sales pitches.

Our omnipresent networked devices raise big questions that we should openly debate. How we balance these costs and benefits will determine the type of society we live in.


Passwords, Privacy and Protection: Can Apple Meet FBI’s Demand Without Creating a ‘Backdoor’?

Updated version of an article that originally appeared in The Conversation. A version also appeared in US News and World Report and in Huffington Post.

Users enter a passcode to use or access data on an iPhone. Pieter Ouwerkerk, CC-BY-NC-2.0

Apple’s security will erase a phone’s contents after a certain number of failed attempts – something the FBI wants to avoid. Jacob Tomaw, CC-BY-NC-2.0

The San Bernardino terrorist suspect Syed Rizwan Farook used an iPhone 5c, which is now in the possession of the FBI. The iPhone is locked. The FBI wants Apple to help unlock it, presumably so they can glean additional evidence or information about other possible attacks. Apple has declined, and appears ready to defy a court order. Its response is due February 26. So what’s the technology they’re fighting over?

The code to unlock the phone is known only to Farook, who is dead, and any confidants he may have shared it with. Even if he were alive, it would probably be difficult to get him to reveal it.

But phones are typically locked with a very simple personal identification number (PIN) of only four to six digits. That means there are, at most, about a million possible PIN values. It’s straightforward to write a computer program that methodically walks through all these possible values, trying each in turn until the correct one is found. Indeed, there are even products on the market that will do just this. Given that modern computers can execute over one billion instructions every second, even a conservative estimate says testing all one million PIN possibilities would take only about a second.
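To make that concrete, here is a minimal sketch of such an exhaustive search. The try_pin function is a hypothetical stand-in for whatever interface a real attack tool uses to submit a guess to the device; nothing here describes an actual cracking product.

```python
# Minimal sketch of an exhaustive PIN search (illustrative only).
# `try_pin` is a hypothetical callback that submits one guess and
# reports whether the device unlocked.
def crack_pin(try_pin, min_digits=4, max_digits=6):
    for length in range(min_digits, max_digits + 1):
        for guess in range(10 ** length):
            candidate = str(guess).zfill(length)  # e.g. 42 -> "000042"
            if try_pin(candidate):
                return candidate
    return None

# Toy usage: "cracks" a known PIN almost instantly.
print(crack_pin(lambda pin: pin == "007341"))
```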

Ways to ward off attack

One way to defend against this kind of break-in attempt is to do something drastic after multiple failures. For example, Apple deletes all data on the iPhone after 10 incorrect unlocking attempts in succession, if the user has turned on this feature. We don’t know if this defense is activated on Farook’s phone – but the FBI doesn’t want to gamble that it isn’t, turn out to be wrong, and watch the phone be wiped clean after 10 incorrect guesses.

A second approach is to force a delay after each failed attempt. If the real authorized user accidentally types in the wrong code, she won’t mind waiting 60 seconds before the phone will let her try again. But for a computer that wants to try a million possibilities, the time required to try all possibilities has gone up by a factor of a million or more.
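Sketching both defenses together makes the arithmetic plain: with a forced 60-second delay per failure, a million guesses would take roughly 60 million seconds – nearly two years. The names below are hypothetical placeholders; this is a minimal sketch of the idea, not Apple’s actual implementation.

```python
import time

class GuardedLock:
    """Sketch of the two defenses described above: a forced delay after
    each failure, plus a wipe after too many consecutive failures.
    `check_pin` and `wipe_device` are hypothetical placeholders."""

    WIPE_THRESHOLD = 10       # erase after 10 straight failures (opt-in on iOS)
    RETRY_DELAY_SECONDS = 60  # forced wait after each failed attempt

    def __init__(self, check_pin, wipe_device):
        self.check_pin = check_pin
        self.wipe_device = wipe_device
        self.failures = 0

    def attempt(self, candidate):
        if self.check_pin(candidate):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.WIPE_THRESHOLD:
            self.wipe_device()                    # drastic: all data erased
        else:
            time.sleep(self.RETRY_DELAY_SECONDS)  # slows brute force ~10^6-fold
        return False
```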

The FBI, of course, should have no difficulty programming a computer to try all possible passwords. It simply wants Apple to turn off the defenses.

What the FBI is and isn’t asking for

The feds aren’t demanding Apple create a “backdoor.” In encryption, a backdoor is a means of accessing protected content outside of the normal (frontdoor) process. For example, there could be a skeleton key built into the encryption mechanism. The National Institute of Standards and Technology is reputed to have built such a facility into one of its standardized random number generators – a function at the heart of most encryption techniques.

Encryption with a backdoor is technology explicitly designed so that a third party – in most cases, law enforcement – can gain access to the protected data when the need arises. But it is very hard to build a backdoor into encryption that remains difficult for an attacker to defeat. I don’t believe anyone is calling for such encryption anymore.
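One concrete way a backdoor can be structured is “key escrow”: the key that actually protects the data is also wrapped under a third party’s key, giving that party a permanent way in. The toy sketch below, using the Python cryptography package, illustrates the concept only; it is not a description of any real product or proposal.

```python
# Toy illustration of a key-escrow backdoor (conceptual only).
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()    # held by the data's owner
escrow_key = Fernet.generate_key()  # held by a third party
data_key = Fernet.generate_key()    # the key that actually encrypts the data

ciphertext = Fernet(data_key).encrypt(b"private message")

# The data key is stored wrapped twice: once for the user, once for escrow.
wrapped_for_user = Fernet(user_key).encrypt(data_key)
wrapped_for_escrow = Fernet(escrow_key).encrypt(data_key)

# The escrow holder can recover the plaintext without the user's key --
# this second recovery path is the backdoor, and anyone who steals or
# compels the escrow key defeats the whole scheme.
recovered_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'private message'
```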

Rather than tinker with its encryption, the FBI says it has asked Apple only to modify the defense mechanism built into iOS, its operating system. It’s presumably easy for Apple to create a version of iOS where the delay and data erase features are turned off. This would be a new, less secure version of the standard operating system.

This less secure operating system could be loaded on to the Farook phone, which the FBI could then access more easily. Other iPhones would not be affected.

Software piracy is a major challenge here. Apple has to worry that copies of this insecure operating system may get out and become easily available – and not just to the good guys, but also the bad guys. It’s common practice for software to require that a license be verified explicitly with the software vendor. If the license is not verified, the software will not function. This mechanism can block the insecure operating system from normal use.
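As a rough sketch of that license-check idea: the vendor issues a token bound to one specific device, and the software refuses to run anywhere else. Everything below is hypothetical, and a real vendor would use public-key signatures rather than shipping a shared secret with the software.

```python
import hashlib
import hmac

# Hypothetical device-bound license check (illustrative only).
VENDOR_SECRET = b"known only to the vendor"

def issue_license(device_id: str) -> str:
    # Vendor-side: bind a token to one device ID; it is useless elsewhere.
    return hmac.new(VENDOR_SECRET, device_id.encode(), hashlib.sha256).hexdigest()

def verify_license(device_id: str, token: str) -> bool:
    return hmac.compare_digest(issue_license(device_id), token)

def boot(device_id: str, token: str) -> None:
    if not verify_license(device_id, token):
        raise SystemExit("license check failed: refusing to run")
    print("software authorized for device", device_id)

boot("DEVICE-123", issue_license("DEVICE-123"))  # runs normally
```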

But if the insecure operating system is installed for the purpose of data theft, then this normal license protection may not help – even if it doesn’t allow normal use, it may not stop data access. In other words, it could be problematic if copies of this insecure operating system proliferate. However, it doesn’t seem that hard to make sure that a one-time-use operating system never leaks out.

It therefore appears there are no major technical barriers, or even immediate consequent difficulties, that prevent Apple from complying with the court order. Furthermore, it is hard to imagine a stronger case for law enforcement to gain access to encrypted data. In fact, a survey finds only 38 percent of Americans side with Apple and agree that they shouldn’t unlock the terror suspect’s phone. Nevertheless, there remain issues.

This is not how a typical user wants to access data on an iPhone, but the FBI has the device and can surely open it up to gain physical access. Michelle Ress, CC-BY-NC-ND-2.0

Our secure systems already fail all the time

It’s not easy to build a secure system. Breaches are reported every day, in spite of the best efforts of many. And the defenses that Apple has been asked to remove have already been violated, at least for some versions of Apple’s products. Every additional wrinkle in the system design makes it more likely that new exploits will be found.

There is little question that this particular request from the FBI will not be the last. In all likelihood, Apple would be asked to use the desired insecure iOS in other situations in the future. With every use, the possibility increases of the software being leaked.

It’s also worth noting that this is Farook’s work phone: he apparently had a separate personal phone. Furthermore, law enforcement already has access to almost all the data of interest on this phone, from a combination of two sources: about six weeks prior to the shooting, the iPhone was backed up (synced) to iCloud, and Apple was able to give this cloud data to the FBI; and for the last six weeks, Verizon provided the phone call records. So what law enforcement is seeking are any new contacts entered during the last six weeks, and any notes or other data recorded on the phone.

Finally, the FBI does have access to the data in encrypted form without any help from Apple. This encrypted data looks like gobbledygook and must be decrypted before it makes sense. (In contrast, if they had, or could guess, the PIN, they would directly have access to the data in the convenient form ordinary users see.) The point of encryption is to make decryption hard. However, hard does not mean impossible. The FBI could decrypt this data, with sufficient effort and computational power, and could do so with no help from Apple. However, this route would be expensive and would take some time. In effect, what they’re requesting of Apple is to make their job easier, cheaper and faster.

Ultimately, how this matter gets resolved may depend more on the big-picture question of what privacy rights we as a society want for the data we record on our personal devices. Understanding the technical questions can inform this discussion.