Apple's Privacy Mythology Doesn't Match Reality


Throughout 2021, Apple has cast itself as the world’s superhero of privacy. Its leadership insists that “privacy has been central to our work … from the very beginning” and that it’s a “fundamental human right.” Its new advertising even boasts that privacy and the iPhone are the same thing. This past spring, the company rolled out a software update (iOS 14.5) that lets users say no to apps surveilling their activity across the internet, and the rollout demonstrated something important: people choose privacy when they don’t have to struggle for control over their information. Now only 25 percent of users consent to that tracking; before, nearly 75 percent consented by omission, letting their information fuel targeted advertising. As Apple plans to add more privacy protections in iOS 15, which will be released next month, it continues to brand itself as a force potentially capable of slowing down growth at Facebook, a paragon of surveillance capitalism. Unfortunately, Apple’s privacy promises don’t show the full picture.

The company’s most alarming privacy failing may also be one of its most profitable: iCloud. For years, the cloud-based storage service has further entrenched hundreds of millions of Apple customers in the company’s ecosystem, an internet-enabled extension of your hard drive designed for effortlessly offloading photos, movies, and other files to an unseen backup drive. Unfortunately, iCloud makes it nearly as easy for the police to access all of those files.

In the past, Apple has been adamant that it won’t weaken the security of its own devices to build in a back door. But for older devices, the door is already built in. According to Apple’s law enforcement manual, anyone running iOS 7 or earlier is out of luck if they fall into the police’s or ICE’s crosshairs: with a simple warrant, Apple will unlock the phone. This may seem par for the course in Silicon Valley, but most tech giants’ CEOs haven’t previously proclaimed that warrants for their devices endanger “the data security of hundreds of millions of law-abiding people … setting a dangerous precedent that threatens everyone’s civil liberties.” The service is possible because of security vulnerabilities that were eventually addressed in later operating systems.

Since 2015, Apple has drawn the FBI’s and the Justice Department’s ire with each new round of security enhancements, for building devices too secure for even Apple to crack. But the dirty little secret behind nearly all of Apple’s privacy promises is that there has been a back door all along. Whether it’s iPhone data from Apple’s latest devices or the iMessage data the company has constantly championed as “end-to-end encrypted,” all of this data is vulnerable when you use iCloud.

Apple’s simple design choice to hold onto iCloud encryption keys created complex consequences. The company doesn’t do this with your iPhone (despite government pleas). It doesn’t do this with iMessage. Some benefits of making an exception for iCloud are clear: if Apple didn’t hold the keys, account users who forgot their passwords would be out of luck, since truly secure cloud storage would mean the company itself was no better able than a random attacker to reset your password. And yet retaining that power gives Apple the terrifying ability to hand over your entire iCloud backup when ordered.

iCloud data goes beyond photos and files to include location data, such as from “Find My iPhone” or AirTags, Apple’s controversial new tracking devices. With a single court order, all of your Apple devices could be turned against you and made into a weaponized surveillance system. Apple could fix this, of course. Plenty of companies offer secure file-sharing platforms. The Swiss firm Tresorit provides true “end-to-end encryption” for its cloud service: users see their files uploaded to the cloud in real time and synced across multiple devices, but the users, not Tresorit, hold the encryption keys. This does mean that if users forget their password, they also lose their files. But as long as providers have the power to recover or change passwords, they have the power to hand that information to the police.
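To make the difference concrete, here is a minimal sketch, in Python with the widely used `cryptography` package, of what user-held keys look like in practice: the key is derived from the user’s password on the user’s own device, so the provider stores only ciphertext it cannot read or recover. The function names and parameters are illustrative assumptions, not Tresorit’s or Apple’s actual implementation.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the password on the user's device.
    The provider never sees the password or the resulting key."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000
    )
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))


def encrypt_for_upload(data: bytes, password: str) -> tuple[bytes, bytes]:
    """Encrypt locally; only the salt and ciphertext ever leave the device."""
    salt = os.urandom(16)
    ciphertext = Fernet(derive_key(password, salt)).encrypt(data)
    return salt, ciphertext


def decrypt_after_download(salt: bytes, ciphertext: bytes, password: str) -> bytes:
    """Without the password this step is impossible, whether for the provider,
    the police, or a user who has forgotten it; the files are simply gone."""
    return Fernet(derive_key(password, salt)).decrypt(ciphertext)
```

Because the provider holds neither the password nor the key, it has nothing readable to hand over in response to a court order; the trade-off is that a forgotten password really does mean lost data, the design choice Apple has declined to make for iCloud.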

The threat is only growing. Under a new suite of content moderation tools, Apple will scan iCloud uploads and iMessage communications for suspected child sexual abuse material (CSAM). While the company once searched only photos uploaded to iCloud for suspected CSAM, the new tools can turn any photo or text you’ve sent or received against you. Thwarting CSAM is a noble goal, but the consequences could be disastrous for those wrongly accused when the AI fails. And even when the software works as intended, it could be deadly. As Harvard Law School instructor Kendra Albert noted on Twitter, these “features are going to get queer kids kicked out of their homes, beaten, or worse.” Software launched in the name of “child safety” could be a deadly threat to LGBTQ+ children outed to homophobic and transphobic parents. Just as chilling, the tools used to track CSAM today could easily be trained to flag political and religious content tomorrow.
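That last point follows from how hash-list scanning generally works: the scanner reduces each image to a fingerprint and checks it against an opaque list supplied from above, with no way to know, or care, why an entry is on the list. The sketch below is a deliberately simplified, hypothetical illustration using an exact cryptographic hash; Apple’s announced system uses a perceptual “NeuralHash” and on-device matching, but the structural point, that whoever controls the list controls what gets flagged, is the same.

```python
import hashlib

# Whoever supplies this list decides what gets flagged. The scanner cannot
# distinguish a CSAM fingerprint from a political or religious one.
FLAGGED_FINGERPRINTS: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to an opaque fingerprint (a real system would use a
    perceptual hash so that near-duplicate images still match)."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_report(image_bytes: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the supplied list."""
    return fingerprint(image_bytes) in FLAGGED_FINGERPRINTS
```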

Apple’s privacy threats aren’t confined to the cloud or iMessage. NBC recently reported allegations that the company and other tech giants coerced call center workers into accepting company cameras in their homes, even their bedrooms, to track remote work productivity (an Apple spokesperson told NBC that the company “prohibits the use of video or photographic monitoring by our suppliers”). These allegations follow complaints about Apple’s earlier use of facial recognition within its stores, a claim the tech giant also denied.

Apple also appears to be on the cusp of integrating facial verification into a new digital ID card, essentially a digital version of a government-issued form of identification, like a driver’s license. Facial verification and facial recognition are, of course, different technologies: verification matches your face against your own stored template to confirm you are who you claim to be, while recognition searches a face against a database of many people. But recent scholarship suggests that normalizing the former may psychologically predispose people to embrace the latter. Face-powered digital IDs blur that line and present some of the same risks as police facial recognition, because the easier Apple makes it for governments to integrate facial verification into ID checks, the more police and other agencies will turn to biometric identification.

With more than 1 billion iPhone users, such a move would accelerate the normalization of both automated ID checks and automated facial scanning. Even if, hypothetically, Apple’s own software were flawless, the fact remains that many companies offer facial verification services, and some are biased and error-prone, especially for women and people with darker skin. Facial verification errors already block access to resources like unemployment benefits. As people grow accustomed to using their faces as ID, they will lose sight of the threat this technology poses. Once face scans become mundane, vulnerable communities will pay a steep price so others can gain minor conveniences.

Apple’s deep penetration of the mobile phone market, the very thing it emphasizes when touting its privacy protections, gives it vast power over people’s habits. By changing its software, Apple not only changes our behavior; it subconsciously shifts our beliefs. The adjustment re-engineers fundamental aspects of our humanity, like what we expect, desire, and deem socially reasonable. We come to equate the facial recognition on our phones with the systems deployed by police, even though they share little beyond a name. When our phone’s facial recognition fails, we can be locked out for a few seconds. When police facial recognition fails, our neighbors can be locked behind bars for days, weeks, or even longer.

If Apple wants to sell the world privacy, it shouldn’t hide pathways to authoritarianism in the fine print. True privacy means selling services that protect our data not just from being harvested by ad tech vendors but also from governments, both foreign and domestic.

Albert Fox Cahn (@FoxCahn) is the founder and executive director of the Surveillance Technology Oversight Project (STOP), a New York–based civil rights and privacy group, and a visiting fellow at Yale Law School’s Information Society Project. Evan Selinger (@EvanSelinger) is a professor of philosophy at Rochester Institute of Technology.
