My Fix for Apple Privacy Panic

Apple believes they’ve made the most privacy-respecting Child Safety Features in the history of… technology. Others believe Apple’s new system is dangerous, invasive, and… as anti-privacy as you can get.

These features are currently slated to ship in the US as part of Apple’s fall software updates, including iOS 15 and iPadOS 15, and I’ve already done a 43-minute (yes, 43-minute) mega deep-dive video on exactly how they work, why Apple says they’re doing them, and what the main objections are, and I urge anyone even remotely interested or incensed by any of this to watch it. Link in the description, the comments… everywhere.

But, TL;DW — Apple says they want to disrupt the cycle of grooming and child predation in the Messages app, automatically blur sexually explicit images, warn on send and receipt, and optionally alert parents or guardians so children can get help and stay safe. Also, get child sexual abuse material (CSAM) off the iCloud servers, but without scanning our full iCloud Photo Libraries, by instead doing on-device hash matching on upload, to maintain what they believe is the most privacy possible.

Others warn the Communication Safety alerts could lead to outing non-hetero children, exposing them to abandonment or abuse, and the on-device component of CSAM detection, rather than maintaining privacy, irrevocably shatters it, inevitably opening up Apple devices to wider detection by increasingly authoritarian governments.

So, what can we do about it?

Communication Safety

Communication Safety, as currently implemented, is opt-in. Parents or guardians have to turn it on for child devices as part of Family Sharing’s control system. For child devices set to 17 or under, it will automatically blur any sexually explicit images that come in over iMessage or SMS/MMS, require a tap or click to open them, and a second tap or click to go through a warning screen before they open. Similar warnings and tap-throughs are required if the child device tries to send a sexually explicit image.

The sexually explicit images are detected by computer vision running on-device, in the Messages app, similar to what Apple’s been doing for years already in the Photos app to tag cats and cars and a bunch of other stuff for search. In the Messages app, it only tags sexually explicit images, doesn’t block any messages or images, and doesn’t report anything to Apple or law enforcement.

Instead, on devices set up for children 12 or under, parents or guardians have the added option of setting up a notification. That way, if the child taps to view, taps through the warning, and then taps through an additional warning that the parent or guardian will be notified, the notification gets sent. Still not to Apple, still not to law enforcement, just to the parent or guardian… but that’s the exact objection here.
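Before getting to that, if it helps to see the flow laid out, here’s a rough Swift sketch of it. To be clear, none of this is Apple’s actual code or API; the account, presentation, and isSexuallyExplicit names are stand-ins I made up, and it’s just my reading of the steps above:

```swift
import Foundation

// A rough sketch only. None of these types exist in Apple's frameworks.
struct ChildAccount {
    let age: Int
    let parentNotificationsEnabled: Bool   // the opt-in, 12-and-under-only setting
}

enum MessageImagePresentation {
    case showNormally
    case blurredWithWarning(notifiesParentOnView: Bool)
}

// Everything here runs on-device; nothing is sent to Apple or law enforcement.
func presentation(for image: Data,
                  on account: ChildAccount,
                  isSexuallyExplicit: (Data) -> Bool) -> MessageImagePresentation {
    guard account.age <= 17, isSexuallyExplicit(image) else {
        return .showNormally
    }
    // Only 12-and-under devices, with the option enabled, add the extra
    // "your parent or guardian will be notified" warning before viewing.
    let notifiesParent = account.age <= 12 && account.parentNotificationsEnabled
    return .blurredWithWarning(notifiesParentOnView: notifiesParent)
}
```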

The notification warning may deter some… even most children, and some… even most parents or guardians really will be there to help and to get help, but… some won’t. And that means already at-risk children, specifically non-hetero children, could be outed by this system, putting them further at risk for abuse or abandonment.

So, what I’d like to see is Apple change the option from notification to an actual block. Instead of just blurring and warning about sexually explicit images, with the option to notify the parent or guardian, it’d be the same thing but with the option to block the image completely.
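In terms of the sketch above, the change is tiny: the 12-and-under option resolves to a block instead of a notification. Again, hypothetical names, not real API:

```swift
// Hypothetical revision of the earlier sketch: the 12-and-under option becomes a
// block instead of a parent notification. Nothing gets sent to anyone at all.
enum MessageImagePresentation {
    case showNormally
    case blurredWithWarning   // the tap-throughs, warnings, and help resources stay the same
    case blocked              // the image simply never gets shown
}

func presentation(isExplicit: Bool, age: Int, blockEnabled: Bool) -> MessageImagePresentation {
    guard age <= 17, isExplicit else { return .showNormally }
    return (age <= 12 && blockEnabled) ? .blocked : .blurredWithWarning
}
```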

A parent or guardian could still take physical custody of a child device either way, under either implementation, and the warnings, explanations, and resources to get help would still be presented either way as well. And yes, this will reduce the chances of a positive parent or guardian intervention, but it will also prevent the chances of a negative parent or guardian intervention through this system.

But not only does changing notify to block remove the potential for data leaks, it’s more in keeping with the existing Content & Privacy Restrictions, which have for years let parents and guardians block explicit lyrics, R-rated movies, and access to other apps and services entirely.

This is obviously very, very different from that, but so are the potential ramifications and precedents it sets, so if Apple is intent on going ahead with Communication Safety, that’s my suggestion for addressing the objections and reducing the potential risks.

CSAM Detection

CSAM Detection compares NeuralHashes from a blinded database of known, existing CSAM images, provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations, to NeuralHashes of the images on your device as they’re being uploaded to iCloud Photo Library. Because the database is blinded, it takes a secret on the iCloud server to decrypt the headers. Any matched hashes are stored in secure vouchers, and the system periodically creates synthetic vouchers so the server can never really know the exact number of true matches. If and when an unknown threshold of matched hashes is reached, all the vouchers unlock and get forwarded for manual, human review at Apple, and if verified, that leads to the account being shut down and a report being sent to NCMEC, who can refer it on to law enforcement.
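If it helps, here’s a toy Swift model of that flow. It is emphatically not Apple’s code: the real system hides the match result and the true match count behind the blinded database, private set intersection, and threshold secret sharing, all of which I’ve flattened into plain values just to show where each step happens:

```swift
import Foundation

struct SafetyVoucher {
    let isMatch: Bool   // in the real system this is hidden by the crypto, and
                        // synthetic vouchers further blur the true count
    let payload: Data   // only becomes readable once the threshold is crossed
}

// On-device, at iCloud Photo Library upload time: one voucher per uploaded photo.
func makeVoucher(neuralHash: Data, knownCSAMHashes: Set<Data>, payload: Data) -> SafetyVoucher {
    SafetyVoucher(isMatch: knownCSAMHashes.contains(neuralHash), payload: payload)
}

// Server-side: nothing is reviewable until enough matches accumulate; matched
// vouchers then go to manual, human review at Apple and, if verified, to NCMEC.
func vouchersForReview(_ vouchers: [SafetyVoucher], threshold: Int) -> [SafetyVoucher] {
    let matches = vouchers.filter { $0.isMatch }
    return matches.count >= threshold ? matches : []
}
```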

And if that sounds incredibly cumbersome, complex, and confusing, that’s why I made a 43-minute video about it. Please watch and share.

Now, the objection here isn’t just that Apple is matching hashes on-device, because Apple’s been full-on scanning images using computer vision to enable face and object search in the Photos app for years, and will be doing machine learning character recognition with Live Text soon as well. They’ve also been detecting malware signatures and enforcing DRM on iOS for years.

It isn’t even that Apple is detecting and reporting CSAM at all. Facebook, Google, Microsoft, Twitter, Imgur, TikTok, Snapchat, and pretty much everyone else has been doing full-on server-side scanning for and reporting of CSAM for upwards of a decade as well.

But… on their servers. Which many people seem OK with, as if the act of uploading to a company’s computer relinquishes their feelings of complete privacy over the images. Plus… everybody’s doing it.

It’s Apple’s combination and conflation of those two things, on-device hash matching with server-side reporting, that’s creating a ton of pushback.

One, because it eliminates the last mainstream photo storage option that wasn’t doing any scanning at all, which makes it intolerable to privacy absolutists and leaves them with… no other mainstream options.

Two, because it’s putting something on our devices that’s reporting off our devices, and while Apple thinks on-device is always more private, the reporting part makes many people feel the exact opposite: that it’s a proactive, presumption-of-guilt-fueled violation.

Three, because while Apple can do anything on iOS at any time, doing this on iOS, and doing it now, takes what was a vague, kind of ephemeral truth and solidifies it into a screeching, neon alarm bell. And it makes them feel like it’s no longer a matter of if but when Apple will increase the scope of the detection, either on their own or because of government pressure.

Apple has pushed back on this, saying it’s not possible and even if it were, they’d never do it, but it’s the harshest of reminders that we’re all ultimately at the complete and utter mercy of every platform company and device maker, with changing leadership and conflicted business interests, always, forever.

And while some will say Apple wouldn’t have over-engineered such a specific system if they weren’t hoping to keep the scope restricted, forestall or prevent increasing regulation over content reporting and anti-encryption, and maybe even lock down the rest of iOS user data even more strongly…

…Others will say Apple wouldn’t have over-engineered such a specific system if they weren’t planning to increase the scope of detection and broker deals with regulators, with this as basically a proof-of-concept implementation.

And the worst part is no one can ever really know which of those it is, because, like Schrödinger’s dead-or-alive cat, we’re all just left wondering what may or may not come next, and can’t ever know until it actually does at some point, or doesn’t… forever.

So, if a government is emboldened by these new features, maybe Apple will fight and win, like they did over breaking encryption in the San Bernardino case, or maybe they’ll fight and lose, like they did over repatriating iCloud data to China, or maybe we’ll never know either way, like we don’t know whether they fought or didn’t when coming up with these Child Safety features to begin with, and we’ll just get a privacy-centric terrorist radicalization detection system next.

Four, hat tip to John Gruber: given that Facebook made 20 million CSAM reports last year, Google 500 thousand, and most other tech companies 100 thousand or more, while Apple made 265… period… it may simply be impossible for the manual, human review team to cope with what the launch of this system will kick up.

I don’t know if there’s any way to solve for that, but for the rest of it, I’d like to see Apple move the on-device element off device. Since Apple believes on-server scanning is a violation of privacy, and a significant group of users sees on-device matching as a violation, period, how about moving the hash detection to a private relay server that’s not Apple’s? Like the inverse of Apple’s upcoming Private Relay service proper. That private relay server would do the hash matching and secure voucher generation, Apple’s server would still hold the secret for header decryption and the voucher threshold, nothing would be on our devices, and Apple wouldn’t have to scan our full libraries.

Zero violation for us, zero knowledge for Apple.
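Sketched out, with completely made-up names and reusing the same toy voucher idea from before, the split would look something like this:

```swift
import Foundation

// Entirely hypothetical. The relay operator sees NeuralHashes but holds no
// decryption secret; Apple holds the secret and the threshold but never
// touches the device and never scans the full library.

struct SafetyVoucher { let payload: Data }   // contents opaque to the relay

protocol HashMatchingRelay {                 // run by a third party, not Apple
    func voucher(forPhotoHash hash: Data, payload: Data) -> SafetyVoucher
}

protocol AppleICloudSide {                   // Apple's server, same role as the current design
    func vouchersForReview(_ vouchers: [SafetyVoucher], threshold: Int) -> [SafetyVoucher]
}
```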

And if that can’t work, because I’m just a dumb YouTuber who doesn’t know what the hell I’m talking about, and not a genius privacy engineer who already looked at this eight ways from Babylon, fall back to on-server scanning, because if Apple needs to detect CSAM but doesn’t want to scan the server, well… need beats want, and if a compromise has to be made, Apple has to be the one to make it, not users, never users.

Because that way, even if Apple says the privacy is technically worse, the sanctity of our devices will be absolutely better. And every time China, or the U.S., the U.K., the E.U., Canada, Australia, Belgium, anyone comes knocking, Apple can say, flat out: bring all the server-side warrants you want, but we don’t fuck with user data on device, not for this, not for anything, not for you, not for anyone, not ever.

Let me know what you think in the comments below, let Apple know via apple.com/feedback or email, and then share this video and I’ll see you in the next one!