Published on October 23rd, 2019
Encryption and Combating Child Exploitation Imagery

Child exploitation images are a horrific problem. Even the most clinical descriptions (such as the 23 sites described in the Freedom Hosting NIT warrant application) turn the stomach and chill the soul. Many individuals and companies, such as Facebook, go to extraordinary efforts to fight this menace. The larger computer security community, however, often seems to take a bit of a glib attitude toward the issue, with even reputable and honest experts like Bruce Schneier describing child pornography as one of the “Four Horsemen of the Information Apocalypse,” along with “terrorists, drug dealers, [and] kidnappers[.]”

This disconnect is a problem for government efforts to address child exploitation online, particularly concerning encryption. The current systems for detecting these images rely on bulk surveillance by private companies, and even the most cursory encryption, with or without “exceptional access,” will eliminate this surveillance. If the government is serious about policy changes designed to keep this detection capability in the face of encryption, however, the best policy is not to weaken communication security but instead to mandate endpoint scanning of images as they appear on phones and computers.

The government is partially responsible for that same disconnect. Although the problem is horrific, proposals to weaken encryption often invoke child exploitation images as an excuse in ways that don’t meaningfully address the problem. Attorney General William Barr’s recent complaints about “warrant-proof encryption” are simply the latest in a long history.

Let’s begin by reviewing what steps Facebook and other entities currently take to detect known child exploitation images. Using the PhotoDNA technology, Facebook processes every image that passes through its network to create a hash: a (relatively) short numerical representation of the image that is effectively a unique identifier. The hash is basically a “fingerprint” of the photo: The photo itself can’t be recovered from the hash, but the hash uniquely identifies the photograph. Facebook then checks that hash value against a large list of known exploitation images maintained by the National Center for Missing & Exploited Children (NCMEC). If an image matches an NCMEC hash, Facebook automatically creates a report and forwards it to NCMEC.
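Conceptually, the pipeline is simple, as the sketch below shows. PhotoDNA’s details are secret, so this substitutes a toy “average hash” for the perceptual hash, and a small Python set with a made-up value stands in for the NCMEC list; real PhotoDNA matching also tolerates near-duplicate images rather than requiring exact equality.

```python
# A minimal sketch of the hash-and-match flow, assuming a toy "average
# hash" in place of PhotoDNA (whose details are secret) and a plain
# Python set in place of the NCMEC list. Requires Pillow (pip install Pillow).
from PIL import Image

def perceptual_hash(path: str) -> int:
    """Toy perceptual hash: shrink to 8x8 grayscale, then emit one bit
    per pixel marking whether it is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# Hypothetical hash list; the real one is maintained by NCMEC.
KNOWN_HASHES = {0x8F3C_0000_0000_00FF}

def check_image(path: str) -> None:
    h = perceptual_hash(path)
    if h in KNOWN_HASHES:
        print(f"match: would file a report for {path} (hash {h:016x})")
```

Note that a perceptual hash like this is deliberately unlike a cryptographic hash: visually similar images should produce identical or near-identical hashes, which is what lets the system catch re-encoded copies of a known image.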

Facebook is far from the only company to use this technique. Microsoft, which developed PhotoDNA, even makes the technology available as a free cloud-based service to approved partners. The implementation details are kept secret to prevent someone from taking child exploitation images and tweaking them to evade detection.

Of course, this works only because Facebook and other Silicon Valley companies are engaged in a campaign of mass surveillance: They see and check every photograph they can. Barr’s recent letter to Facebook, in which he calls on Facebook to maintain its access to content in order to keep scanning for this material, was simply a request to maintain this status quo.

So what would happen if Barr could magically obtain the sort of messenger encryption he professes to desire? In that world, every message would be encrypted but with a method that enables “exceptional access,” allowing law enforcement to get individual warrants to access an individual’s communications.

Such an exceptional access mechanism would do effectively nothing to combat child exploitation images, however. Facebook’s program is a bulk surveillance program—meaning that law enforcement would need to decrypt and then search every image transmitted, which clearly runs counter to the idea of access being “exceptional.” Barr’s wish for normal encryption but exceptional access has little to do with fighting this scourge.

Rather, if Barr were truly serious about the problem and wanted to propose legislation, the ideal legislative solution would not try to weaken encryption. Instead, an effective proposal would go around encryption by mandating that everyone’s device examine every image—turning the current centralized mass surveillance system into a privacy-sensitive, distributed surveillance system. In this world, everyone’s phone and computer would check to see if the machine contained known child-abuse images; if present, those images would then be reported to the government or NCMEC. This would ensure that the status quo results of the current bulk surveillance system are maintained even if every communication were encrypted.

Such legislation would require that the National Institute of Standards and Technology (NIST), working with the private sector, develop and publish a public version of PhotoDNA. It needs to be public because introducing it onto phones and computers would naturally reveal the details of the algorithm, and NIST has the experience necessary to select public algorithms in the same manner it selects cryptographic algorithms.

A public version would be less resistant to someone permuting images—but this is acceptable, as once a permuted version is discovered through some other means, the image is once again detectable. Additionally, both phones and computers currently have protected computing environments, usually for digital rights management, that could also be used to protect the “public” PhotoDNA algorithm from tampering.

NCMEC would then provide a downloadable approximate hash database. Such a database wouldn’t list every hash explicitly but would instead use a probabilistic data structure, such as a Bloom filter, to save space. With such a structure, a hash that is in the database will always be reported as a match, while a hash that is not in the database will almost always, but not quite always, be reported as absent; those rare false positives are what the exact check in the next step resolves.
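A Bloom filter makes this concrete. The sketch below is one minimal way to build such a structure; the bit-array size, the number of hash functions, and the use of SHA-256 are illustrative choices, not anything NCMEC has specified.

```python
# A minimal Bloom filter: set membership with no false negatives and a
# tunable false-positive rate. All parameters here are illustrative.
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: bytes):
        # Derive the k bit positions from SHA-256 of a counter plus the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item: bytes) -> bool:
        # True for everything added (guaranteed), and occasionally True
        # for items never added (the false positives discussed above).
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

db = BloomFilter()
db.add(b"known-image-hash")                        # placeholder hash bytes
assert db.probably_contains(b"known-image-hash")   # always True
print(db.probably_contains(b"other-image-hash"))   # almost surely False
```

The trade-off is the point: the filter is a small fraction of the size of the full hash list, at the cost of occasional false positives, which is why a local hit is confirmed against NCMEC before anything is reported.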

The legislation would finally require that all major operating system vendors, notably Apple (macOS/iOS), Microsoft (Windows) and Google (Android), include code to automatically scan every image and video as it is downloaded or displayed, calculate its hash, and check it against a local copy of the database. If the image matches, the system must query NCMEC to see whether there is an exact hash match and, if so, upload a copy to NCMEC along with identifying information for the computer or phone.
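Stitched together, the on-device logic might look like the sketch below, which reuses perceptual_hash and BloomFilter from the earlier sketches. The NCMEC-facing functions are hypothetical stubs: the article specifies no such interface, and a real one would have to be defined by the legislation.

```python
# A sketch of the endpoint pipeline, reusing perceptual_hash and BloomFilter
# from the sketches above. ncmec_exact_match, ncmec_report and
# device_identifier are hypothetical stubs, not real APIs.

LOCAL_DB = BloomFilter()  # would be refreshed from NCMEC's downloadable database

def ncmec_exact_match(image_hash: bytes) -> bool:
    # Placeholder: a network query against NCMEC's full, exact hash list.
    return False

def ncmec_report(path: str, image_hash: bytes, device_id: str) -> None:
    # Placeholder: upload the image plus identifying information to NCMEC.
    print(f"reporting {path} from {device_id}")

def device_identifier() -> str:
    # Placeholder for OS-specific device identification.
    return "device-0000"

def on_image_displayed(path: str) -> None:
    """Hook the OS would invoke whenever an image is downloaded or displayed."""
    image_hash = perceptual_hash(path).to_bytes(8, "big")
    # Fast local check: a "no" is definitive, a "yes" is only probable.
    if not LOCAL_DB.probably_contains(image_hash):
        return  # the overwhelmingly common, entirely local case
    # Confirm the probable hit with an exact match before reporting.
    if ncmec_exact_match(image_hash):
        ncmec_report(path, image_hash, device_identifier())
```

One design consequence worth noting: in this structure, no information leaves the device unless the local filter fires, so the privacy cost of the scan is concentrated on probable matches.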

This would offer several advantages over the existing system. Cryptographic protections would simply become a nonissue, as the scanning would take place when the image is displayed. It would also significantly reduce the number of companies that need to be involved: only a handful of operating system vendors, rather than a plethora of image-hosting services and other providers, would need to deploy the resulting system.

Of course, civil libertarians will object. After all, this is mandating that every device be a participant in government-mandated mass surveillance—so perhaps it might be called a “modest” proposal. It is privacy-sensitive mass surveillance, as devices only report images that probably match the known NCMEC database of child exploitation images, but it is still mass surveillance.

Unlike complaints about “warrant proof” message encryption, however, this would at least work to meaningfully address the problem of known child exploitation images.


