
Published on July 17th, 2020


Data Protection in the Wake of Deepfakes



Multimedia content should be original (i.e., authentic), and users should feel safe when consuming it. With the rise of face-swapping applications and deepfake generators, the authenticity of multimedia content is being compromised more than ever.

Recent reports allege that popular apps such as TikTok are in the process of introducing deepfake features into their platforms. These emerging apps could harvest large amounts of face data to train their models and improve the accuracy of their generation algorithms.

On the other hand, the acquired face data can be used for unauthorized face swaps and for creating deepfakes. This will have a significant impact on users' privacy and security.

Increasingly, governments around the world are reacting to these privacy-evading applications (e.g., India banning TikTok and the USA investigating TikTok's privacy practices) and are enacting laws to reduce the impact of deepfakes. For instance, the USA introduced the DEEPFAKES Accountability Act in 2019, which would require deepfakes to be watermarked for the purpose of identification.
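To make the watermarking idea concrete, the sketch below embeds a short provenance label into the least significant bits of an image's red channel. This is a minimal, hypothetical illustration using Pillow and NumPy; the Act does not prescribe any particular technique, and a real provenance watermark would need to survive re-encoding, cropping and compression, which this one does not.

```python
# Minimal LSB watermarking sketch (illustrative only).
import numpy as np
from PIL import Image

LABEL = "SYNTHETIC-MEDIA"  # hypothetical provenance tag

def embed_label(in_path: str, out_path: str, label: str = LABEL) -> None:
    """Hide `label` in the least significant bit of the red channel."""
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(label.encode(), dtype=np.uint8))
    red = pixels[..., 0].ravel().copy()   # flat copy of the red channel
    if bits.size > red.size:
        raise ValueError("image too small for label")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless format

def read_label(path: str, n_chars: int = len(LABEL)) -> str:
    """Recover an embedded label from the red-channel LSBs."""
    pixels = np.array(Image.open(path).convert("RGB"))
    bits = pixels[..., 0].ravel()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")
```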

To reduce this imminent threat to privacy, future multimedia applications should deploy solutions that measure content authenticity and, in turn, protect user privacy and improve end-user satisfaction, or the overall Quality of Security (i.e., QoSec).

Is it personal data? Some researchers argue that fake images or videos cannot be considered personal data, since after modification they can no longer be attributed to any individual. Under the GDPR, however, personal data does not need to be objective: subjective information such as opinions, judgements or estimates can also be personal data. Therefore, deepfakes could fall within the broader classification of personal data as defined in the GDPR.

On a separate issue, the GDPR applies only to information relating to an identifiable living individual; information relating to a deceased person does not constitute personal data and is therefore not subject to the GDPR. This is a problem when deepfakes of deceased individuals such as popular politicians, celebrities or spiritual leaders are created to spread misleading information. To this end, legislators are enacting supplementary laws (e.g., Hungary's privacy act) that require consent from the heirs of such deceased persons.

Creation of deepfakes: If deepfakes are categorized as personal data, processing of personal images or videos must be based on either informed consent or legitimate interest. If the creators (controllers or processors) have not obtained prior consent, they are violating the GDPR.

Consent is twofold in this case: it should be obtained not only from the person in the original video, but also from the person in the fabricated video. Users need to be vigilant when using apps such as FaceApp, TikTok or other emerging face apps, and should read their privacy notices carefully to understand how their data may be used for further processing.

The acquired face data may not necessarily be used for creating deepfakes; it may instead serve as training data for the apps' models. Either way, explicit consent should be enforced by these categories of apps to avoid privacy and security infringements.

Under the GDPR, a Data Protection Impact Assessment (DPIA) must be carried out to minimize the risk to individuals arising from processing. The use of new technologies (such as creating deepfakes with algorithms like Generative Adversarial Networks (GANs)) and the collection of biometric data (including face data) are processing activities that require a DPIA under the GDPR. These measures inherently protect the privacy of individuals who use these new apps.

Potential solutions: Video fingerprinting and explicitly marking fabricated content could mitigate this problem to an extent. Another popular approach is to probe the provenance of image and video content to find its origin, for example by conducting a reverse image search for similar content that appeared earlier; a sketch of the fingerprinting idea follows below. Recent research also suggests that distributed technologies such as blockchain can provide much-needed protection against fake news, disinformation and deepfakes.
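As a rough sketch of fingerprinting, the snippet below computes a difference hash (dHash), a common perceptual fingerprint: visually similar frames produce hashes that differ in only a few bits, so a low Hamming distance between a suspect frame and a known original suggests derived content. This is an illustrative implementation using only Pillow, not a production provenance system, and the threshold shown is an assumption.

```python
# Perceptual difference-hash (dHash) sketch for near-duplicate detection.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Fingerprint an image: shrink, grayscale, compare adjacent pixels."""
    img = Image.open(path).convert("L").resize(
        (hash_size + 1, hash_size), Image.Resampling.LANCZOS
    )
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A distance of roughly <= 10 (out of 64 bits) hints that the suspect
# frame was derived from the reference; thresholds are application-specific.
# if hamming(dhash("original.png"), dhash("suspect.png")) <= 10: ...
```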

AI and ML algorithms such as GANs are widely deployed to create lifelike deepfakes. The collection of sample data through emerging face apps, with or without user consent, has increased the effectiveness of these algorithms by providing large datasets on which to train them.
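To make the GAN idea concrete, here is a toy generator/discriminator pair and one adversarial training step in PyTorch, run on random tensors rather than face data. It is a minimal sketch of the adversarial setup under those assumptions, nowhere near an actual deepfake pipeline.

```python
# Toy GAN sketch (PyTorch): one adversarial step on random "images".
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(               # maps noise -> fake sample
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(           # maps sample -> real/fake logit
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(32, img_dim) * 2 - 1   # stand-in for a real-face batch

# Discriminator step: label real samples 1, generated samples 0.
fake = generator(torch.randn(32, latent_dim)).detach()
d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
          + loss_fn(discriminator(fake), torch.zeros(32, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to make the discriminator call fakes real.
fake = generator(torch.randn(32, latent_dim))
g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```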

Most of the proposed detection methods are also based on AI and ML. However, training these detection models takes considerable time, and embedding such complex detection algorithms in real-time applications will be a challenge.
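The real-time constraint can be quantified: a detector must classify each frame faster than the frame interval, about 33 ms at 30 fps. The sketch below times a small, hypothetical stand-in CNN per frame; it assumes PyTorch and is not any particular published detector.

```python
# Per-frame latency check for a hypothetical deepfake-frame classifier.
import time
import torch
import torch.nn as nn

# Small stand-in CNN; a real detector would be far larger (and slower).
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # logit: > 0 means "fake"
).eval()

frame = torch.rand(1, 3, 224, 224)          # one RGB video frame

with torch.no_grad():
    start = time.perf_counter()
    logit = detector(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000

budget_ms = 1000 / 30                       # ~33 ms per frame at 30 fps
print(f"inference: {elapsed_ms:.1f} ms, budget: {budget_ms:.1f} ms, "
      f"fake={logit.item() > 0}")
```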

Like the generation of deepfakes, detection algorithms need to go through a training phase, which requires collecting more user data and would, in turn, infringe user privacy.


Dr Chaminda Hewage, BSc Eng. (Hons), PhD (Surrey), is an Associate Professor in Data Security at Cardiff Metropolitan University, UK. He is an expert in data security and researches human/social factors and emerging threats in cybersecurity. He is the principal investigator of a number of research projects exploring various frontiers of cybersecurity.





