Apple Removes Deepfake Apps After TikTok Advertising Scandal

bbc.com

Four apps offering to digitally remove clothing from photos were removed from Apple's App Store after a BBC investigation revealed they were advertised on TikTok as tools to create non-consensual deepfakes; one app was downloaded over 1 million times on Google Play, and TikTok removed the ads, but similar videos remain.

Russian
United Kingdom
Technology, Cybersecurity, TikTok, Online Safety, Deepfake, Non-Consensual Imagery, Apple App Store, Nudify Apps
Apple, TikTok, Google, BBC, Princeton Center for Information Technology Policy
Arvind Narayanan, Lena Headey, Maisie Williams
What are the long-term implications of readily available deepfake technology for online safety and the potential for abuse?
This incident underscores the urgent need for stricter regulations on deepfake technology and its distribution. The ease of access to these apps, coupled with their promotion on platforms like TikTok, poses a significant threat. Technology companies and regulators must collaborate to limit access to such tools and prevent their monetization.
What immediate actions did Apple and TikTok take in response to the discovery of apps used to create non-consensual deepfakes?
Apple removed four apps from its App Store after a BBC investigation revealed they were advertised on TikTok as tools to create non-consensual deepfakes. These apps, described as tools to "make selfies spicier," were advertised on TikTok for creating sexualized images of women without their consent, with ads urging users to "get the hottest photos of your crush." One of the apps had been downloaded over a million times on Google Play before Apple permanently removed it from the App Store.
How did the advertising strategy of these apps on TikTok contribute to the creation and dissemination of non-consensual sexualized images?
The BBC investigation highlights the misuse of deepfake technology for non-consensual pornography. The apps, advertised on TikTok, allowed users to digitally remove clothing from photos, creating sexualized images. This issue is global, with similar scandals in Spain, South Korea, and the US, showcasing the potential for blackmail and harm.

Cognitive Concepts

4/5

Framing Bias

The headline and opening paragraphs immediately establish a negative tone, focusing on the illicit and harmful nature of the apps. The sequencing of information prioritizes the negative aspects, emphasizing the exploitation and potential illegality. The inclusion of high-profile examples like Game of Thrones actresses further amplifies the negative framing.

3/5

Language Bias

The article uses strong, negative language such as "sexually suggestive," "exploitative," and "illegal." While accurately reflecting the nature of the apps, the consistent use of such loaded terms contributes to a negative and potentially biased tone. More neutral alternatives such as "inappropriate" or "potentially harmful" could have been used in some instances to reduce the emotional impact of the reporting.

3/5

Bias by Omission

The article focuses heavily on the negative impacts of the apps and the actions taken by Apple and TikTok. However, it omits discussion of potential benefits or alternative uses of deepfake technology, or any counterarguments to the narrative of widespread harm. While acknowledging space constraints is valid, including a brief mention of potential positive applications (e.g., in film or entertainment) would have provided a more balanced perspective.

2/5

False Dichotomy

The article presents a somewhat false dichotomy by focusing solely on the negative consequences of the apps, without exploring the complexities of deepfake technology itself. The implication is that the technology is inherently harmful, neglecting the potential for responsible use or the difficulty in regulating it completely.

3/5

Gender Bias

The article disproportionately focuses on the sexualization of women. While it mentions the creation of images of both genders, the examples and emphasis are heavily skewed towards female victims. The descriptions of the app functionalities strongly imply that the primary, and perhaps only, target is women. The article should provide a more balanced account of the potential impact on both men and women.

Sustainable Development Goals

Gender Equality: Negative (Direct Relevance)

The article highlights the creation and promotion of apps that generate non-consensual nude or partially nude images of individuals, predominantly women. This directly violates their right to privacy and bodily autonomy, undermining efforts towards gender equality. The use of deepfake technology to create such images is a serious abuse and exacerbates existing power imbalances and vulnerabilities faced by women.