April 8, 2025

Apple and Google pull AI ‘nudify’ apps amid rising deepfake abuse concerns

The tech giants have begun removing AI-driven “nudify” apps from their app stores following investigations that linked the tools to non-consensual sexual imagery, raising fresh concerns over platform accountability, AI misuse, and gaps in app review processes.

Apple and Google have taken action against a growing number of AI-powered “nudify” applications after investigations revealed that the tools were being used to generate non-consensual sexualised images, often of women, from publicly available photos. The removals come amid mounting pressure on major technology platforms to curb technology-facilitated abuse enabled by generative artificial intelligence.

An investigation by the Tech Transparency Project (TTP) found that more than 100 such applications were accessible to users despite app store policies that prohibit sexually explicit content. These apps use advanced AI models to digitally remove clothing from images, making the creation of explicit deepfakes faster and more accessible through consumer-facing platforms.

Platform response and revenue questions

Following the release of the findings, Apple confirmed it had removed dozens of apps flagged in the report and issued warnings to additional developers. Google said it initially suspended several applications and later permanently removed others as part of an ongoing review of policy violations.

The scale of the issue has raised concerns beyond content moderation. TTP estimates that nudify apps have collectively been downloaded hundreds of millions of times and generated over $100 million in lifetime revenue. Because app stores typically take a commission on in-app purchases, the report argues that the platform operators indirectly profited from applications that enabled abusive content.

The investigation also highlighted data security risks, particularly for apps linked to overseas developers. TTP warned that certain jurisdictions’ data retention laws could expose victims’ images to broader misuse once uploaded to such services.

Broader scrutiny of AI image tools

The controversy has intensified scrutiny of AI image-generation tools more broadly, including those integrated into popular social media platforms. Researchers noted that searches for nudify-related terms surfaced mainstream AI chatbots and image tools, prompting questions about safeguards and discoverability.

In response to public and regulatory pressure, some AI providers have moved to restrict image-editing capabilities, introduce regional limitations, and tighten access controls. However, critics argue that these measures remain reactive rather than preventive.

Advocacy groups and lawmakers in the United States and abroad have renewed calls for stricter regulation, including outright bans on applications that facilitate non-consensual sexual imagery. Regulators in Europe, the UK, and India are also increasing scrutiny of app store oversight and AI safety practices.

The episode underscores a growing challenge for technology companies: balancing rapid AI innovation with effective safeguards to prevent misuse, protect users, and uphold platform responsibility in an era of increasingly powerful generative tools.