
Clothoff, an app known for creating non-consensual deepfake porn, claims to be donating money to support people harmed by AI through a partnership with an organization called ASU Label, but questions remain about the transparency and true intentions of that partnership.
Disclaimer: This article discusses content of a sexual nature.
Clothoff, one of the best-known apps that use artificial intelligence to create non-consensual deepfake porn, has announced financial support for people who have been harmed by the misuse of AI technologies. The developers point to cooperation with the organization ASU Label, which says its goal is to “protect rights in the age of artificial intelligence.”
However, it remains unclear who is behind this organization and why it chose to work with a platform that markets services such as “free AI for undressing people in photos.” This calls the sincerity of the declared goals into question.
The official ASU Label website contains no information about its team, owners, or partners, which only deepens doubts about the organization’s transparency.
In December 2024, several Clothoff-related websites, including the main Clothoff.io site, posted a statement claiming cooperation with Asulabel and donations to help those affected by AI. The statement listed a website address, asulable.com, and a contact email on the asulabel.com domain. Clothoff.io itself was shut down in late 2024, but the company still operates a number of websites with similar domains.
A website at asulable.com could not be found. However, asulabel.com, the domain of the contact email, leads to the website of an organization that calls itself AisafeUse Label (ASU) or ASU Label. The “ASU Label” website also carries a logo in its upper-left corner that matches the one on Clothoff’s site, further indicating that this is the organization Clothoff was referring to.
According to DomainTools, a service that tracks domain registration records, the ASU Label domain was registered on October 15, 2024. The Internet Archive’s Wayback Machine first captured the ASU Label site on November 13, and Clothoff’s first mention of ASU Label was archived the following month, in December.
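Both dates can be checked independently: the Internet Archive exposes a public Wayback Machine CDX API, and domain registration records can be queried over the public RDAP protocol. The following is a minimal Python sketch of such a check, assuming only the `requests` library; it is an illustration of the general method, not the exact procedure used in this investigation.

```python
import requests

# Domain named in the article; substitute any domain to check.
DOMAIN = "asulabel.com"

# Earliest snapshot recorded by the Internet Archive's Wayback Machine.
cdx = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={"url": DOMAIN, "output": "json", "limit": "1", "fl": "timestamp,original"},
    timeout=30,
)
rows = cdx.json()
if len(rows) > 1:  # the first row is a header
    timestamp, original = rows[1]
    print(f"Earliest archived capture of {original}: {timestamp}")  # YYYYMMDDhhmmss
else:
    print(f"No captures found for {DOMAIN}")

# Registration date, comparable to what DomainTools reports, via the public RDAP redirector.
rdap = requests.get(f"https://rdap.org/domain/{DOMAIN}", timeout=30)
if rdap.ok:
    for event in rdap.json().get("events", []):
        if event.get("eventAction") == "registration":
            print("Registration date:", event.get("eventDate"))
```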
ASU Label states that its main mission is to help people who have been harmed by the dangerous use of neural networks. However, its website gives no specific description of how it does this, nor any direct links to resources for victims.
When asked about its activities, the organization’s representatives said that it provides “direct support” and “assistance to individuals,” but did not specify what this entails. They also described ASU Label as a non-profit organization, without naming its country of registration. Searches of several international registries for “ASU,” “AisafeUse Label,” and “ASU Label” returned no results, although those databases may be incomplete or outdated.
When asked to provide evidence of its registration as a charitable or non-profit organization, ASU Label said it had made a “collective decision not to disclose its legal documents” due to the alleged existence of “numerous enemies” who, it said, were seeking to disrupt the organization’s mission. However, it did not explain who these “enemies” were or what the conflicts were.
ASU Label did not answer questions about the harm caused by deepfake pornography produced on the Clothoff platform. Nor did it disclose who is behind the organization, beyond the general statement that it was founded by “a group of specialists in the fields of artificial intelligence, law, and civic activism.” No additional information was provided about partner or affiliated organizations.
ASU Label also maintains that it is an independent entity: “We are not owned or controlled by any company, including Clothoff. We are a team of like-minded people united for a charitable cause,” one message said.
While the ASU Label email address can be found on Clothoff-related sites, it is not on ASU’s own website and does not appear in public searches. The site itself only has a pop-up form for those who want to join the organization or need help with negative experiences related to AI.
Links to ASU Label can currently only be found on Clothoff’s sites or on the organization’s own website. When both parties were asked about their interactions, Clothoff responded that it occasionally collaborates with ASU Label.
In its comments, Clothoff said that ASU sometimes reaches out for help or with offers to participate in research initiatives; in such cases, Clothoff provides support, both financial and analytical.
ASU Label, for its part, also confirmed the collaboration. According to them, in addition to donations, Clothoff is involved in research related to improving legal regulation in various countries. As an example, they cited joint work on studying the spread of deepfake content in Japan.
However, no information about ASU Label was found in the open Japanese registry of non-profit organizations, so the question of their legal presence remains open.
Clothoff describes its product as “an adult platform designed for safe, consensual exploration of intimate fantasies” and assures users that they “strictly prohibit illegal use.” These statements, however, contradict how the service actually functions.
The app allows users to create nude images of anyone using artificial intelligence, regardless of whether the person in the photo has consented. The most common victims are women, who face psychological trauma, harassment, and reputational damage. In some countries, including the United States, there have been cases of minors using the platform to create images of classmates without their consent.
Clothoff representatives have also taken the position that there are “much more serious problems in the world than photos on the Internet,” adding that in the future such things may even be taken humorously, like a kind of April Fools’ joke.
However, the platform’s activities are banned in a growing number of countries, including the UK and the US. The US, for example, has passed the Take It Down Act, which criminalizes the distribution of intimate images without consent. The UK is preparing a law that would, for the first time anywhere, prohibit not only the creation but also the possession of AI tools designed to produce CSAM (child sexual abuse material); violators would face up to five years in prison.
Commenting on the future of technology, Clothoff stated that “the evolution of AI cannot be stopped,” and “attempts to ban its development are futile.” At the same time, the platform carefully hides information about its owners. None of its numerous websites contain information about the individuals or companies behind the project.
A review of the sites found a software development company named in the footer of one of them. When contacted, the company’s management said it knew nothing about its name appearing on Clothoff’s sites and had no connection to them. The company later asked the platform’s administrators to remove the mention and received the reply: “We have already removed your address. Confirmation is attached.”
Soon afterwards, another name appeared in the same spot in the footer, this time an investment company focused on artificial intelligence. It is at least the fourth company to have been linked to Clothoff since 2023. However, because there is no direct evidence that these entities actually operate the platform, their names are not being disclosed.
Asked about the many references to various companies, Clothoff stated that it has a holding company that “controls several businesses,” but it neither confirmed nor denied any connection to the specific organizations listed on the sites.
According to Clothoff, its parent company is owned by a “group of enthusiast engineers,” but the identities of these people are not disclosed, allegedly because of non-disclosure agreements.
Previous independent investigations have linked the platform to several companies and have also named individuals, including a brother and sister from Belarus, who were likely involved in creating or administering the site.
The official ASU Label website lists various kinds of potential harm that AI can cause, from the spread of misinformation to job losses, algorithmic bias, and dangerous recommendations. Its article on deepfakes gives the example of famous people’s faces being swapped into adult content, which it says can cause serious reputational damage.
Yet its collaboration with Clothoff, a platform known for producing non-consensual deepfake pornography, directly contradicts ASU Label’s stated goal of protecting human rights and supporting victims of AI.
Interestingly, several text analysis tools, in particular GPTZero, Quillbot, and ZeroGPT, estimated a 90–100% probability that the text on the ASU Label homepage was generated by AI. Other pages tested, such as articles about the risks of AI and methods for detecting deepfakes, also scored high, with a 75–100% probability of machine origin.
When asked about this, ASU Label representatives did not deny using artificial intelligence to create the texts and said they saw no problem with it, since AI helps structure the site’s materials.
This sits oddly alongside the organization’s own December publication on the risks of manipulation through artificial intelligence, which warned that AI-generated visual and textual content, including deepfakes and articles, can misinform readers or shape distorted narratives. As that publication noted, such material is often indistinguishable from genuine content, posing a serious threat to information security.
Despite its claims of charity and of protecting the rights of people harmed by AI, ASU Label’s cooperation with Clothoff, a platform that creates non-consensual deepfake porn, looks both controversial and opaque. The lack of clear information about ASU Label’s structure, activities, and registration, together with its use of AI to generate content, undermines trust in its intentions. All of this raises the question of what such initiatives really serve: helping people, or providing cover for dubious activities.
If you have been a victim of image-based sexual abuse, you can find an international list of resources for survivors and victims here.