
Taylor Swift deepfakes show need for remedy of AI abuse


Key Takeaways

  • Deepfakes created with AI are proliferating rapidly, with a reported 900 percent year-over-year increase as of 2021. The ease of manipulation suggests the genie is already out of the bottle.
  • Microsoft Designer, a part of Microsoft Copilot, was used to create explicit deepfakes of Taylor Swift. Workarounds and loopholes enable the creation of non-consensual AI-generated sexual images.
  • The need for AI regulation is urgent, as companies have moved too swiftly without considering the potential for abuse. Regulating AI is necessary, but it’s challenging to define and enforce laws that address non-consensual imagery effectively.


In the rapidly evolving landscape of artificial intelligence (AI), a deeply concerning phenomenon has emerged. AI-generated “deepfakes” have recently come into the spotlight because of Taylor Swift. These sophisticated, AI-manipulated images and videos are unsettlingly realistic, allowing for the creation of explicit content without the consent of the individuals featured. The issue reached a critical point when a user on X (previously known as Twitter) distributed explicit deepfakes portraying Swift.

Artificial intelligence is powerful, and I’m a proponent of using powerful technology for good. AI image generators are controversial for a whole host of reasons, especially concerning the origin of the data they were trained on. With that power comes the potential for abuse, and explicit images of Taylor Swift, generated with Microsoft Designer (which is also part of Microsoft Copilot), were seen 45 million times before the account that shared them was suspended. There are, of course, genuine uses for AI image generators, such as generating images for a Dungeons and Dragons campaign or creating your own wallpapers.

Ultimately, one thing remains crystal clear: the need for AI regulation is imperative and urgent.


Deepfakes made with AI are going to become more common

Taylor Swift is just the latest high-profile instance

Nasir Memon, a computer science professor at New York University, noted a staggering 900 percent annual increase in deepfake videos as of 2021. This surge occurred even before the widespread adoption of generative AI, which at that time required significantly more human guidance. Now, with advancements in technology, it has become alarmingly simple to modify tools like Stable Diffusion to bypass NSFW content filters, allowing unrestricted creation of virtually any content. This ease of manipulation suggests that, in many ways, the proverbial genie is already out of the bottle.

With that said, the most surprising thing about the Taylor Swift deepfakes is that they weren’t even done with a locally run tool. No, according to 404 Media, they were generated using Microsoft Designer as a part of Microsoft Copilot. Worse still, 404 Media found that those images were made in a “group dedicated to making non-consensual AI generated sexual images of women.” While Designer is supposedly blocked from generating images of existing people, a loophole allowed a user to generate an image of Taylor Swift by asking it to draw “Taylor ‘singer’ Swift.”

These workarounds are enabled through the same methods that allow bad actors to jailbreak a large language model (LLM) tool like ChatGPT. Rather than explicitly asking the AI to draw something, you talk around what you want, describing the result rather than the action. In this case, users would ask Designer to draw images that clearly depicted a sexual scenario without using terms that would indicate one, thereby circumventing the protections built into Designer. With that in mind, it’s nearly impossible to moderate and restrict an AI tool with keywords alone: language will always be more creative than an AI can ever be.
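To illustrate why keyword-only moderation falls short, here is a minimal, hypothetical sketch in Python of a naive blocklist filter. The blocked phrases and the helper function are assumptions for illustration only, not Microsoft's actual moderation logic; the point is simply that the “Taylor ‘singer’ Swift” phrasing reported by 404 Media, or any prompt that describes its target instead of naming it, slips straight past a verbatim substring check.

```python
# Hypothetical sketch of a naive keyword-based prompt filter.
# This is NOT Microsoft's actual moderation logic; it only illustrates
# why simple substring blocklists are easy to talk around.

BLOCKED_PHRASES = [
    "taylor swift",   # block prompts naming a real person
    "explicit",       # block an obviously flagged keyword
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked phrase verbatim."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

# A direct request is caught by the blocklist...
print(is_allowed("draw Taylor Swift on stage"))              # False

# ...but inserting a word between the first and last name, as 404 Media
# reported, defeats a verbatim substring match entirely.
print(is_allowed('draw Taylor "singer" Swift on stage'))     # True

# Likewise, describing the desired result instead of naming it avoids
# flagged keywords altogether, which is the jailbreak pattern above.
print(is_allowed("draw the pop star who sang Shake It Off"))  # True
```

Real systems layer semantic classifiers on top of blocklists, but the same cat-and-mouse dynamic applies once a prompt describes rather than names what it wants.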

If you want an idea of just how terrifying this phenomenon is, the video at the start of this section was generated with Stable Video Diffusion, from a photo that was itself generated with Stable Diffusion. And in the last few weeks, we’ve also seen fake Joe Biden robocalls, in which voters were encouraged to “save” their vote for the November election rather than casting it during the primary elections.

Regulating AI is the only way forward

But even then, how?

When it comes to AI, companies have arguably moved too swiftly. While it was pretty funny to watch Google’s own Bard launch and promptly refer to the months of the year as “January, Febuary, Maruary, Apruary…”, it’s clear that many companies have adopted a “launch first and ask questions later” model. With ChatGPT especially, there’s been a constant back-and-forth battle to restrict illegal content. The GPT-4 whitepaper goes into greater detail, but the lengths OpenAI has gone to in order to prevent abuse are staggering.

However, it’s clearly not enough. GPT-4, as a primarily text-based model, can’t do much more harm than any normal user could. Image generators are a different story, though, as they enable anyone to create photorealistic images in seconds. You’ll never be able to account for every way an effectively infinite number of users will try to brute-force an AI tool into doing something it shouldn’t. With the likes of Stable Diffusion out in the wild, even if OpenAI or Microsoft found a silver bullet to prevent this sort of abuse, it wouldn’t stop people with powerful enough hardware from doing it themselves anyway.

While regulating AI is a clear necessity to ensure that every player falls in line, it’s hard to see how regulators even could. While revenge porn is illegal in many countries, things get murky when an image is technically not of a specific person but of that person’s likeness. Non-consensual imagery of all kinds is heinous, but even figuring out how such a law could be written in an all-encompassing way is problematic. What’s more, if you only punish the outcome rather than prevent it from occurring in the first place, these efforts will have little impact on underground forums dedicated to non-consensual image generation.

Nevertheless, U.S. lawmakers on both sides of the aisle are now listening. In a statement on Friday, White House press secretary Karine Jean-Pierre confirmed that the administration was investigating the matter. “We are alarmed by the reports of the circulation of false images. We are going to do what we can to deal with this issue,” she said.

X has already blocked searches for “Taylor Swift”

For now, the best defense we have against the sharing of AI deepfakes is to pressure the social media platforms the images are shared on. While it may seem a bit extreme for X (Twitter) to block searches for “Taylor Swift,” doing so should help stifle the spread of those images on the platform, at least for the time being. And if other platforms implement a policy of automatically removing those images, that will help prevent their further spread. Granted, groups that generate deepfakes will continue to exist, but anything that slows their distribution is ultimately a good thing.

Preventing searches and stifling the spread of already-existing images can only do so much, but that’s about all that can be done consistently right now. Even plugging gaps in tools like Designer will only go so far, and there are other ways to generate non-consensual imagery if someone is truly determined. This is a ridiculously difficult situation to solve, and a combination of regulation, platform compliance, and measures taken by AI companies is very likely the best remedy we’ll land on.

