From The Straits Times


I’m no Taylor Swift, so why would anyone create a deepfake pornographic video or photo of me?

That is what most of us regular women would think when confronted with the topic of generative artificial intelligence (AI) and deepfake technology. It would be, like the title of Taylor’s famous hit, something that could only happen in our wildest dreams. Yet there has been a growing number of reported cases where women, especially younger women, find themselves the subject of such explicit content.

In New Jersey, a 15-year-old girl discovered that nude images of her had been created using an AI application and distributed to others on social media. According to a lawsuit filed in February this year, the girl accepted an Instagram friend request from a male classmate, who then downloaded a fully clothed photo of her and used it to generate those images. The classmate is believed to have used a programme called ClothOff to create the nude images.

The truth is that you don’t have to be a celebrity to be a victim. Anyone can be a target of pornographic deepfakes, but women are undeniably among the most vulnerable groups.

Revenge porn, for instance, is a form of digital abuse or online harm that technological advancements have made easier for perpetrators. It is not easy to tell that hyperrealistic sexually explicit images or videos are generated by AI, and the consequences of having such content distributed online may be irreparable. Apart from posing risks to the victim’s personal privacy, reputation, mental and emotional health, and safety, having such content on the Internet also perpetuates harmful stereotypes in society.

Dealing with deepfakes

If you find yourself targeted by a deepfake, immediate action is crucial. Firstly, don’t panic. Reach out to trusted family or friends for support. Document the fake content, and gather evidence by taking screenshots or recordings to report the incident. Most social media platforms have policies against non-consensual intimate imagery and impersonation, so reporting the content to the platforms and requesting a takedown is essential.

You should also lodge a police report. Finally, seek advice from a lawyer or from advocacy organisations specialising in online harassment and privacy rights. Shecares@SCWO is a local support centre for targets of online harms; you can reach out to it for counselling services and pro bono legal assistance. As for legal remedies in Singapore: while laws and policies exist to address cybercrimes, including revenge porn and harassment, enforcing these laws in the context of deepfakes remains a challenge.

The Protection from Harassment Act 2014 (POHA), which criminalises stalking and harassment, may not always capture instances of deepfakes, given the specific requirements to prove an offence under the Act. One type of offence requires the perpetrator to have the “intent to cause harassment, alarm or distress”, which may be even more difficult to prove when perpetrators are often anonymous.

Even outside of Singapore, in the New Jersey case, the police were apparently unable to charge anyone over the nude photos. The girl turned to specific US laws to seek relief. These include laws that allow an individual to recover US$150,000 (S$202,000) and litigation costs if nude pictures of that person are disseminated without consent, as well as remedies for those victimised by child pornography, for invasion of privacy and intrusion on seclusion, negligent infliction of emotional distress, and endangering the welfare of children.

While Singapore does not have such specific legislation providing compensation for victims, the POHA offers an avenue for civil recourse. The victim of a POHA offence may sue any individual or entity, and damages may be awarded in the victim’s favour if the court is satisfied that the POHA has been contravened.

If the victim simply wishes for the offensive deepfakes to stop circulating online, she could seek a stop publication order under the POHA. She would have to satisfy the court that the relevant “statement”, which is defined in the Act to include an image (moving or otherwise), was published by the respondent and is false. The latter may be a challenge, and may require expert evidence to show that the deepfakes were created by generative AI.

Be that as it may, any legal process is expensive, time-consuming and intimidating. It is common for victims not to want to spend the time and money going to court and replaying a traumatising experience. Many victims may not even know the identity of the perpetrator, which they need in order to initiate a lawsuit.

The reality is that the rapid evolution of deepfake technology is currently outpacing legislative responses in Singapore. As we await further updates to the law, I would start by taking the following steps to stay safe online:

  • Enable Privacy Settings: Utilise privacy settings on social media platforms to limit who can view your content, tag you in posts and send you private messages. Regularly review and adjust these settings to maintain control over your online presence.
  • Secure Your Accounts: Use strong, unique passwords for all your online accounts, and enable two-factor authentication wherever possible. Regularly update your passwords to prevent unauthorised access.
  • Control Your Digital Footprint: Be mindful of the information you share online and who has access to it. Avoid posting sensitive or compromising content that could be manipulated into deepfakes.
  • Stay Vigilant: Be wary of unsolicited messages or friend requests from unknown individuals. Always verify the authenticity of online interactions before sharing personal information or engaging with content.

As much as the era of generative AI is an exciting time, dangers loom and lurk ahead. Until the law catches up with the harms that generative AI poses to women, there is no doubt that we need to stand guard.

Kelley Wong is an associate at Singapore law firm Morgan Lewis Stamford