From The Straits Times

Artificial intelligence (AI)-generated advertising? Jamie Yeo is all for it. Earlier this year, the Singaporean actress, model, former radio DJ and founder of Lula J Jewelry, who relocated to the UK with her family in 2022, inked a “deepfake” deal with Hugosave, a financial technology firm.

The deal gives Hugosave permission to use a digitally manipulated likeness of her to market its content. According to a July 26 post on Hugosave’s Facebook page, the company has partnered with Jamie to produce a series of AI-generated financial education videos to promote financial literacy among Singaporeans.

The process of creating “deepfakes” is relatively quick and simple. Jamie stands in front of a green screen for a few hours, during which her facial expressions and body movements are captured. She then spends a few more hours in a recording studio to capture her voice. An AI programme combines this visual and audio footage to create digital avatars that can be “manipulated” to say anything.

Here, the former Growing Up star tells us her reasons for embracing this technology and why she thinks it’s “safe to bet on change”.

A post shared by Jamie Yeo (@iamjamieyeo)

What motivated your decision to sell your likeness, allowing a company to create videos and other content using your image?

Jamie Yeo (JY): First, it was because the job was for Hugosave, a Singapore-based financial institution registered with the Monetary Authority of Singapore. I knew they would be legit. Second, I was intrigued about the process and wanted to be a part of it. Of course, I had some concerns:

  • Exploitation of my image: Using my image and voice beyond the agreed engagement, because there are people out there who will take your content and boost or use it without your consent for promotions that haven’t been agreed upon. 
  • Misrepresentation: Words or actions attributed to me that I’d never say or do.

In saying this, you might wonder why I agreed to the deal. I had these few rules:

  • Clear contracts: Clarity between me and the client around how and where my image and voice would be used and what message would be put out. 
  • Strong rule of law: Working in markets where I trust that the legal system, even if not yet properly developed to manage AI risks, is at least fair and robust. 
  • Alignment with the brand: Working with organisations with high standards (regulated, ideally). 
  • Trust: Ultimately, my agreeing to the deal was a leap of faith, so I needed comfort that the counterparty was ethical and would protect and respect my image and voice. In Singapore, it’s easier, as it’s a comparatively small place and I know many people, so the risk of my trust being abused is lower (especially since the reputational consequences in a small market would be high). 

How does the AI company utilise your likeness to create content? 

JY: I put in two hours of recording in the studio in Singapore, presenting and reading scripts filled with random lines to the camera, in different “styles”.

1. Presenting in a natural “me” way, moving my hands and head naturally, smiling naturally.

2. Presenting without moving my head or showing any facial expression, but moving my hands naturally.

3. Presenting without moving my head or my body, and being almost deadpan. No smiling.

I also spent an hour in the studio having my voice recorded. I read about 12 to 15 A4-sized pages of random scripts.

The AI company only uses this footage to create content when Hugosave commissions them to.

Can you share the details about the contract you signed?

JY: My contract with Hugosave is confidential, but essentially, the company owns footage of me as well as the right to repurpose this footage using AI. The AI company that works with them does not own the footage.

AI is already a part of our lives and is here to stay. For some in my industry, AI can make things more efficient and allow production companies and others to do more with less.

I understand that AI also presents risks to writers, film crew, and those of us in front of the camera or mic. However, I’d rather focus on how I can work with, and not against, the technology. For instance:

  • Pricing based on a balance between less work required and the value of extended use and rights to my image and likeness. 
  • Explaining to clients where AI might not connect, for example, interpersonal scenes or dialogues that might lack organic authenticity (like those in video games). 

A post shared by Jamie Yeo (@iamjamieyeo)

Do you have the freedom to select the projects you want to be associated with, or is the AI company solely responsible for making those decisions?

JY: In my case, Hugosave owns the footage and has the rights to repurpose it to create content related to its product. The AI company doesn’t own the footage; Hugosave just commissions them to create content using my digital likeness.

Have you considered the possibility of your image being used to endorse products or ideas that you don’t support? How do you think you’d address such situations? 

JY: Yes, I have considered that possibility, but I trust Hugosave to safeguard the footage they own. And anyway, the possibility already exists for anyone who has appeared in a video or image.

Deepfakes are here and of course it’s a concern, especially given the volume of footage that’s already out there. If that happens to me, I’ll deal with it, for example, by releasing a statement saying that I didn’t give permission for the endorsement and directing people to the original footage or scripts. 

What do the full image rights entail and what level of control do you have over how your likeness is used by the AI company? 

JY: Hugosave will only be using my likeness for videos that pertain directly to their product. The AI company doesn’t own my likeness and will not be able to use it in any way, unless the client, Hugosave, tells them they want to use it in their content.

During the recent actors’ strike in Hollywood, there were concerns about AI technology potentially threatening livelihoods by acquiring likeness rights. What are your thoughts on the matter?

JY: Although AI does pose a threat, I prefer to think in terms of how we can respond and adapt. For instance, 24-hour news cycles were meant to threaten long-form journalism, but the abundance of podcasts and documentaries suggests that the appetite exists for both. Music videos were meant to kill radio. We still read books even though we have smartphones and tablets loaded with apps. 

I am an optimist, and I have faith that we’ll always crave the authenticity and imperfection of raw human performances and words.

Yes, AI will get better at mimicking us, but we still go to music festivals and theatres in the era of Spotify and Netflix. We therefore need to find a balance. With walkouts and other issues, I understand that balance isn’t there yet because the technology is fairly new. However, it’s safe to bet on change, and I’d rather try to work with, rather than against, change.