Some People Actually Kind of Love Deepfakes

A month ago, the consulting company Accenture presented a potential client with an unusual, attention-grabbing pitch for a new project. Instead of the usual slide deck, the client saw deepfakes of several real employees standing on a virtual stage, offering perfectly delivered descriptions of the project they hoped to work on.

“I wanted them to meet our team,” says Renato Scaff, a senior managing director at Accenture who came up with the idea. “It’s also a way for us to differentiate ourselves from the competition.”

The deepfakes were generated—with employees’ consent—by Touchcast, a company Accenture has invested in that offers a platform for interactive presentations featuring avatars of real or synthetic people. Touchcast’s avatars can respond to typed or spoken questions using AI models that analyze relevant information and generate answers on the fly.

“There’s an element of creepy,” Scaff says of his deepfake employees. “But there’s a bigger element of cool.”

Deepfakes are a potent and dangerous weapon of disinformation and reputational harm. But that same technology is being adopted by companies that see it instead as a clever and catchy new way to reach and interact with customers.

Those experiments aren’t limited to the corporate sector. Monica Arés, executive director of the Innovation, Digital Education, and Analytics Lab at Imperial College Business School in London, has created deepfakes of real professors that she hopes could be a more engaging and effective way to answer students’ questions outside the classroom. Arés says the technology has the potential to increase personalization, provide new ways to manage and assess students, and boost student engagement. “You still have the likeness of a human speaking to you, so it feels very natural,” she says.

As is often the case these days, we have AI to thank for this unraveling of reality. It has long been possible for Hollywood studios to copy actors’ voices, faces, and mannerisms with software, but in recent years AI has made similar technology widely accessible and virtually free. Besides Touchcast, companies including Synthesia and HeyGen offer businesses a way to generate avatars of real or fake individuals for presentations, marketing, and customer service.

Edo Segal, founder and CEO of Touchcast, believes that digital avatars could be a new way of presenting and interacting with content. His company has developed a software platform called Genything that will allow anyone to create their own digital twin.

At the same time, deepfakes are becoming a major concern as elections loom in many countries, including the US. Last month, AI-generated robocalls featuring a fake Joe Biden were used to spread election disinformation. Taylor Swift also recently became a target of deepfake porn generated using widely available AI image tools.

“Deepfake images are certainly something that we find concerning and alarming,” Ben Buchanan, the White House Special Adviser for AI, told WIRED in a recent interview. The Swift deepfake “is a key data point in a broader trend which disproportionately impacts women and girls, who are overwhelmingly targets of online harassment and abuse,” he said.

A new US AI Safety Institute, created under a White House executive order issued last October, is currently developing standards for watermarking AI-generated media. Meta, Google, Microsoft, and other tech companies are also developing technology designed to spot AI forgeries in what is becoming a high-stakes AI arms race.

Some political uses of deepfakery, however, highlight the dual potential of the technology.

Imran Khan, Pakistan’s former prime minister, delivered a rallying address to his party’s followers last Saturday despite being stuck behind bars. The former cricket star, jailed in what his party has characterized as a military coup, gave his speech using deepfake software that conjured up a convincing copy of him sitting behind a desk and speaking words that he never actually uttered.

As AI-powered video manipulation improves and becomes easier to use, business and consumer interest in legitimate uses of the technology is likely to grow. The Chinese tech giant Baidu recently developed a way for users of its chatbot app to create deepfakes for sending Lunar New Year greetings.

Even for early adopters, the potential for misuse isn’t entirely out of mind. “There’s no question that security needs to be paramount,” says Accenture’s Scaff. “Once you have a synthetic twin, you can make them do and say anything.”