The Deepfakes Analysis Unit (DAU) analysed a video featuring billionaire Elon Musk, founder of several tech companies, purportedly inaugurating an A.I.-powered stock trading software. After examining the video using A.I. detection tools and escalating it to our expert partners, we concluded that synthetic speech was overlaid on original footage of Musk and combined with interviews of men who appear to be scam artists.
The DAU tipline received a link to a seemingly dubious website, which prominently featured this almost six-minute video promoting the supposed get-rich-quick platform. The video opens with Mr. Musk on a stage addressing a crowd, apparently making a case for investing in the platform.
He is featured for almost two-and-a-half minutes, but with barely any close-ups, making it difficult to assess whether his lip movements match the words heard in the audio track, delivered in a voice that sounds like his but scripted, rushed, and without inflection.
A string of interviews with four different men follows over the next three minutes: the first poses as a top executive leading the supposed financial platform, followed by three others identifying themselves as beta testers of the platform. Each of them vouches for the product and the returns it promises.
A logo resembling that of Neuralink, a neurotechnology company founded by Musk, is visible in the bottom right corner of the video frame throughout the length of the video. A visible inconsistency in the clip featuring the man identifying as the top executive was that on the couple of occasions there was a “Neuralink” reference, either his audio was muted or his lip movements seemed odd. The backdrop in his clip also seems artificial and unlike that of someone supposedly occupying a top management position.
The lip synchronisation for two of the three men identified as beta testers in the video appears fine. However, for the one featured toward the end the lip-sync seems imperfect, possibly because of a rendering or compression problem in the video.
We used a combination of keyword searches and reverse image searches on screenshots from the video to track down the origin of the different clips seen in it.
Musk’s clip could be traced to this video uploaded on Dec. 3, 2015 to the official YouTube channel of the University of Sorbonne in France. Musk’s clothing, body language, and backdrop are identical in the original and the manipulated video. However, the visuals displayed on the projector seen in the two videos are not identical, nor is a Neuralink logo visible anywhere in the original. The audio track is different too, as Musk talks about climate change in the original video without any mention of an A.I.-powered trading software or Neuralink.
We could not trace the original videos for the other men featured in the video. We also looked up the website featuring the video that was sent to the tipline; it makes no mention of the supposed top executive or any of the other men. There are several red flags on the website, such as partner logos resembling but not identical to those of top tech companies, and bold text on the landing page that reads: “smart investing that makes you $1500 in 5 hours and cures poverty”.
To discern whether A.I. had been used to manipulate the visuals and the audio track in the video, we put it through A.I. detection tools.
The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating a 0.52 percent probability of the audio being real, suggesting a likelihood of synthetic speech in the video.
Hive AI’s deepfake detection tool detected A.I. manipulation in several portions of the video, most notably in the faces of Musk and one of the beta testers, while its audio tool picked up hints of A.I.-generated audio in some bits.
To get a further understanding of the A.I. manipulation in the video, we ran the video through TrueMedia’s deepfake detector which overall categorised the video as having “substantial evidence of manipulation”. The tool gave a 100 percent confidence score to the subcategory of “AI-generated audio detection”, a 91 percent confidence score to “foundational features”, and an 85 percent confidence score to “audio analysis” — all indicating a high probability of the use of A.I. in the production of the audio track.
We also put the video through the A.I. speech classifier of ElevenLabs, a company specialising in A.I. voice research and deployment, to get another analysis on the audio. It returned results which indicated that there was very little possibility that the audio track used in the video was generated using their software.
Subsequently, we reached out to ElevenLabs to get a confirmation on the results from the classifier. They told the DAU that the audio is synthetic, meaning it was made using A.I. They added that the user who broke their “terms of use” by generating the synthetic audio had previously been identified as a bad actor through their in-house automated moderation system and blocked from their platform.
To get another expert view, we escalated the video to Dr. Hany Farid, co-founder of GetReal Labs, and his team, who specialise in digital forensics and A.I. detection. They noted that Musk’s part in the video definitely carries a different audio stream that was dubbed without any lip-syncing. His audio does appear to be synthetic, and fakes of him are some of the easiest to generate, they added.
The team also mentioned that none of the four people featured in the video after Musk’s clip is a deepfake, and that at least two of them appear to be paid spokespersons. They shared this report from a financial reviews and watchdog website, where the men were recognised as paid actors.
On the basis of our findings and the analysis from experts, we have assessed that the visuals of Musk were overlaid with synthetic audio and combined with fake reviews to fabricate the video. It is yet another instance of an elaborate attempt at concocting a financial scam.
(Written by Debraj Sarkar and edited by Pamposh Raina)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.