Video of Mukesh Ambani Endorsing a Gaming Platform Is Fake

June 24, 2024
Screengrabs of the video sent to the DAU

The Deepfakes Analysis Unit (DAU) analysed a video that features Mukesh Ambani, chairperson of Reliance Industries, apparently promoting an online gaming platform. After putting the video through A.I. detection tools and getting our expert partners to weigh in, we were able to conclude that the video was fabricated using A.I.-generated audio. 

The 42-second clip in Hindi was sent to the DAU by a fact-checking partner for verification. It includes subtitles in Hindi for the male voice that can be heard throughout. The video purports to show Mr. Ambani making a case for joining the gaming platform to earn quick and easy money.

In one half of the video frame, Ambani can be seen talking; in the other half, graphics roll out, apparently indicating multiplier returns on a betting amount. A logo resembling that of ICICI Bank, with the rupee symbol and numbers against it, also flashes a few times. Other visuals, such as mounds of cash and expensive cars, feature in the video as well.

The lip-sync is inconsistent at various points in the video. The audio sounds robotic, and the words are enunciated very quickly, making Ambani’s lip movements seem odd. The overall delivery is uncharacteristic of his style of speaking, as seen in his public speeches. His head movements are also off in a few places, giving the impression that the same video clip was looped and patched with an audio track.

After running a reverse image search using screenshots of Ambani’s close-up from the video, we were able to locate this video, published on July 21, 2017, on a YouTube channel that publishes updates from Reliance Industries. On comparing this video with the one received for verification, we were able to assess that Ambani’s clothing and the blue backdrop in both videos are identical. It appears that a clip was extracted from this video to concoct the fabricated video.
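The DAU made this comparison visually. Purely as an illustration of how such a match could also be checked programmatically, the sketch below compares sampled frames from the two clips using perceptual hashing with the OpenCV and imagehash libraries; the file names are hypothetical and this is not the DAU's actual workflow.

```python
# A minimal sketch (not the DAU's actual workflow): compare frames of a
# suspect clip against a reference video using perceptual hashing.
# Assumes hypothetical local files "original.mp4" and "suspect.mp4".
import cv2                     # pip install opencv-python
import imagehash               # pip install imagehash
from PIL import Image

def frame_hashes(path, every_n=30):
    """Return perceptual hashes for every n-th frame of a video."""
    hashes = []
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        idx += 1
    cap.release()
    return hashes

original = frame_hashes("original.mp4")
suspect = frame_hashes("suspect.mp4")

# Small Hamming distances between hashes suggest the suspect frames
# were lifted from the reference footage.
for i, h in enumerate(suspect):
    best = min(h - o for o in original)   # imagehash overloads '-' as Hamming distance
    print(f"suspect frame sample {i}: closest distance to original = {best}")
```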

To discern if A.I. had been used to manipulate the visual and the audio elements in the video under review, we put it through A.I. detection tools. 

The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating that the probability of the audio being real was negligible, at 0.03 percent, suggesting a high proportion of synthetic speech in the clip.

Screenshot of the analysis from Loccus.ai's audio detection tool

Hive AI’s deepfake video detection tool indicated A.I. manipulation during a certain time frame in the video; their audio detection tool also caught A.I. tampering in the audio.

Screenshot of the analysis from Hive AI's deepfake video detection tool

We used TrueMedia’s deepfake detector, which presented substantial evidence of manipulation in the video, suggesting a high probability of A.I. use in its production. It gave a 100 percent confidence score to the subcategory of “AI generated audio detection”, pointing to the use of synthetic audio in the video. However, the tool gave a little over 50 percent to the subcategories that indicate A.I. manipulation in the faces seen in a video; in this case, of course, only Ambani’s face is visible.

Screenshot of the audio analysis from TrueMedia’s deepfake detection tool
Screenshot of the analysis from TrueMedia’s deepfake detection tool

To further understand whether the audio had been produced using generative A.I., we escalated it to our partner IdentifAI, a San Francisco-based deepfake security startup. They examined the authenticity of the audio in the clip through their proprietary audio detection software.

First, they took two real voice samples of Ambani to generate an audio profile of his voice. This audio profile served as a reference representation of his voice, against which they could determine whether the suspected audio is Ambani’s or not. Then, they compared the profile with the audio sample retrieved after removing all the background noise from the video under review. The analysis was visualised using a heat-map.

Screenshot of the heat-map analysis from IdentifAI

The image on the left displays a comparison of a real voice sample of Ambani’s with the audio profile generated by our partner. The image on the right displays the comparison between the audio sample retrieved from the fake video and the audio profile generated by our partner. The patterns in the two images are not identical. Based on this analysis and iterative testing, the team at IdentifAI assessed that the audio being attributed to Ambani in the video was generated using A.I.
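IdentifAI’s detection software is proprietary, but the comparison they describe is conceptually similar to open-source speaker verification, in which a voice is reduced to an embedding and matched against reference samples. The sketch below uses the Resemblyzer library and hypothetical file names for the reference and suspect audio; it is an analogy, not IdentifAI’s method.

```python
# Minimal sketch of a speaker-similarity check (not IdentifAI's software).
# Hypothetical files: two genuine speech samples and the suspect audio.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav   # pip install resemblyzer

encoder = VoiceEncoder()

# Build a reference "voice profile" from known-genuine samples.
reference_wavs = [preprocess_wav("real_sample_1.wav"),
                  preprocess_wav("real_sample_2.wav")]
profile = np.mean([encoder.embed_utterance(w) for w in reference_wavs], axis=0)
profile /= np.linalg.norm(profile)

# Embed the suspect audio (background noise removed beforehand).
suspect = encoder.embed_utterance(preprocess_wav("suspect_audio.wav"))
suspect /= np.linalg.norm(suspect)

# Cosine similarity close to 1.0 suggests the same speaker; a markedly
# lower value suggests the suspect audio does not match the reference voice.
similarity = float(np.dot(profile, suspect))
print(f"cosine similarity to reference profile: {similarity:.3f}")
```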

We also reached out to ElevenLabs to get a further analysis of the audio. They told the DAU that the audio is synthetic, implying that it was generated using A.I. They added that the user who broke their “terms of use” by generating the synthetic audio had been identified as a bad actor and blocked from using their tools.

They also noted that they would use the audio escalated to them by the DAU to further train their in-house automated moderation system to better capture and block the generation of similar content in the future.

We also wanted to gauge if A.I. had been used to alter the lip movements of Ambani in the video. We sought an expert view from a lab run by the team of Dr. Hany Farid, a professor of computer science at the University of California, Berkeley, who specialises in digital forensics and A.I. detection. They noted the presence of artifacts in the video consistent with a lip-sync deepfake; while analysing his speech, they studied Ambani’s mouth movements and noticed odd twitches in the video.
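Lip-sync forensics of the kind Dr. Farid’s lab performs relies on specialised models and trained reviewers. As a rough, hedged illustration of one building block, the sketch below tracks how wide the mouth opens frame by frame with MediaPipe Face Mesh; abrupt, jittery changes in this signal are the sort of temporal artifact an analyst might inspect further. The file name and thresholds are illustrative assumptions, and this is not the lab’s method.

```python
# Rough illustration (not Dr. Farid's lab's method): track mouth openness
# frame by frame; abrupt, jittery changes can hint at lip-sync manipulation.
# Assumes a hypothetical local file "suspect.mp4".
import cv2
import mediapipe as mp         # pip install mediapipe

UPPER_LIP, LOWER_LIP = 13, 14  # inner-lip landmarks in MediaPipe Face Mesh

cap = cv2.VideoCapture("suspect.mp4")
openness = []
with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                     max_num_faces=1) as face_mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            openness.append(abs(lm[LOWER_LIP].y - lm[UPPER_LIP].y))
cap.release()

# Large frame-to-frame jumps in mouth openness (relative to normal speech)
# would merit closer forensic scrutiny.
jumps = [abs(b - a) for a, b in zip(openness, openness[1:])]
if jumps:
    print(f"max frame-to-frame change in mouth openness: {max(jumps):.4f}")
```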

On the basis of our findings and analyses from experts, we can conclude that the words being attributed to Ambani in the video were not uttered by him and the voice which can be heard in the video was generated using A.I.

(Written by Debopriya Bhattacharya with inputs from Debraj Sarkar, and edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.