The Deepfakes Analysis Unit (DAU) analysed a video apparently featuring Mukesh Ambani, chairperson of Reliance Industries, and Arnab Goswami, Republic TV editor-in-chief, promoting a dubious financial investment program. After putting the video through A.I. detection tools and getting our expert partners to weigh in, we were able to establish that the video has been manipulated with synthetic audio.
The three-minute-and-32-second video in English was escalated to the DAU tipline for assessment multiple times. It was shared in video format as well as through a link to a Facebook post which carried the video. The video was uploaded on Facebook on Dec. 23, 2024; until last week it had garnered around 600,000 views. The video is not publicly visible anymore.
In December the DAU had debunked a similar video which was fabricated by patching together A.I.-generated audio with two separate video clips, one featuring Mr. Ambani and the other featuring Mr. Goswami. The fake video made it appear that the two prominent personalities were endorsing some get-rich-quick scheme and encouraging the public to invest in it.
The new video opens the same way as the previous one with Goswami apparently speaking to the camera from a studio-like setting. However, it differs in terms of the visuals and the details conveyed through the accompanying audio track.
The voice recorded over Goswami’s visuals announces that Ambani had hidden a supposed investment program from the public, one that purportedly has the potential to “eradicate poverty” and “reduce inflation rates” in India. The audio track also mentions that the program was launched by Shaktikanta Das, the recently retired governor of the Reserve Bank of India (RBI). The previous Goswami-Ambani video claimed that the investment project was launched by Elon Musk, the world’s richest man.
Mr. Das has been linked to content peddling financial scams in previous instances as well. Last month, the DAU debunked one such video that had been produced by combining footage from an interview of his with an A.I.-generated audio track. The narrative concocted through that video suggested that Das and the Indian government had hidden information about a supposed income-generating program.
The new Goswami-Ambani video follows a similar template: it purports to reveal a lucrative financial investment program that had so far been kept secret. The practice of misleading viewers by claiming that a purported financial program or an unheard-of remedy is backed by the Indian government seems to be a rising trend in scam videos recently debunked by the DAU, such as this and this.
Ambani’s tie as well as the backdrop in the video helped us establish that the same video track was used to fabricate another video, which the DAU debunked in July. In that video A.I. voice was used to make it seem that he was endorsing a dubious gaming platform.
The sequence of the visuals in the new Goswami-Ambani video is the same as in the old one. Ambani’s video track, which follows Goswami’s, forms the bulk of the video content. The male voice recorded over his video track is distinct from that recorded over Goswami’s segment.
The voice makes some claims that are identical to those made in the older Goswami-Ambani video, such as the supposed financial program being run on A.I.-driven algorithms; the maximum monthly income that can be earned after the initial investment; the number of people who can participate in the program; and also that there is a limited window available to register for the program.
The audio track also suggests that Ambani has made money using the program, which was kept secret because it was being tested before the supposed public launch. The audio even claims that the government has allowed tax exemption on the profits from the scheme.
The text graphics in the old and the new video are almost identical. The use of black, red, and green fonts on a white background is the same, as is their placement at the bottom of the video frame; the only difference is the initial investment amount mentioned in the graphics. Goswami’s full name appears in the beginning of the video for a few seconds, right above the main graphics, which wasn’t the case in the older video. No other logos are present in this video, nor were they visible in the other video.
This video too ends with a black screen directing viewers through text graphics to click on a link, which is only visible in the body of the Facebook post carrying the video. For the purpose of this report, we clicked on the link, which led to a website that appears to have been designed to emulate the official website of Times of India, an Indian English-language newspaper. (We would like to caution our readers against clicking on suspicious links.)
The masthead of Times of India and the color scheme of their website have been used in the copycat website, which is replete with content promoting scammy financial schemes such as the one being peddled through the video. Prominent public figures, including anchors, politicians, and business leaders, are linked to the supposed schemes. Dr. Manmohan Singh, former Indian Prime Minister, is prominently featured on the website. The DAU also debunked a video manipulated using A.I. voice to make it appear that he was promoting one such financial scheme.
The voice attributed to Goswami in the video sounds somewhat like his, but it is devoid of the pitch and accent heard in his news shows and recorded videos available online. The delivery sounds monotonous and fast-paced. The voice recorded over Ambani’s video track sounds much different from his voice as heard in his recorded speeches and interviews. It sounds scripted, lacks natural intonation, and has a peculiar accent which does not match his natural accent.
The overall video quality is poor. Visual oddities are more noticeable in the segment featuring Goswami, which is shorter and features zoomed-in frames. His lip-sync is imperfect, and his mouth movement resembles that of a puppet at one instance in the video. His teeth are not visible for the most part except for a few frames when a white line appears above his lower lip.
The frames featuring Ambani keep zooming in and out, making it difficult to observe the alignment of his lip movements with the audio track. In the frames zoomed-in on Ambani's face, his mouth region appears especially blurry and an additional set of teeth seem to appear when he opens his mouth while speaking. In the zoomed-out frames, however, his whole face becomes blurry.
We undertook a reverse image search using screenshots from the video and established that unrelated clips from two separate videos were used to stitch together this manipulated video.
Ambani’s clips were traced to this video, published on July 21, 2017, from a YouTube channel that publishes updates from Reliance Industries. Goswami’s video was traced to this video, published on Dec. 6, 2024, from the official YouTube channel of Republic World, an India-based English news channel. Neither Ambani nor Goswami mentions any investment program in the original videos featuring them.
The clothing and body language of Ambani and Goswami in the manipulated video and their respective original videos are identical. The background of Goswami’s clips also matches that in the original video featuring him; however, the background and foreground in some clips featuring Ambani are different from his original video. The text in the backdrop has been edited out and the color of the podium is different.
The Republic TV logo and text graphics in the original video of Goswami are not part of the manipulated video; the logos of Reliance and its subsidiary Jio, both featured in the original video of Ambani, are missing from the doctored video.
To discern the extent of A.I. manipulation in the video under review, we put it through A.I. detection tools.
The voice tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, indicated that there is a 99 percent probability that the audio track in the video was A.I.-generated.
Hive AI’s deepfake video detection tool indicated that the video was manipulated using A.I. It pointed out markers of manipulation in multiple frames featuring Ambani, but it did not flag the segments featuring Goswami. Their audio detection tool indicated that most of the audio track was manipulated using A.I., except for three 10-second segments: one in the beginning and two consecutive segments toward the end.
The deepfake detector of our partner TrueMedia suggested substantial evidence of manipulation in the video. The A.I.-generated insights offered by the tool provide additional contextual analysis by stating that the transcript reads like a fraudulent scheme or a scam, filled with exaggerated claims of financial returns and creating an urgency to sign up.
In a breakdown of the overall analysis, the tool gave an 84 percent confidence score to the “face manipulation detector” subcategory that detects potential A.I. manipulation of faces, as in the case of face swaps and face reenactment. It gave a 64 percent confidence score to the “video facial analysis” subcategory that analyses video frames for discrepancies in facial features.
The tool gave a 100 percent confidence score to the “A.I.-generated audio detector” subcategory as well as the “voice anti-spoofing analysis” subcategory. The former detects A.I.-generated audio and the latter analyses if the audio was generated by an A.I. audio-generator.
For a further analysis on the audio track from the video we put it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The classifier returned results as “very likely”, indicating that it is highly probable that the audio track in the video was generated using their software.
For expert analysis, we shared the video with our detection partner ConTrailsAI, a Bangalore-based startup with its own A.I. tools for detection of audio and video spoofs.
The team ran the video through audio as well as video detection models, and the results indicated high confidence of manipulation in the audio track. The video track was found to be manipulated as well, but with lower confidence compared to the audio track.
In their report, they noted that considerable evidence of A.I.-generation was found in the video overall. They added that voice clones of Goswami and Ambani were used, and that a lip-sync technique was used to manipulate the video.
To get another expert analysis on the video, we escalated it to Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL). They used 22 deepfake detection algorithms to review the video as well as seven deepfake detection algorithms to analyze the audio track; two human analysts trained to detect deepfakes also reviewed the video overall.
Two of the 22 models gave a higher probability of the video being fake, while the remaining 20 indicated a lower probability. Six of the seven audio models gave a higher probability of the audio being fake, while only one gave a lower probability.
The team’s observations on the voices and lip movements of the speakers corroborated our analysis. They pointed out that Ambani’s neck and ears change shape and size several times throughout the video, which as per them is common in artificially generated samples.
They also highlighted a visual glitch in the beginning of the video track — flickering of the video frame which often occurs during a television broadcast — suggesting that this may have been added to make detection of artificial manipulations more challenging.
The team concluded that the video is likely inauthentic and generated through artificial intelligence.
On the basis of our findings and expert analyses, we can conclude that the video featuring Goswami and Ambani was manipulated with A.I. voice. It appears to be yet another instance of a scam being peddled by falsely linking it to a popular news anchor and a top business leader.
(Written by Debopriya Bhattacharya and Debraj Sarkar, edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on the tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.