Video of Nirmala Sitharaman Promoting Income Generating Platform Is Fake

February 25, 2025
Manipulated Media/Altered Media
Screengrabs of the video analysed by the DAU

The Deepfakes Analysis Unit (DAU) analysed a video that apparently shows Indian Finance Minister Nirmala Sitharaman endorsing an investment platform. After running the video through A.I. detection tools and getting our expert partners to weigh in, we were able to conclude that the video was manipulated using A.I.-generated audio.

A Facebook link to the three-minute-and-30-second video in English was sent to the DAU tipline for assessment. The video, embedded in a post, has garnered more than one million views since it was uploaded on Feb. 19, 2025. The profile details of the video uploader suggest that the account caters to “financial service”.

The video features Ms. Sitharaman seated with a decorative wooden panel in the backdrop and a table lamp to her right. It appears that she is being interviewed by someone, as she is not looking into the camera; instead, she is focusing on something or someone else. No other person is seen in any of the frames, and no media house logo is visible either.

A female voice recorded over her video track endorses some “Quantum A.I.” platform for its promise of financial returns of “up to 80,000 rupees a day” on an initial investment of “21,000 rupees”. It claims that “50,000 Indians” have earned “passive income” from the supposed platform, which has been created to “strengthen the financial position of Indian citizens.”

Text graphics placed at the bottom of the video frame highlight the same investment amount; however, the monetary value of the returns differs from that mentioned in the audio.

The voice attributed to Sitharaman sounds somewhat like hers; however, it has a peculiar accent which does not match her natural accent. It is also devoid of the pitch and characteristic pauses that can be heard in her recorded interviews and speeches. The overall delivery sounds scripted and robotic, lacking the natural pauses between sentences that mark human speech.

The video framing alternates between a medium shot and a medium close-up with jump cuts. Many inconsistencies are visible around Sitharaman’s mouth region. Her lip movements are mostly in alignment with the accompanying audio; however, in some frames her lips appear to be moving even when the audio track pauses momentarily.

Her teeth seem blurry for the most part; sometimes they blend with the lips, and a brown patch appears in place of the teeth. At one point, the area below her nose all the way down to her chin seems to shake oddly and appears to have a lighter skin colour compared to the rest of her face.

The subject’s mouth area seems artificially placed onto her face

The supposed platform is peddled as “fully automated” and one that “uses a smart algorithm” for profit analysis and “making optimal trades at the right time”. The audio track warns viewers to stay away from “fraudulent schemes” and assures that the purported financial project has “official” support and provides “verified facts”. It also claims that people would not need to rely on “government aid” or “worry about financial instability” anymore.

There is a build-up of urgency before the video ends abruptly. The voice implores the audience to register, claiming that the “window of opportunity is closing due to high demand”.

In December, the DAU debunked a similar video featuring Sitharaman and Shaktikanta Das, former governor of the Reserve Bank of India. The video was manipulated using synthetic audio to peddle yet another supposed lucrative investment platform to scam people. And that video too claimed that the purported scheme had government endorsement.

We have observed that the figure of “21,000 rupees” mentioned in this video has been frequently used as the “initial investment” amount in financial scam videos, such as the ones debunked by us through this and this report. This points to similarities in the scripts used to produce such fake content.

We undertook a reverse image search using screenshots from the video and established that the clips of the interviewer were removed from the original video, and the ones featuring Sitharaman were used to stitch together this manipulated version.

Sitharaman’s clips were traced to this video published on Feb. 2, 2025 on the official YouTube channel of India Today, an India-based English news channel.

The clothes, body language, and backdrop of Sitharaman in the manipulated video and the one we traced are identical. There are no text graphics in the original video, nor is there any mention of a financial scheme; the content is in English. The frames used in the manipulated video are more zoomed-in and do not carry the India Today logo, which appears at the top and bottom right corners of the original video.

To discern the extent of A.I. manipulation in the video under review, we put it through A.I. detection tools.

The voice tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, indicated that there is a 96 percent probability that the audio track in the video was generated or modified using A.I.

Screenshot of the analysis from Hiya’s audio detection tool

Hive AI’s deepfake video detection tool pointed out markers in various frames featuring Sitharaman, indicating A.I. manipulation in the video track. Their audio detection tool highlighted A.I. manipulation in most of the audio track of the video.

Screenshot of the analysis from Hive AI’s deepfake video detection tool

For further analysis of the audio track, we also put it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The tool returned results indicating that it was very unlikely that the audio track used in the video was generated using their platform.

We reached out to ElevenLabs for a comment on the analysis. They told us that they were able to confirm that the audio is A.I.-generated. They added that they continue to identify and block attempts that involve the misuse of their tools and/or their use for generation of prohibited content.

To get expert analysis on the video, we escalated it to the Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL). They used a combination of 15 deepfake detection algorithms to analyse the video and 20 algorithms to analyse the audio track. Two human analysts trained to detect deepfakes also reviewed the video.

Of the 15 predictive models used to analyse the video, one gave a higher probability of the video being fake and the remaining 14 indicated a lower probability of the video being fake. For the audio analysis, 13 of the 20 models gave a higher probability of the audio being fake, while the remaining seven models gave a lower probability of the audio being fake.

The team’s observations on the speech and mouth movements of the subject as well as the noticeable oddities on her face corroborated our analysis. They also pointed to a seemingly unnatural “static” sound as background noise, which they noted may possibly have been added as an anti-detection measure. The team concluded that the video is likely fake and generated with artificial intelligence.

To get another expert to weigh in on the video, we escalated it to our partner GetRealLabs, co-founded by Dr. Hany Farid; the team specialises in digital forensics and A.I. detection.

The team pointed to the mismatch between the movement of the lips and the voice. They ran the video through one of their proprietary voice analysis models and the results indicated that the audio is fake. They added that the characteristics of the voice in the media suggest that it was synthetically generated.

After comparing the video with the original, they were able to confirm that the words and voice being attributed to Sitharaman were not hers. The team said that the original was used to make the lip-sync deepfake.

On the basis of our findings and analysis from experts, we can conclude that original footage featuring Sitharaman was manipulated with synthetic audio to peddle a dubious income-generating platform in a bid to scam the public.

(Written by Debraj Sarkar and Debopriya Bhattacharya, edited by Pamposh Raina.)

Kindly Note: The manipulated video/audio files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.

You can read below the fact-checks related to this piece published by our partners:

Fact Check: Nirmala Sitharaman endorses trading platform to earn easy money? No, viral video is digitally manipulated