The Deepfakes Analysis Unit (DAU) analysed a video that shows Dr. Subrahmanyam Jaishankar, India’s External Affairs Minister, apparently endorsing a financial investment platform. After putting the video through A.I.-detection tools and getting our expert partners to weigh in, we were able to conclude that the original video featuring Dr. Jaishankar was manipulated with an A.I.-generated audio track.
The two-minute-and-21-second video in English was discovered by the DAU on Facebook through social media monitoring. It was published on March 23, 2025 from an account named “True Leadership”, whose display picture featured random stock charts. In their profile details they identified themselves as an “event planner” based in Viana, a city in Angola. The video is no longer available; however, it had garnered more than 400,000 views as of yesterday. We have no evidence to suggest whether this suspicious video originated from this account or another.
In the video, Jaishankar seems to be addressing an audience, as two microphones can be seen placed in front of him; the lighting is dim and a dark backdrop is visible. He does not appear to be looking into the camera and seems to be glancing downwards from time to time, as if reading from something. Bold text graphics in English at the bottom of the video frame, visible throughout, seem to convey that an investment of “21,000 rupees” will yield returns worth “60,000 rupees every day”. The text graphics are poorly structured and lack proper syntax.
A male voice recorded over his video track claims that the platform has undergone “rigorous international testing” and “received official accreditation”. It makes it sound as if Jaishankar has tested the purported platform and made 1.5 million rupees in a month with an initial investment of 21,000 rupees.
Touting the supposed platform as risk-free, the same voice adds that every investment is apparently insured by some “National Bank of India”. It also claims that this get-rich-quick scheme uses “advanced artificial intelligence technologies” to trade in the “global market” and grows the capital of Indian citizens, promising to “change” their “financial reality”.
The voice goes on to suggest that people could find themselves “waking up tomorrow and seeing more money” in their account “without doing anything”. It asks viewers to click on some “registration link” below the video to get in touch with a supposed “project manager”. The video ends by creating a sense of urgency as it announces that the registration spots for applicants are limited and they can activate their account by investing 21,000 rupees.
It wasn’t exactly clear which link was being referred to; however, we noticed a “learn more” tab positioned right below the video in the Facebook post carrying it. For the purpose of this report, we clicked on that tab and landed on a web page which appeared to be a detailed article about investment opportunities in India in 2024. This page emulates the colour scheme and logo visible on the official website of Tickertape, a stock analysis and investment research platform. (We would like to caution our readers against clicking on suspicious links.)
The web address of the page is not the same as that of the genuine Tickertape website. However, clicking on the logo and hyperlinks on the page led us to content posted from the authentic website.
The voice attributed to Jaishankar bears similarity to his natural voice and even captures his mellow tone but the accent sounds different when compared with his recorded interviews and speeches. The overall delivery is hastened and scripted, missing the pitch and pauses characteristic of his style of speaking.
His lip movements appear to faintly align with the audio track. In some frames it seems as if he has an extra lip, which moves even when his upper and lower lip close. His upper set of teeth is barely visible, the lower set appears like a blurred off-white patch, and in a few instances the lips seem to blend in with the teeth. His lips also appear to change shape and reveal unnaturally elongated teeth in a few frames.
His chin seems to change shape throughout the video because of his head moving up and down. The video quality of the portion below the nostrils down to the chin is poor compared to the rest of his face.
This is yet another doctored video where an initial investment amount of “21,000 rupees” is being recommended. The DAU has debunked several financial scam videos, such as this, this, and this, promoting dubious investment platforms where the same number was used.
Another similarity between this video and previous scam videos is the messaging, which tries to create a sense of scarcity around the supposed financial opportunity. A peculiarity of this video, though, is the promise of protecting every investment through a supposed insurance.
We undertook a reverse image search using screenshots from the video being analysed through this report. Jaishankar’s clips were traced to this video published on the official website of Asia Society, a nonprofit focused on spreading awareness about Asia, on Sept. 24, 2024. The same video was also published from their YouTube channel.
The clothes and backdrop of Jaishankar in the manipulated video and the one we traced are identical. In the Asia Society video he has been filmed in different positions, ranging from him standing to a seated interaction with an interviewer. A few clips of him standing while addressing a gathering have been used to create the doctored video.
There is no mention of any financial platform in the original video, which is also in English. It does not carry any text graphics; the Asia Society logo visible in the top right corner of the video frame has been cropped out in the manipulated video, which is of a lower quality compared to the original.
To discern the extent of A.I. manipulation in the video under review, we put it through A.I. detection tools.
The voice tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, indicated that there is a 99 percent probability of the audio track in the video being A.I.-generated.
Hive AI’s deepfake video detection tool pointed out markers in various frames, indicating A.I. manipulation in the video track. Their audio detection tool highlighted A.I. manipulation in almost the entire audio track of the video.

We also ran the audio track through Deepfake-O-Meter, an open platform developed by the Media Forensics Lab (MDFL) at the University at Buffalo (UB) for deepfake image, video, and audio detection. The tool provides a selection of classifiers that can be used to analyse media files.
We chose six audio detectors, out of which two gave strong indicators of A.I. manipulation in the audio. AASIST (2021) and RawNet2 (2021) are designed to detect audio impersonations, voice clones, replay attacks, and other forms of audio spoofing. The Linear Frequency Cepstral Coefficient (LFCC) - Light Convolutional Neural Network (LCNN) model helps distinguish between real and synthetic speech to identify deepfake audio.
RawNet3 (2023) allows for nuanced detection of synthetic audio while RawNet2-Vocoder (2023) is useful in identifying synthesised speech. Whisper (2023) is designed to analyse synthetic human voices.
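To illustrate the kind of signal representation the LFCC-based detector mentioned above works on: LFCCs are computed like MFCCs but with a linearly spaced (rather than mel-scaled) triangular filterbank, which retains more high-frequency detail where vocoder artefacts often appear. Below is a rough, minimal sketch of LFCC extraction in Python using only NumPy and SciPy; it is an illustration of the general technique, not the actual feature pipeline used by Deepfake-O-Meter, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def lfcc(signal, sr=16000, n_filters=20, n_coeffs=13,
         frame_len=400, hop=160, n_fft=512):
    """Linear Frequency Cepstral Coefficients: like MFCCs, but the
    triangular filterbank is spaced linearly over 0..sr/2 instead of
    on the mel scale. Parameter values here are illustrative."""
    # Slice the signal into overlapping, Hamming-windowed frames
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Linearly spaced triangular filterbank
    edges = np.linspace(0, sr / 2, n_filters + 2)
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then DCT to decorrelate into cepstra
    energies = np.log(spec @ fbank.T + 1e-10)
    return dct(energies, type=2, norm='ortho', axis=1)[:, :n_coeffs]

# Example: one second of random noise standing in for audio
audio = np.random.randn(16000)
feats = lfcc(audio)
print(feats.shape)  # one 13-dimensional feature vector per frame
```

In the LFCC-LCNN detector, frame-level feature matrices like this are fed to a light convolutional network trained to separate real from synthetic speech.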

For further analysis of the audio track, we also put it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The tool returned results indicating that it was “very likely” that the audio track used in the video was generated using their platform.
We reached out to ElevenLabs for a comment on the analysis. They told us that they were able to confirm that the audio is A.I.-generated. They added that they have taken swift action to hold accountable the individuals who misused their tools.
For expert analysis, we escalated the video to our detection partner ConTrailsAI, a Bangalore-based startup with its own A.I. tools for detection of audio and video spoofs. The team ran the video through audio and video detection models; the results indicated that both the video and the audio track had been manipulated or generated using A.I.
They noted that low-resolution frames were used in the video. However, they were still able to clearly detect the use of lip-sync techniques in the video. They added that an audio clone of Jaishankar was created and used to produce the fake audio.


To get another expert to weigh in on the video, we escalated it to our partner GetRealLabs, a company co-founded by Dr. Hany Farid that specialises in digital forensics and A.I. detection.
The team stated that there is evidence suggesting that the video contains synthetic, A.I.-generated material. They used multiple analysis techniques, such as measuring the variance between the audio and lip movements, which indicated that the audio track was synthesised and the mouth movements are not natural.
The team’s observations about the similarity between Jaishankar’s real voice and the one in the doctored video, and about the difference in accent, echoed ours. They added that the suspect voice sounds flat and robotic throughout.
They noted that the lack of filler words in the subject’s monologue in the suspect video is also unusual, suggesting that the audio was created using text-to-speech software rather than through normal speech patterns. They pointed out that the subject’s speech in the original video contains many natural filler words.
On the basis of our findings and analysis from experts, we can conclude that the original footage featuring Jaishankar was manipulated using synthetic audio to fabricate the video. This appears to be yet another attempt to link a prominent public figure to a dubious financial platform in order to scam people.
(Written by Debraj Sarkar, Rahul Adhikari, and Debopriya Bhattacharya, edited by Pamposh Raina.)
Kindly Note: The manipulated video/audio files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.
You can read below the fact-checks related to this piece published by our partners:
Deepfake Video Of S Jaishankar Promoting Dubious Investment Platform Goes Viral