The Deepfakes Analysis Unit (DAU) analysed a video that shows Sudha Murty, founder of the Infosys Foundation — philanthropic arm of the tech giant Infosys — apparently endorsing a financial investment platform. After putting the video through A.I.-detection tools and getting our expert partners to weigh in, we were able to conclude that the original video featuring Mrs. Murty was manipulated with an A.I.-generated audio track.
A Facebook link to the one-minute-and-15-second video in English was sent to the DAU tipline for assessment. It was published on March 9, 2025, and has since garnered more than 56,000 views. “Top Highlights” is the name displayed on the Facebook account, and its display picture seems to have been taken from an Indian female journalist’s official X account. In their profile details they identify themselves as a “printing service” in Karachi, capital of Pakistan’s Sindh province.
The video appears to be an interview with a microphone prominently visible in front of Murty, also a member of the Indian Parliament, and a plain green wall in the background. A peculiar feature of the microphone is a red square with a flipped 18 emblazoned on it; “18” is associated with the Network18 group, an Indian media conglomerate.
She appears to be looking at someone or something else instead of focussing on the camera. Bold text graphics at the top of the video frame, visible for a short duration, mention a deadline of “March 27”; and graphics at the bottom of the frame, visible throughout, specify that an investment of “21,000” rupees in “March” will yield returns worth “17 lakhs” (1.7 million rupees) in “28 days”.
A female voice recorded over her video track accuses the Indian government of withholding information from Indian citizens regarding the supposed lucrative platform being referred to as “Quantum AI”. Touting the platform as highly profitable, the voice claims that it is “completely legal”, “fully licensed” and guarantees earnings of “1,500,000 rupees” per month with an investment of “21,000 rupees”. However, the text graphics mention higher returns for the same investment amount.
The same voice also claims that the “main goal” of the supposed platform is to “improve the lives of Indian citizens”, and that it has become a “true lifeline” which is “paving the way for financial freedom”. The video ends with a sense of urgency, advising viewers that there is a link to some “article” below the video which they can click to “register”. However, no such registration link is visible anywhere.
The video framing alternates between a medium shot and a medium close-up with a few jump cuts. Murty’s lip movements appear to be in sync with the accompanying audio for the most part. However, in some frames her lips seem to move unnaturally fast and continue to move even when there are momentary pauses in the audio.
Her upper set of teeth is barely visible. The lower set of teeth appears as patches of white and brown across different frames. Her teeth also seem to change shape as her mouth moves; in some frames the arch of the lower set of teeth appears to be defined by the shape of her lower lip. There is an unusual shine on her face, except for the area below her nose all the way down to her chin, which is extremely blurry. Her lower lip seems to blend into her chin.
The voice attributed to Murty sounds peculiar; it attempts to imitate her natural accent and lisp but fails on both fronts when compared with her recorded interviews. A hissing sound which gradually fades away, and abrupt pauses, can be heard in the audio track. The overall delivery sounds very scripted and hastened; it lacks the pitch, intonation, and characteristic pauses that are typical of her style of delivery.
This is yet another doctored video where an initial investment amount of “21,000 rupees” is being recommended. The DAU has debunked several financial scam videos, such as this, this, and this, promoting dubious investment platforms where the same number was used.
Another similarity between this video and some other scam videos, including health scam videos, that we have previously debunked is that they fabricate conspiracy theories to create a sense of urgency. This is yet another example of a video which claimed that the government deliberately kept information about a supposed platform from Indian citizens. In some other videos that we have analysed, the messaging emphasised that the opportunity to register for a supposed financial platform was available to a select few, and they would regret it if they didn’t take action.
We undertook a reverse image search using screenshots from the video under review in this report. Murty’s clips were traced to this video, published on Jan. 19, 2023, from the official YouTube channel of CNBC-TV18, an English business news channel that is part of the Network18 group.
The clothes Murty is wearing in the manipulated video and the one we traced are identical. The background colour varies slightly, but her body seems to be angled in the opposite direction in the two videos.
The microphone held in front of her in the video from YouTube has “18” emblazoned on it and the CNBC-TV18 logo on the holder. The flipped 18 that we pointed to earlier established that the clips used in the doctored video are mirrored versions of the clips from the YouTube video. This also explains why she seems to be angled toward the right in the manipulated video and not left as is the case in the original video.
There is no mention of any financial platform or any conspiracy in the original video, which is also in English. That video carries text graphics at the bottom of the frame highlighting Murty’s responses to questions posed by a reporter and the CNBC-TV18 logo in the top right corner. The frames used in the manipulated version are more zoomed-in and do not carry the official logo.
To discern the extent of A.I. manipulation in the video under review, we put it through A.I. detection tools.
The voice tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, indicated that there is a 35 percent probability of the audio track in the video having been generated or modified using A.I.

Hive AI’s deepfake video detection tool was unable to find any evidence of A.I. manipulation in the video track. However, its audio detection tool highlighted A.I. manipulation in most of the audio track, except for a 10-second segment.

We also ran the audio track from the video through Deepfake-O-Meter, an open platform developed by the Media Forensics Lab (MDFL) at the University at Buffalo (UB) for detection of A.I.-generated images, video, and audio. The tool provides a selection of classifiers that can be used to analyse media files.
We chose six audio detectors, out of which five gave strong indicators of A.I. in the audio. AASIST (2021) and RawNet2 (2021) are designed to detect audio impersonations, voice clones, replay attacks, and other forms of audio spoofs. The Linear Frequency Cepstral Coefficient (LFCC) - Light Convolutional Neural Network (LCNN) model helps distinguish between real and synthetic speech to identify audio deepfakes.
RawNet3 (2023) allows for nuanced detection of synthetic audio while RawNet2-Vocoder (2023) is useful in identifying synthesised speech. Whisper (2023) is designed to analyse synthetic human voices.

For further analysis of the audio track, we also ran it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The tool’s results indicated that the audio track in the video was “likely” generated using their platform.
We reached out to ElevenLabs for a comment on the analysis. They told us that they were able to confirm that the audio is A.I.-generated. They added that they have taken swift action to hold accountable the individuals who misused their tools.
For expert analysis, we escalated the video to our detection partner ConTrailsAI, a Bangalore-based startup with its own A.I. tools for detection of audio and video spoofs. The team ran the video through audio and video detection models; the results indicated A.I. manipulation in both the video and audio tracks.
In their report, they noted that the lip movements of the subject appeared unnatural. They mentioned that her teeth look inconsistent as they change slightly throughout the video. They observed that her entire mouth region has a slightly lower resolution and looks blurrier than the rest of the face.
They added that the voice in the video sounds relatively monotonous, and the audio track has distinct silences in between which are indicators of possible manipulation.
To get another expert to weigh in on the video, we reached out to our partners at RIT’s DeFake project. Akib Shahriyar from the project stated that there is a noticeable desynchronisation between the audio and Murty’s mouth movements. Mr. Shahriyar noted that in several instances her mouth remains unnaturally open or closed, suggesting a potential mismatch between the audio and the mouth frames.
He stated that the generated mouth movements appear colour-corrected to hide the manipulation. Despite that, he pointed to a vertical tear artefact visible along the right nostril-mouth region. This, he said, was a result of the mouth-tracking algorithm struggling and the deepfake generation technique failing to properly blend regenerated pixels in the specific area.

He added that the video appears to be mirrored from its original source, which is evident from the reversed “18” logo on the microphone. His observation corroborated our analysis.
On the basis of our findings and expert analyses, we can conclude that the original footage featuring Murty was manipulated with synthetic audio to fabricate the video. This appears to be yet another attempt to link a public figure with a dubious financial platform to swindle people.
(Written by Debraj Sarkar, Rahul Adhikari, and Debopriya Bhattacharya, edited by Pamposh Raina.)
Kindly Note: The manipulated video/audio files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.