The Deepfakes Analysis Unit (DAU) analysed a video that apparently shows veteran Bollywood actor Amitabh Bachchan with Rajat Sharma, India TV news anchor and chairperson, discussing a supposed miraculous cure for joint pain. After putting the video through A.I. detection tools and getting our expert partners to weigh in, we concluded that original footage featuring the two men was manipulated with synthetic audio to fabricate the video.
The 10-minute video in Hindi, embedded in a Facebook post, was escalated to the DAU by a fact-checking partner for verification. The video opens with a split screen showing Mr. Sharma in one frame and Mr. Bachchan in another, in what appears to be a television interview. A male voice recorded over Sharma’s visuals announces a breakthrough cure for joint pain, claiming that it is providing instant relief to many Indians, including Sharma. That voice then invites Bachchan to narrate his experience of the supposed treatment.
The male voice recorded over the visuals of the veteran actor claims that he was cured by the miraculous remedy in less than a day and that none of the medicines he had tried over the years had worked. That voice strongly endorses the remedy as an affordable and effective treatment for Indians suffering from joint pain, and suggests that it is the brainchild of someone named “Deepak Chopra”. The video ends with the voice attributed to Sharma encouraging viewers to click on some link (not visible in the video) to connect with “Deepak Chopra” and avail themselves of the treatment.
A logo resembling that of India TV, a Hindi television channel, is visible in the bottom right corner of the video frame. The names of both men are highlighted through Hindi graphics in the lower third of the video screen; below that, a ticker asks viewers to subscribe to “AajTak”, a Hindi television channel unrelated to the one mentioned earlier.
Captions in Hindi, offering a transcription of the supposed interview, appear at the bottom of the video frame. Hindi text graphics endorsing the miraculous remedy border the top of the video frame.
Of the video’s 10-minute duration, the recorded footage runs for only about two minutes and five seconds. The rest of the video carries a static image of what looks like a sketch of a doctor sitting with a patient, the backdrop filled with Hindu deities and symbols considered sacred in Hinduism. Text in Hindi superimposed on that sketch urges people to click on a “more details” button below the video, though no such button was visible to us.
The overall video quality is poor. Visuals of both Sharma and Bachchan are out of sync with their respective audio tracks. Jump cuts are evident when the audio transitions from one voice to the other. Both audio tracks are devoid of background noise.
The voice and accent attributed to Sharma sound somewhat like his when compared with his recorded videos available online, but the delivery lacks natural intonation and has some unnatural pauses. The audio track over Bachchan’s visuals bears some similarity to his characteristic baritone but sounds hurried, scripted, and monotonic, overall unlike his usual delivery style of significant pauses and crisp diction, as heard in his recorded interviews.
A reverse image search using screenshots from the video under review led us to this video, published on the official YouTube channel of India TV on April 12, 2014. The backdrop, clothing, and body language of Sharma and Bachchan in both videos are identical. In the original video they converse in Hindi but do not discuss any kind of cure. It appears that separate clips from the original video, roughly 13 minutes of which feature the two men, have been lifted to create the manipulated video.
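Reverse image searches of this kind typically rest on perceptual hashing: each frame is shrunk to a tiny grayscale grid and reduced to a short bit string that survives recompression and resizing, so near-identical frames can be matched at scale. The sketch below is purely illustrative, not the tooling we used; it implements a difference hash (dHash) in plain Python, assuming frames have already been decoded and downscaled to 9×8 grids of 0–255 brightness values.

```python
# Illustrative difference-hash (dHash) for matching near-identical frames.
# Assumes frames are already decoded and downscaled to 9x8 grayscale grids
# (lists of 8 rows of 9 brightness values); real pipelines use an image
# library such as Pillow for the decoding and resizing step.

def dhash(grid):
    """Pack a 64-bit hash: one bit per horizontal brightness gradient."""
    bits = 0
    for row in grid:                           # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes; small means 'same image'."""
    return bin(a ^ b).count("1")

# A slightly brightened copy keeps the same gradients, so it hashes
# identically and would match the original frame in a search index.
frame = [[x * 30 for x in range(9)] for _ in range(8)]
brighter = [[x * 30 + 5 for x in range(9)] for _ in range(8)]
print(hamming(dhash(frame), dhash(brighter)))  # 0
```

Because the doctored video’s frames are zoomed in and recompressed, a real search would accept small Hamming distances rather than demanding an exact hash match.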
The frames in the doctored video are more zoomed in. The original video does not carry the India TV logo, captions, or ticker, and its on-screen text graphics differ from those in the manipulated video.
To discern the extent of A.I.-manipulation in the video under review, we put it through A.I.-detection tools.
The voice detection tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating an 80 percent probability that an A.I.-generated audio track was used in the video.
Hive AI’s deepfake video detection tool indicated that the video was manipulated using A.I. It pointed out one marker — on Bachchan’s face — in the entire runtime of the video. Their audio tool detected the use of A.I. in the audio track.
The deepfake detector of our partner TrueMedia suggested substantial evidence of manipulation in the video. The “A.I.-generated insights” offered by the tool provide additional context, noting that the video transcript reads like a promotional piece with exaggerated claims resembling an advertisement.
In a breakdown of the overall analysis, their tool gave a 99 percent confidence score to the “face manipulation detector” subcategory, which detects the likelihood of A.I. manipulation of faces as in the case of face swaps and face reenactment. It also gave a 99 percent confidence score to “video facial analysis”, a subcategory that analyses video frames for unusual patterns and discrepancies in facial features.
The tool gave a 100 percent confidence score to the “A.I.-generated audio detector” subcategory, which detects A.I.-generated audio. It also gave a 96 percent confidence score to “voice anti-spoofing analysis” and an 81 percent confidence score to the “audio authenticity detector”, both subcategories that analyse the audio for evidence of its having been created by an A.I. audio generator or cloning.
We reached out to ElevenLabs, a company specialising in voice A.I. research and deployment, for an expert analysis of the audio. They told us they were able to confirm that the audio is synthetic, implying it was A.I.-generated.
To get another expert to weigh in on the manipulations in the video, we reached out to our partners at RIT’s DeFake Project. Kelly Wu from the project stated that despite the low quality of the video, she found some unnatural mouth movements in Bachchan’s visuals. She pointed out that his mouth movements do not sync with the audio, and that at one point his mouth appears to move even though there is no audio. She noted that one clip repeats in the video, each time with a different audio track.
Saniat Sohrawardi from the project stated that there is a good chance the video was lip-synced, given traces of the throat area inflating. He added that it resembles the breathing of a toad, a visual artefact he calls “toading”.
Akib Shahriyar from the project noted that transitions at different points break the continuity of the video; he suggested this could be due to different clips being stitched together. He also noticed that when one speaker talks, the other goes unnaturally still for the most part. He added that this might stem from a limitation of the technique used by the video’s creators, which perhaps allowed lip-sync and face-detection to work on only one person at a time rather than both people in the video frame.
To get another expert analysis of the video, we escalated it to the Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL). They used a combination of 22 deepfake detection algorithms and analyses from two human analysts trained to detect deepfakes to review the video.
Of the 22 predictive models used to analyse the video, eight models gave a higher probability of the video being fake, while the remaining 14 models indicated a lower probability of the video being fake.
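The split verdict above (eight models leaning fake, fourteen leaning real) is the kind of result an ensemble of detectors produces: each model emits a probability that the clip is fake, and a threshold turns each probability into a vote. The sketch below is a hypothetical illustration with invented scores, not GODDS’s actual models or outputs.

```python
# Hypothetical tally of an ensemble of deepfake detectors: each model
# outputs a probability that the clip is fake, and a fixed threshold
# converts probabilities into votes. The scores below are invented
# for illustration; they are not GODDS's actual outputs.

def tally(scores, threshold=0.5):
    """Return (models leaning fake, models leaning real) for one clip."""
    fake = sum(1 for s in scores if s > threshold)
    return fake, len(scores) - fake

# 22 invented per-model probabilities: 8 above the threshold, 14 below.
scores = [0.9, 0.8, 0.7, 0.85, 0.6, 0.75, 0.95, 0.65] + [0.3] * 14
print(tally(scores))  # (8, 14)
```

As the GODDS verdict shows, a raw majority is not decisive on its own: despite fourteen models leaning real, the human analysts’ frame-level observations tipped the overall conclusion to fake.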
Referring to Bachchan’s visuals, the team noted in their report that the subject’s mouth and chin seem to move independently of his face, creating an unnatural appearance. They added that his speech and mouth movements appear to be frequently misaligned; they also pointed to instances where audio can be heard but there is no corresponding mouth movement.
The team also noticed a flash of an image recurring at regular intervals through the video’s runtime. They stated that this could be a sign of manipulation, since the inserted image flashes every 30 seconds and is unrelated to the supposed newscast depicted in the video.
The video is highly blurry, a quality the team noted is sometimes used to conceal manipulations to the media. They added that the video is so blurry that at certain points facial features, such as the eyes, seem to disappear. In their overall verdict, the GODDS team concluded that the video is likely fake and generated with artificial intelligence.
On the basis of our findings and analyses from experts, we can conclude that the video of Sharma and Bachchan promoting a cure is fake. Original footage featuring them was manipulated with synthetic audio to peddle a scam.
(Written by Debopriya Bhattacharya and Debraj Sarkar, edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.
You can read below the fact-checks related to this piece published by our partners:
Fact Check: Viral Clip Of Amitabh Bachchan Talking About Joint Pain Is A Deepfake
The video of Amitabh Bachchan promoting a joint pain medicine is fake
Fact Check: This video of Rajat Sharma and Amitabh Bachchan promoting a joint pain medicine is a deepfake