The Deepfakes Analysis Unit (DAU) analysed a video that features Indian cricketer Virat Kohli apparently promoting a gaming platform. After putting the video through A.I. detection tools and getting our expert partners to weigh in, we were able to conclude that the video is a patchwork of a clip from an interview of Mr. Kohli and synthetic audio recorded over unrelated visuals of him.
The DAU spotted the 26-second video in English on Instagram during social media monitoring. The handle that posted the video has added the name of the supposed gaming platform to their profile information as well as their display picture. The account information suggests that it is an India-based account with more than 102,000 followers.
The video opens with a man in a studio-like setting asking Kohli about a life-changing moment. For a few seconds, Kohli is visible in the video frame answering the question; then his close-up is replaced by an animation showcasing the supposed platform’s user interface, an older image of Kohli posing next to an expensive car, and snippets of him posing with his wife. The voice recorded over these visuals, purported to be Kohli’s, endorses the platform for its profitable returns and even suggests that using it has been a life-changing experience.
An Instagram handle, different from the one used to post the video, appears as superimposed text in the video frame for most of its duration, along with captions in English. In a wide shot that shows the interviewer and Kohli in the same frame, the words “Royal Challengers Podcast” and an insignia are visible in the backdrop.
That insignia and the typeface for the “Royal Challengers” text resemble the official logo of the Indian Premier League (IPL) cricket franchise that goes by the same name. A website link for the supposed platform appears toward the end of the video.
There are no visual oddities in the video; however, the audio seems to carry two separate tracks. For the initial eight seconds of the video, the audio of the two speakers is clearly discernible and matches their lip movements.
As soon as the visuals transition from Kohli’s close-up to the disjointed visuals, an awkward pause can be heard, and the audio track that follows has a changed pitch and tone. That voice bears similarity to Kohli’s; however, when compared with his recorded interviews, the delivery sounds scripted and hastened, lacks natural pauses, and carries an accent unlike his.
A keyword search with “Royal Challengers Podcast” and a reverse image search using screenshots from the video led us to this podcast episode, published on Feb. 25, 2023, on the official YouTube channel of Royal Challengers Bengaluru. The initial eight-second clip from the video we reviewed seems to have been lifted from this video. The clothing, backdrop, body language, and exchange between the interviewer and Kohli in both videos are identical.
In the original video, a static logo of the cricket franchise is visible in the top right of the frame and an interactive logo can be seen at the bottom right; both have been cropped out of the manipulated video. The animations of the supposed platform’s interface, the captions, and the visuals of Kohli posing with his wife or the car are not part of the original video; he does not mention any betting platform in that video.
The Instagram handle mentioned in the superimposed text in the manipulated video carries a clip from the podcast featuring Kohli, a part of which is identical to the one seen in the manipulated video. This clip also includes the pictures of Kohli that we saw in the manipulated video, and the IPL franchise logos seem to have been edited out of it as well. It appears that this clip was used to create the manipulated video, and that the handle of this influencer, who has a million followers, was deliberately added to mislead people.
In August we debunked a fake video that was also manipulated using A.I.; that video, too, used a clip of Kohli’s lifted from the same podcast.
To discern the extent of A.I. manipulation in the video under review, we put it through A.I.-detection tools.
The voice detection tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating a high probability that an A.I.-generated audio track was used in the video.
Hive AI’s deepfake video detection tool found no traces of A.I. manipulation in the video. The company’s audio detection tool, however, did indicate A.I. tampering throughout the audio except for the last 10-second segment.
The deepfake detector of our partner TrueMedia indicated substantial evidence of manipulation in the video. The “A.I.-generated insights” offered by the tool provided additional contextual analysis, stating that the audio transcript reads like a promotional statement, characterised by overly polished language, rather than a genuine conversation.
The tool gave a 65 percent confidence score to “video facial analysis”, a subcategory which analyses the video frames for unusual patterns and discrepancies in facial features. The tool also found very little evidence for the “face manipulation detector” subcategory, which detects potential A.I. manipulation of faces in images and videos, as in the case of face swaps and face reenactment.
The tool gave a 94 percent confidence score to “audio authenticity detector”, a subcategory that analyses audio for evidence that it was created by an A.I. generator or through cloning. The tool also gave a 71 percent confidence score to “voice anti-spoofing analysis” and a 70 percent confidence score to “A.I.-generated audio detector”; both subcategories indicate that the audio was generated using A.I.
We reached out to ElevenLabs, a company specialising in voice A.I. research and deployment, for an expert analysis of the audio. They told us that they were not able to confirm that the audio was A.I.-generated. They added that they have been actively identifying and blocking attempts to generate prohibited content; however, they could not confirm whether this particular audio was generated on their platform.
We also ran the video through DeepFake-O-Meter, an open platform for deepfake image, video, and audio detection developed by the Media Forensics Lab (MDFL) at the University at Buffalo (UB). The platform offers a choice of classifiers through which a media file, in this case a video, can be run for analysis.
We chose six audio detectors, of which three gave strong indications of A.I. in the audio. The RawNet2 (2021) and AASIST (2021) detectors focus on detecting audio impersonations, voice clones, replay attacks, and other types of audio spoofs. The Linear Frequency Cepstral Coefficient (LFCC)-Light Convolutional Neural Network (LCNN) model classifies genuine versus synthetic speech to detect audio deepfakes.
To get another expert to weigh in on the audio featured in the video, we escalated it to our partner Validia, a San Francisco-based deepfake security startup.
They told us that the video starts off with both real audio and video from what appears to be a podcast, and quickly transitions into what is evidently a deepfake audio sample. They based this conclusion on a few observations. First, they pointed to the robotic and structured quality of the audio, which they noted is indicative of computer-generated audio. They added that with audio deepfakes, video of the individual targeted through the audio is usually not shown, to create the perception that the audio is real, which is evident here.
They further noted that when they compared the audio before and after the switch, the voice samples did not line up as belonging to the same individual. They said that the audio is likely a very poor attempt at mimicking the voice of the speaker in the video, referring to Kohli.
To get an expert analysis of the visual and audio elements, we escalated the video to the Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL). They used a combination of 22 deepfake detection algorithms and analyses from two human analysts trained to detect deepfakes to review the video.
Of the 22 predictive models used to analyse the video, 11 gave a higher probability of the video being fake, while the remaining 11 indicated a lower probability.
The team noted in their report that just before the subject begins promoting the betting platform, the video cuts out and his mouth shape does not match the words that follow. This may indicate that the promotional audio is inauthentic.
They added that the subject’s speech becomes significantly more monotone after the video cuts out, and that his sentences are far shorter and more simplistic when he promotes the betting platform.
The oddities in the audio track and the subject’s speech pattern corroborate our own observations of the video. In the overall verdict, the GODDS team concluded that the video is likely to be fake and generated with artificial intelligence.
On the basis of our findings and analysis from experts, we can conclude that the video of Kohli promoting a gaming platform is fake. Original footage from an interview of his was spliced with synthetic audio recorded over unrelated visuals to peddle a scam.
(Written by Debraj Sarkar and edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.
You can read below the fact-checks related to this piece published by our partners:
Video of Virat Kohli edited using deepfake audio to promote illegal betting site