The Deepfakes Analysis Unit (DAU) examined a video in which Bollywood actor Ranveer Singh appears to criticise Prime Minister Narendra Modi over unemployment and inflation. Toward the end of the video, after his apparent monologue ends, another male voice can be heard seeking votes for the Indian National Congress (INC) party. After running the video through A.I. detection tools and escalating it to two of our partners in California, one with expertise in digital forensics and the other with an A.I. detection tool, we concluded that parts of the video were manipulated using fake audio.
The 41-second Hindi video clip was sent to the DAU tipline for assessment. We found inconsistencies in Mr. Singh’s lip sync between the eight-second and 12-second marks, between the 18-second and 19-second marks, and between the 28-second and 34-second marks.
In the video, we noticed a hand-held microphone bearing ANI’s logo, which helped us track down the original video, uploaded to the news agency’s official YouTube channel on April 14, 2024. In the original video, the actor’s sound bite can be heard clearly between the time codes 2:17 and 2:53. ANI had also posted the original clip featuring the actor on the microblogging site X on the same date.
The backdrop, clothing, and body language of the actor are identical in the original and the manipulated video, but there are inconsistencies in pitch, delivery, and sound levels between the two. The speech in both videos is in Hindi, and many of the words from the original video can be heard in the fabricated version.
Given the repetition of words across the two videos, we wanted to investigate the nature of the manipulation further. Specifically, we wanted to know whether the manipulated audio combined the original audio with synthetic audio, or whether the actor’s voice had been generated entirely using voice cloning tools.
We used several A.I. detection tools for our analyses. TrueMedia’s deepfake detector returned an overall assessment of “little evidence of AI manipulation”. It found no evidence that the actor’s face had been recreated using generative A.I.
We ran the video through the audio detection tool of AI or Not, which gave a five percent probability that the audio contained A.I.-generated elements. The free version of AI or Not that we accessed only detects A.I. use in images and audio.
We also put the video through the voice detection tool of Loccus.ai; the results indicated a 76.54 percent probability that the audio is real, suggesting that only a small proportion of the speech in the video is synthetic.
We also approached IdentifAI, a San Francisco-based deepfake security startup, whose team analysed the authenticity of the audio in the clip using their audio detection software.
They extracted the audio samples from the original and manipulated videos and removed the background noise. Then, using the original audio, they cloned the Bollywood actor’s voice with the ElevenLabs Voice Cloning software, and compared the clone with the manipulated audio under analysis.
Using a heat-map analysis, the team compared the original voice with the manipulated audio, and also compared the original voice with the voice clone they had created. They found similarities in both comparisons, which indicates the use of some type of audio generation technology.
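The kind of heat-map comparison described above can be sketched in a few lines. The snippet below is a minimal illustration, not IdentifAI’s actual software: it assumes the comparison is done on short-time magnitude spectra scored with cosine similarity, and it uses toy sine-wave signals in place of real voice recordings.

```python
import numpy as np

def spectral_frames(signal, frame_len=512, hop=256):
    """Split a mono signal into overlapping frames and return magnitude spectra."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])
    window = np.hanning(frame_len)
    return np.abs(np.fft.rfft(frames * window, axis=1))

def similarity_heatmap(spec_a, spec_b):
    """Cosine similarity between every pair of frames from two recordings."""
    a = spec_a / (np.linalg.norm(spec_a, axis=1, keepdims=True) + 1e-9)
    b = spec_b / (np.linalg.norm(spec_b, axis=1, keepdims=True) + 1e-9)
    return a @ b.T  # shape: (frames_a, frames_b); values near 1 mean similar frames

# Toy stand-ins for two voice recordings (same tone plus independent noise).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
voice_a = np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)
voice_b = np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)

heatmap = similarity_heatmap(spectral_frames(voice_a), spectral_frames(voice_b))
print(heatmap.shape)
```

A bright (high-similarity) region of such a heat map marks stretches where the two voices match closely; comparing the manipulated audio against both the original voice and a deliberately generated clone, as the team did, helps distinguish spliced original speech from synthetic speech.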
To get another expert view on the video under investigation, we escalated it to a lab run by the team of Dr. Hany Farid, a professor of computer science at the University of California, Berkeley, who specialises in digital forensics and A.I. detection. The team noted that the video stream appeared unaltered, that some parts of the video and audio streams were desynchronised, and that portions of the audio track had been changed.
Dr. Farid’s team said that words had been selectively inserted, changing the narrative, and that the altered portions show visibly desynchronised lips. Because only part of the audio was altered, the segment roughly between the 21-second and 29-second marks is the same in the original and the manipulated video, they explained. It does not appear that all of the actor’s words were altered, the team added.
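One simple way to locate such unchanged segments, assuming the two audio tracks are time-aligned and sampled identically, is to compare them window by window. The sketch below is a hypothetical illustration rather than the lab’s actual method: it flags one-second windows where the waveforms differ, using toy signals in which only seconds 2–3 are replaced.

```python
import numpy as np

def changed_segments(original, manipulated, sr=16000, win_s=1.0, tol=1e-3):
    """Flag windows where two aligned audio tracks differ beyond a tolerance."""
    win = int(sr * win_s)
    n = min(len(original), len(manipulated)) // win
    flags = []
    for i in range(n):
        a = original[i * win : (i + 1) * win]
        b = manipulated[i * win : (i + 1) * win]
        rms_diff = np.sqrt(np.mean((a - b) ** 2))  # 0 where the tracks are identical
        flags.append((i, bool(rms_diff > tol)))
    return flags

# Toy example: a five-second tone with seconds 2-3 swapped for a different tone.
sr = 16000
t = np.linspace(0, 5, 5 * sr, endpoint=False)
orig = np.sin(2 * np.pi * 220 * t)
manip = orig.copy()
manip[2 * sr : 3 * sr] = np.sin(2 * np.pi * 330 * t[2 * sr : 3 * sr])

flags = changed_segments(orig, manip, sr)
print(flags)
```

Windows that pass the tolerance check correspond to stretches, like the 21-to-29-second segment the team identified, where the manipulated track still carries the original audio.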
On the basis of our findings and analyses, we can conclude that the Bollywood actor’s audio was manipulated by inserting synthetic audio into parts of the original video.
(Written by Areeba Falak and edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.
You can read below the fact-checks related to this piece published by our partners:
Ranveer Singh Endorses Congress, Mocks PM Narendra Modi? No, Viral Video Is Deepfake
Actor Ranveer Singh's video criticising PM Modi is manipulated
Fact Check: Video of Ranveer Singh speaking critically about PM Modi is edited
Video Of Ranveer Singh Criticising PM Modi Is A Deepfake AI Voice Clone
Altered Video of Ranveer Singh Goes Viral as Him Campaigning For Congress
Fact Check: Ranveer Singh did not criticise PM Modi, this video is fake