Stalin’s Voice Clone or Impression Used for Fake Narrative About Karunanidhi

August 16, 2024
August 5, 2024
Screengrabs of the video analysed by the DAU

The Deepfakes Analysis Unit (DAU) analysed an audio clip purported to be a statement made by M.K. Stalin, Chief Minister of Tamil Nadu. After putting the audio through A.I. detection tools and escalating it to our detection partners, we were able to establish that it was not Mr. Stalin’s voice but a voice clone or a recorded impression of his.

The 28-second audio embedded in an X link was shared by a fact-checking partner with the DAU for verification. The supposed statement by Stalin is in Tamil, and the audio plays over a static photo of him with a microphone positioned close to his mouth. As the audio plays, a red circular wave graphic superimposed on the microphone flashes along.

The tweet with the audio was posted on July 20, 2024 and has since gathered more than 56,000 views on the microblogging site. It carries a caption in Tamil which translates to: this is Stalin’s audio, DMK’s audio politics. The DMK reference is, of course, to Dravida Munnetra Kazhagam, the political party led by him.

The voice in the audio uses derogatory words for M. Karunanidhi, father of Stalin and deceased patriarch of the DMK, and accuses other members of the same family of wrongdoing. That voice bears similarity to Stalin’s; however, the delivery is flat, without inflection. It sounds heavily scripted and is unlike a public speech, which the accompanying photo tries to suggest it is. There is a distinct lack of background noise throughout the audio.

We put the audio through A.I. detection tools to check if it had any A.I. elements in it.

The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results which indicated that there was a 0.58 percent probability of the audio being real, suggesting a high likelihood of it being synthetic speech.

Screenshot of the analysis from Loccus.ai’s audio detection tool

Hive AI’s audio detection tool gave strong indicators of A.I. use in the speech.

We also ran the audio through TrueMedia’s deepfake detector which overall categorised the audio as having substantial evidence of manipulation. The tool gave a 100 percent confidence score to the subcategory of “AI-generated audio detection”, and an 88 percent confidence score to “audio analysis”, both indicating a high probability of the use of A.I. in the audio. 

Screenshot of the overall analysis from TrueMedia’s deepfake detection tool
Screenshot of the analysis from TrueMedia’s deepfake detection tool

The audio was also put through Itisaar, a deepfake detection service, which is a collaboration between IIT Jodhpur and DigitID, a tech startup that has partnered with the DAU. Their analysis gave a high confidence score to the audio being fake.

We escalated the audio to ElevenLabs, a company specialising in voice A.I. research and deployment. They told us that they were not able to confirm that the audio was A.I.-generated. They added that they have been actively identifying and blocking attempts to generate prohibited content; however, they could not confirm whether this particular audio was generated on their platform.

To get another expert to weigh in on the audio, we escalated it to our partner IdentifAI, a San Francisco-based deepfake security startup. They used their audio detection software to check the authenticity of the audio.

First, they took two real voice samples of Stalin’s to generate an audio profile of his. Then they used a heat-map analysis to compare that profile with the audio that the DAU had shared with them.

Screenshot of the heat-map analysis from IdentifAI

The image on the left displays a comparison between his real voice and the audio profile generated by our partner. The image on the right represents the comparison of the generated audio profile with the audio escalated by the DAU. The latter shows visibly more patterns of dissimilarity than similarity.

Based on the heat-map analysis and iterative testing, the team at IdentifAI was able to establish that the audio under review is not Stalin’s voice. They added that the audio is either a poorly constructed deepfake or another person’s voice.
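IdentifAI’s profiling method is proprietary, but the general idea behind such a heat-map comparison can be pictured with a minimal sketch: extract per-frame spectral features from two recordings, then compute pairwise cosine similarities between frames, so that matching voices light up the map and mismatched ones stay dim. The code below is a toy illustration using synthetic tones and simple FFT features, not real speaker embeddings or any tool’s actual pipeline.

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping frames and take
    log-magnitude FFT features for each frame."""
    frames = []
    window = np.hanning(frame_len)
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(spectrum))
    return np.array(frames)

def similarity_heatmap(feats_a, feats_b):
    """Pairwise cosine similarity between the frames of two
    recordings; the resulting matrix can be rendered as a heat map."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return a @ b.T

# Two toy "recordings": the same tone versus a different tone,
# standing in for a matching and a non-matching voice.
t = np.linspace(0, 1, 8000, endpoint=False)
voice_a = np.sin(2 * np.pi * 220 * t)
voice_b = np.sin(2 * np.pi * 440 * t)

same = similarity_heatmap(frame_features(voice_a), frame_features(voice_a))
diff = similarity_heatmap(frame_features(voice_a), frame_features(voice_b))

# A matching pair produces a brighter (higher-similarity) map overall.
print(same.mean() > diff.mean())
```

In a real system the per-frame FFT features would be replaced by learned speaker embeddings, but the comparison logic, bright regions for similarity and dark regions for dissimilarity, is the same intuition the screenshots above convey.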

On the basis of our findings and analyses from experts, we were able to establish that the words attributed to Stalin were not uttered by him. However, we were unable to confirm whether it was a voice clone of Stalin’s or a recorded impersonation.

(Written by Debopriya Bhattacharya and Debraj Sarkar, edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.

You can read below the fact-checks related to this piece published by our partners: 

கருணாநிதியை கேடுகெட்ட பிறவி என்று தமிழக முதலமைச்சர் ஸ்டாலின் கூறினாரா? (Tamil; translates to: did Tamil Nadu Chief Minister Stalin call Karunanidhi a wretched being?)
