Deepfake Video of Dead Separatist Prabhakaran Addresses Followers

September 16, 2024
Screengrabs of the video analysed by the DAU

The Deepfakes Analysis Unit (DAU) analysed a video featuring the likeness of Velupillai Prabhakaran, chief of the now disbanded separatist outfit Liberation Tigers of Tamil Eelam (LTTE), who, according to media reports, died in 2009. After putting the video through A.I. detection tools and escalating it to our expert partners, we were able to establish that the video is a deepfake.

The two-minute-and-six-second video in Tamil, embedded in a tweet posted on the microblogging site X, was sent to the DAU by a fact-checking partner for analysis. It shows Mr. Prabhakaran’s likeness speaking to the camera. Creating the likeness seems like an attempt to project that Prabhakaran is alive and has aged.

A superimposed caption in Tamil at the lower end of the video frame reads: “The video of the national leader has been leaked”. The voice in the video can be heard making an emotional appeal to Tamils to unite and keep alive the struggle for an independent homeland for the Tamils, which received a setback in 2009. (At the DAU we refrain from using direct quotes from a suspicious audio or video; however, in this case, since it deals with a deceased person known to have led a separatist insurgency, we are making an exception to clarify the context for our readers.)

A logo resembling that of TikTok, a Chinese social media app banned in India, can be seen at the bottom right corner of the video along with the handle that perhaps posted the content. The tweet with the video has garnered about 4,700 views since it was posted on Aug. 31, 2024.

We attempted a reverse image search using screenshots from the video featuring Prabhakaran’s likeness but did not get any credible results. We also searched online for recorded speeches of Prabhakaran to check if the references made in the video could be traced to any of his public speeches or interviews. We could not find a match; however, we noticed that Prabhakaran’s voice in those recordings and the voice in the video under review sounded somewhat similar.
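
Reverse image search engines typically match compact visual fingerprints rather than raw pixels. As a rough illustration of that idea only, and not of the DAU’s actual workflow, the sketch below compares a hypothetical video screenshot against candidate images using perceptual hashing; all file names are placeholders.

```python
# A minimal sketch of image matching via perceptual hashing, the kind of
# fingerprint comparison that underpins reverse image search. This is an
# illustration of the general technique, not the DAU's actual workflow;
# all file names are hypothetical.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images.
    For a 64-bit pHash, distances of roughly 10 or less suggest a match."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash defines '-' as Hamming distance

# Compare a screenshot from the video against candidate images.
for candidate in ["candidate_1.jpg", "candidate_2.jpg"]:
    d = hash_distance("video_screenshot.png", candidate)
    verdict = "possible match" if d <= 10 else "no match"
    print(f"{candidate}: distance {d} ({verdict})")
```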

The differences lay in the delivery, which is much slower, with no change in tone or pitch, making the audio sound monotonous and scripted. A distinct lack of background noise is apparent throughout the audio. The lip movement is imperfect, and the teeth appear distorted at various points in the video.
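
Monotony of the kind described above can be measured. The sketch below is a minimal illustration, not the method used by any tool cited in this piece: it tracks pitch variation in a hypothetical audio file, since natural speech usually shows a much wider pitch spread than a flat synthetic read.

```python
# A minimal sketch, assuming the suspect audio track has been saved as a
# WAV file (hypothetical name), of how flat delivery could be quantified.
import librosa
import numpy as np

y, sr = librosa.load("suspect_audio.wav", sr=16000)

# Track the fundamental frequency (pitch) across the clip, restricted to a
# typical adult male speech range.
f0, voiced_flag, _ = librosa.pyin(y=y, fmin=65.0, fmax=300.0, sr=sr)
voiced_f0 = f0[voiced_flag]
voiced_f0 = voiced_f0[~np.isnan(voiced_f0)]

print(f"Mean pitch: {voiced_f0.mean():.1f} Hz")
print(f"Pitch standard deviation: {voiced_f0.std():.1f} Hz")
```

A low standard deviation relative to natural speech would be consistent with the flat delivery noted above, though on its own it is not proof of synthesis.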

The other visible oddities include the unnatural movement of the head and neck. Only the upper part of the neck seems to be moving as the mouth moves to speak. The neck area is especially blurred, and an odd movement in the lower part of the neck around the left side is visible each time the head moves. It appears to be a case of a face having been digitally stitched onto a body with poor alignment of the neck. The forehead seems unnaturally smooth, showing no wrinkles even as the eyebrows move. 

Lower part of the neck is static while the upper part can be seen moving

To discern if A.I. was used to manipulate the video we ran it through A.I. detection tools.

The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results which indicated a high probability that the audio track in the video was A.I.-generated.

Screenshot of the analysis from Loccus.ai’s audio detection tool

Hive AI’s deepfake video detection tool flagged markers in numerous frames throughout the video, indicating likely manipulation using A.I. Their audio tool also gave strong indicators of A.I. use in the audio track.

Screenshot of the analysis from Hive AI’s deepfake video detection tool

The deepfake detector of our partner TrueMedia suggested substantial evidence of manipulation in the video. In a breakdown of the overall analysis, their tool gave a 99 percent confidence score to the subcategory of “AI-generated audio detector”, which indicates the probability of the audio being synthetic. The two “audio analysis” subcategories gave contrasting confidence scores of 92 percent and 30 percent. Each of the two subcategories uses a different classifier to analyse whether the audio was produced using an A.I. generator or cloning.

The tool also gave a 99 percent confidence score to “face manipulation detector”, a subcategory which detects the likelihood of A.I. manipulation of faces as in the case of face swaps and face reenactment.

Screenshot of the overall analysis from TrueMedia’s deepfake detection tool
Screenshot of the audio and video analysis from TrueMedia’s deepfake detection tool

For a further analysis of the audio, we also put it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. It returned a result of “very unlikely”, indicating that it was highly unlikely that the audio track featured in the video was generated using their software.

We reached out to ElevenLabs for a comment on the analysis. They told the DAU that they were not able to confirm that the audio was A.I.-generated. They added that they have been actively identifying and blocking attempts to generate prohibited content; however, they could not confirm whether this particular audio was generated on their platform.

To get another expert to weigh in on the audio, we escalated it to our partner IdentifAI, a San Francisco-based deepfake security startup. They used their audio detection software to check the authenticity of the audio.

First, they took two real voice samples of Prabhakaran’s to generate an audio profile, which served as a representation of his real voice. Then, using an A.I. tool, they isolated the voice from the video escalated to them by the DAU and removed the background audio to get a clean sample. They then used a heatmap analysis to compare the generated audio profile with the retrieved audio.

Screenshot of the heatmap analysis from IdentifAI

The image on the left displays the comparison between two real voice samples of Prabhakaran’s. The image on the right draws a comparison between a real voice sample and the audio retrieved from the suspicious video. The audio patterns are similar in the image on the left and differ in the image on the right.
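
IdentifAI’s profiling method is proprietary, but the general shape of such a heatmap comparison can be sketched. The example below uses per-segment MFCC averages as a crude stand-in for a proper speaker embedding, with pairwise cosine similarity plotted as the heatmap; file names and parameters are hypothetical.

```python
# A minimal sketch of a voice-similarity heatmap, illustrating the general
# idea only; IdentifAI's actual profiling method is proprietary, and the
# file names and parameters here are hypothetical.
import librosa
import numpy as np
import matplotlib.pyplot as plt

def segment_embeddings(path, sr=16000, seg_seconds=2.0):
    """Split audio into fixed-length segments and return one MFCC-mean
    vector per segment as a rough voice fingerprint."""
    y, _ = librosa.load(path, sr=sr)
    seg_len = int(seg_seconds * sr)
    segments = [y[i:i + seg_len] for i in range(0, len(y) - seg_len + 1, seg_len)]
    return np.array([librosa.feature.mfcc(y=s, sr=sr, n_mfcc=20).mean(axis=1)
                     for s in segments])

def similarity_matrix(emb_a, emb_b):
    """Pairwise cosine similarity between two sets of segment embeddings."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

ref = segment_embeddings("real_speech_sample.wav")        # known-real recording
suspect = segment_embeddings("isolated_suspect_voice.wav")  # cleaned suspect audio

plt.imshow(similarity_matrix(ref, suspect), cmap="viridis", vmin=-1, vmax=1)
plt.xlabel("Suspect audio segments")
plt.ylabel("Reference audio segments")
plt.colorbar(label="Cosine similarity")
plt.title("Voice similarity heatmap")
plt.show()
```

In this framing, a real-versus-real comparison should produce a uniformly bright heatmap, while a real-versus-suspect comparison that looks patchy or dim points to a different or synthetic voice, matching the pattern IdentifAI describe.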

The team at IdentifAI mentioned that, based on the analysis from their tool and the artefacts visible in the video, it can be established that the voice is not Prabhakaran’s and that the audio is an attempted deepfake.

They did not, however, rule out the possibility of the voice being another person’s voice, or a generated voice not trained on Prabhakaran’s voice samples, synchronised with the video track. They explained that there are very few publicly available high-quality voice samples of Prabhakaran. As a result, there is not enough training data to generate a deepfake voice of the quality that can be heard in the video.

To get another expert to weigh in on the video, we escalated it to the Global Online Deepfake Detection System (GODDS), a detection service set up by Northwestern University’s Security & AI Lab (NSAIL). They used a combination of 22 deepfake detection algorithms and analyses from two human analysts trained to detect deepfakes to review the video escalated by the DAU.

Of the 22 predictive models used to analyse the video, 11 gave a higher probability of the video being fake, while the remaining 11 indicated a lower probability.
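
GODDS has not published how it weighs its individual models, but an even split illustrates why raw model counts are hard to read on their own. The sketch below, with entirely made-up scores, shows how a tied majority vote can coexist with a mean probability that still leans one way.

```python
# A minimal sketch of ensemble aggregation; GODDS's actual weighting is not
# public, and every score below is made up purely for illustration.
import numpy as np

scores = np.array([0.91, 0.88, 0.84, 0.79, 0.76, 0.72, 0.68, 0.63, 0.60,
                   0.57, 0.54, 0.49, 0.46, 0.44, 0.41, 0.38, 0.35, 0.31,
                   0.28, 0.24, 0.19, 0.12])  # hypothetical per-model P(fake)

majority_fake = int((scores > 0.5).sum())  # models leaning "fake"
print(f"Majority vote: {majority_fake} of {len(scores)} models say fake")
print(f"Mean probability of fake: {scores.mean():.2f}")
# A tied vote with a mean above 0.5 is one reason such systems also rely on
# trained human analysts, as GODDS did here, rather than model counts alone.
```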

The team observed inconsistencies in the subject’s head movements, which corroborated our own observations. They also pointed to the lip movements and speech appearing out of sync, and the mouth moving independently from the face in an unnatural manner. The subject’s blinking is infrequent; in one instance, no blinking is noticeable in a 30-second segment.

They added that the video appears blurry and that there seems to be a filter over the media. They noted that the TikTok account used to post the video has posted only this video, and that the account’s profile picture is a known deepfake of Prabhakaran’s daughter. The team concluded that the video is likely fake and created with artificial intelligence.

For further expert analysis, especially on the audio track, we escalated the video to our partner GetRealLabs, a firm co-founded by Dr. Hany Farid that specialises in digital forensics and A.I. detection. They said that the video has characteristics that suggest it may be a “one-shot talking face” or “puppet-master/avatar”. They were referring to generation techniques, based on their observation that from about mid-neck down the subject’s body does not move for the entire video.

They also said that there were A.I. artefacts throughout the audio track. They added that the video appears to have additional treatment, at least a “retro” filter that makes it appear somewhat like an old film. They also used the reverse image search technique and discovered what they said looked like several instances of a younger portrait of this person. They noted that in those images, found on what appear to be sites aligned with the LTTE, the pose and the facial lighting were identical but the backgrounds varied.

On the basis of our findings and analyses from experts, we were able to establish that the video featuring the likeness of the separatist leader is a deepfake. The accompanying audio, while not uttered by the subject, could be an impersonation or a synthetic voice not trained on the subject’s voice samples.

(Written by Debraj Sarkar and Debopriya Bhattacharya, edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.

You can read below the fact-checks related to this piece published by our partners:

Fact Check: Viral video of LTTE Chief V. Prabhakaran is AI-Generated