The Deepfakes Analysis Unit Is Now Global!

April 14, 2025

The Deepfakes Analysis Unit (DAU) has been around for a little over a year, verifying harmful and misleading A.I.-generated audio and video content, primarily from India. For those asking, we have reviewed more than 4,000 video and audio clips sent to our WhatsApp tipline, which serves as a touchpoint for the public.

The DAU grew out of the need to stem the tide of misinformation expected during the 2024 Indian election and beyond. We expanded the scope of our work by creating a separate, dedicated escalation channel for our fact-checking partners. The election cycle taught us many lessons, the most important being the value of collaboration.

The 130-plus pieces of audio and video escalated to us by our partners so far have helped them produce fact-checks with additional context and analysis about the misuse of generative A.I. Their readers and audiences are largely based in India, though in some cases they also cater to other South Asian countries.

Even as the DAU has independently produced 65 assessment reports, fact-checks from our partners come in handy, especially when we respond to misleading content received through the tipline that may not necessarily have any A.I. elements. This symbiotic relationship ultimately creates a multitude of credible sources, sometimes in multiple languages, to correct the public record on fraudulent and false content.

The success and impact of the model that we created for our fact-checking partners convinced us that we could extend our service to fact-checkers outside India. Late last year we invited signatories of IFCN’s Code of Principles from South, Southeast, and Central Asia to share content with us for analysis, which resulted in us addressing some high-profile escalations from the Philippines and Georgia.

Buoyed by the interest from the international fact-checking community, in February we opened up our verification service to all IFCN-certified organisations. So far, we have addressed 13 escalations from international fact-checkers, including those from Japan and Indonesia. We are learning that detection challenges around A.I.-generated misinformation are similar across the globe; only the context and language differ.

The learnings from our work also serve as a feedback loop for some of our forensic and detection partners, informing them how their tools can better serve the needs of the fact-checking ecosystem. We are also able to bring to their attention specific techniques being deployed by bad actors to disrupt detection algorithms, such as the use of mirrored clips or the insertion of random still imagery into a video at frequent intervals.

The A.I.-audio platform ElevenLabs analyses audio content for us. Our escalations in turn help them learn whether their tools have been misused to produce harmful or misleading synthetic audio content. Based on that information, they are also able to take action against users who violate their terms of use.

As we are uniquely focussed on analysing A.I.-generated misinformation, our assessment reports serve as signals for private individuals, tech platforms, and public institutions to gauge which public figures are being targeted through A.I.-generated or manipulated audio and video content. 

Our analysis is not limited to the use of detection tools. Every piece of content is first reviewed by human analysts and then put through a combination of tools for further analysis. Audio content can be especially challenging to assess, even with the best tools available.

We work with language experts where needed, as myriad languages, accents, and dialects require particular attention when it comes to audio detection. Audio or video content shorter than 10 seconds does not usually yield satisfactory results from detection tools.

As we continue to march ahead, we are looking for allies to grow our tribe. If you are an international fact-checker, we would especially love to hear from you. If you’d like us to verify audio or video content that you suspect is A.I.-generated misinformation, send it our way for analysis using this form.
