Using Artificial Intelligence and Machine Learning to Identify Terrorist Content Online

This report from the Tech Against Terrorism Europe project examines the use of artificial intelligence (AI) to identify terrorist content online.

AI tools have become essential given the vast amount of content posted daily.

On average, every minute, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload over 500 hours of new content. The volume of data generated is growing exponentially and is currently estimated at 120 zettabytes a year. Amid this volume, a vast amount of terrorist content is posted across the online ecosystem.

While a degree of automation is already used to detect terrorist content, AI tools have the potential to further improve content moderation. The report, however, contextualises how effective AI is and how it should be used:

“As important as automation is, given the sheer volume of (terrorist) content posted online, both matching-based and classification-based tools have their limitations. Their use must be supplemented by human input, with appropriate oversight mechanisms in place.”
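One common way to combine automation with the human input the report calls for is confidence-based routing: a classifier’s score decides whether content is actioned automatically, sent to a human moderator, or left alone. The sketch below illustrates the pattern only; the thresholds, labels and function name are illustrative assumptions, not values taken from the report or from any platform.

```python
# Hypothetical confidence-based routing: automation plus human oversight.
AUTO_ACTION_THRESHOLD = 0.98   # illustrative value, not from the report
HUMAN_REVIEW_THRESHOLD = 0.60  # illustrative value, not from the report

def route(score: float) -> str:
    """Map a classifier score (0-1 likelihood that content is terrorist
    material) to an action, keeping humans in the loop for uncertain cases."""
    if score >= AUTO_ACTION_THRESHOLD:
        return "remove_and_log_for_audit"  # automated removal, with an oversight trail
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"    # a human moderator makes the call
    return "no_action"
```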

The report found that most automated content-based tools rely either on matching images and videos against a database of known material or on machine learning to classify content. Both approaches have shortcomings, including the difficulty of compiling suitable training data and algorithms that lack cultural sensitivity.
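To make the matching-based approach concrete, here is a minimal sketch of perceptual-hash matching in Python. The average-hash function, the Hamming-distance threshold and the hash database are illustrative assumptions; production systems rely on industrial tools such as PhotoDNA or PDQ and on shared databases far larger than anything shown here.

```python
from PIL import Image  # requires Pillow

HASH_SIZE = 8          # yields a 64-bit hash; chosen for illustration only
MATCH_THRESHOLD = 10   # max Hamming distance treated as a match (assumption)

def average_hash(path: str) -> tuple:
    """Tiny perceptual hash: downscale to greyscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return tuple(int(p > mean) for p in pixels)

def hamming(a: tuple, b: tuple) -> int:
    """Count the bits on which two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def matches_known_content(path: str, known_hashes: list) -> bool:
    """Matching-based detection: compare an upload against a database of
    hashes of previously identified terrorist content."""
    candidate = average_hash(path)
    return any(hamming(candidate, known) <= MATCH_THRESHOLD
               for known in known_hashes)
```

Note the structural limitation this illustrates: matching can only flag content already in the database, whereas classification-based tools score previously unseen content, which is precisely where the training-data and cultural-sensitivity problems identified in the report arise.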

To address this, the report recommends “developing minimum standards for content moderators, promoting AI tools to safeguard moderator wellbeing, and enabling collaboration across the industry.”

The report’s recommendations come as platforms adapt to the EU’s Terrorist Content Online Regulation (2021), which requires hosting service providers to remove terrorist content within one hour of receiving a removal order. While many platforms are expanding automated detection to meet these legal requirements, the report cautions that exclusively automated enforcement risks disproportionate impacts on marginalised groups and activists. It calls for human oversight and appropriate accountability mechanisms.

Tech Against Terrorism Europe

Tech Against Terrorism Europe (TATE) supports tech companies in complying with the EU’s Terrorist Content Online (TCO) Regulation and in countering the terrorist threat. The project works with hosting service providers to ensure understanding of, and compliance with, the Regulation. TATE is a consortium of partners from academia and civil society: Dublin City University (DCU), Ghent University, The JOS Project, LMU Munich, Saher Europe, Swansea University and Tech Against Terrorism.

Disclaimer: Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.
