Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls

This opinion piece describes the two main ways in which automated tools identify terrorist content online. It argues that, while automated content moderation tools are essential, they also have limitations. Human input therefore remains indispensable, and it is critical that human moderators possess the necessary expertise and receive wellbeing support. Collaborative initiatives are also needed to build capacity across the sector, and these should be promoted by governments, international organisations and the largest tech companies.
