Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls
This opinion piece describes the two main ways in which automated tools identify terrorist content online. It argues that, while such tools are essential, they also have limitations. Human input therefore remains necessary, and it is critical that human moderators possess the relevant expertise and receive wellbeing support. Collaborative initiatives to build capacity across the sector are also needed, and should be promoted by governments, international organisations and the largest tech companies.