After the horrific terrorist attacks on mosques in New Zealand, video of the attacks spread across social media in a rash of uploads. Whether posted by people wishing to glorify the criminal acts or by those whose curiosity ran to the morbid, the videos needed to come down, and fast.
YouTube, one of the largest video hosting platforms on the Internet, immediately sprang into action with a system it had developed in response to earlier incidents in which footage of horrific death went viral. However, its executives also knew that their systems were likely to be overwhelmed by the sheer number of people trying to beat them, repackaging or remixing the video to slip it past the algorithms that were supposed to bar reposts of prohibited content.
To combat the sheer volume of reposts, YouTube disabled the search function and allowed its algorithms to block videos outright, rather than merely flagging them for review and leaving the final determination to a human being. Presumably these steps would be used again only during similar emergencies, when particularly shocking images of violence go viral on the 'Net and trying to eliminate them by the usual methods becomes a game of whack-a-mole.
In the days since the attack, YouTube's executives have been reviewing their corporate response, trying to identify the weak points as a first step toward keeping future incidents from growing as big as this one did. However, they're running up against a hard technical problem: present-day AI is nowhere near as powerful as people think it is. It's trivially easy for a technically adept person to alter a video just enough that AI won't flag it as a remix or repackage of one that has already been banned. And AI lacks the ability to make the fine judgments humans make to distinguish material posted to appeal to morbid curiosity from material with legitimate scientific or historical value.
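The fragility described above is easy to demonstrate. Platforms are widely reported to match reposts by perceptual fingerprints rather than exact file hashes, so small compression artifacts don't break a match but a deliberate edit can. The sketch below uses a toy "average hash" on a tiny grayscale frame to illustrate the general idea; it is an assumption about this class of technique, not YouTube's actual pipeline, and all the frame data is made up for illustration.

```python
# A minimal sketch of perceptual "average hash" fingerprinting.
# This is an assumption about the general class of repost-detection
# techniques, not YouTube's actual system. Frames here are tiny
# grayscale grids; real systems downscale full frames to something similar.

def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the frame mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 "frame" from a banned clip (made-up values).
original = [
    [ 10, 200,  30, 220],
    [ 15, 190,  25, 210],
    [240,  20, 230,  35],
    [235,  25, 225,  40],
]

# A straight re-upload after lossy re-encoding: slight pixel noise.
reencoded = [
    [ 12, 197,  33, 218],
    [ 18, 188,  28, 207],
    [242,  17, 233,  31],
    [232,  29, 222,  44],
]

# An adversarial repost: the same frame mirrored horizontally.
mirrored = [list(reversed(row)) for row in original]

MATCH_THRESHOLD = 3  # max differing bits to still count as "the same video"

ref = average_hash(original)
print(hamming(ref, average_hash(reencoded)))  # small: the re-encode is caught
print(hamming(ref, average_hash(mirrored)))   # large: the mirror slips past
```

The asymmetry is the whole problem: the hash is designed to survive accidental distortion (noise, recompression), so anyone who knows roughly how it works can pick a transformation it was never designed to survive, and a human reviewer is needed to catch what the fingerprint misses.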