Moderation feels like one of the most ethical uses of AI: preventing a lot of the worst content from being posted and sparing people from being exposed to it.
> Putting that kind of filter in the way of speech seems ripe for abuse.
On one hand I agree with you. Any automatic filter, once implemented, can later be expanded to cover more and more things, such as messages from political adversaries. It's a slippery slope, as we all know.
On the other hand, I don't think that applies much in this context. If we're talking about content published by an organization (say a newspaper), they already filter all their gathered news themselves and have no obligation to publish things they don't want to.
Similarly, if we're talking about user-uploaded content on social media, I don't think platforms have any obligation to publish everything and anything their users decide to upload either, and users don't expect that anything will be hosted there for them. Users already know that youtube/facebook/tiktok/what-have-you have seemingly arbitrary rules about what content they're willing to host.
Now if, for example, DNS providers or ISPs decide to implement this sort of filter on the web at large, that's a different matter, I think. In that case I agree with you.
I don't think the issue here is related to AI. Without AI, moderators would still have to look at these same videos; the difference is that the videos would reach the public before being flagged and sent to moderators. With AI, they can be prevented from ever going public.
The fact that we still need to traumatize workers to confirm the automated decisions is sad. The only other ways I can see to resolve this would be either to blindly trust the AI's results without any human oversight, or to require all facebook users to link a government ID to their accounts and only allow posting by users in countries where the authorities arrest the people who post these things.
We outsource everything, even the PTSD from training AI, to India, so that privileged law enforcement officers and social media moderators don't have to bear it. This system is so hypocritical and broken.
The agencies already have massive collections of CSAM from every arrest and site seizure. They already have systems that can identify known CSAM by fingerprint or computer vision, so very little needs to be seen by humans, only newly produced material.
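The fingerprint matching mentioned above can be sketched roughly like this. This is a minimal illustration assuming a simple perceptual "average hash" (aHash) over an 8x8 grayscale thumbnail; production systems (PhotoDNA and similar) use far more robust hashes, and all names and values here are hypothetical.

```python
def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (an 8x8 thumbnail).
    Returns a 64-bit fingerprint: bit i is 1 if pixel i is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints collected from prior seizures.
known_hashes = {average_hash([10] * 32 + [200] * 32)}

def is_known(pixels, threshold=5):
    """Flag an upload if its fingerprint is within `threshold` bits of any
    known hash -- tolerant of small edits such as re-compression."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

The point of the Hamming-distance threshold is that the hash degrades gracefully: a lightly edited copy of a known image still lands within a few bits of the stored fingerprint, so it can be blocked automatically without a human ever viewing it.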