Due to the volume of messages and the endless need to maximise profits, companies will accept that 10% of flagged content may be false positives, yet still act against 100% of it. The default, then, is innocent people having action taken against them with no real recourse to clear their name.
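To put rough numbers on that (entirely assumed figures, nothing from the article): a quick sketch of what acting on 100% of flagged content looks like once you tolerate a 10% false-positive rate at scale.

    # Assumed, illustrative numbers only; not any platform's real data.
    flagged_per_day = 1_000_000   # assumed volume of flagged messages per day
    false_positive_rate = 0.10    # the tolerated 10% error rate

    innocent_actions_per_day = int(flagged_per_day * false_positive_rate)
    print(f"{innocent_actions_per_day:,} innocent items actioned per day")  # 100,000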
I also didn't find anything in there about expectations for reducing the number of false negatives (where automation fails to flag suspicious activity). Content control is basically just PR if it ignores the majority of the activity it is designed to police.
Doesn't the system eat itself in the end? Falsely flag 10%, remove those users, falsely flag some more, remove them, and so on until all that's left are people who don't use it or only send pictures of their food?
Well, it doesn't, as long as the (predictable?) self-censorship sets in. After a while users learn not to post pics of vegetable soup if those pics tend to get misclassified.
I, for example, am currently banned from Reddit, after something like 8 years of usage, due to writing a comment mentioning that Reddit administration was corrupt (how's that for irony? I fear the far-right may have been correct about Reddit administration).
In this case it is not a broken system but an actively malicious one, yet the point still stands: if it takes 8 years to get falsely banned, the platform still has plenty of active users at any given time, even high-value ones.
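That matches a quick back-of-the-envelope simulation (all numbers made up for illustration): with steady signups and a small annual false-ban rate, the active user base settles into a large steady state instead of eating itself.

    # Minimal sketch with assumed numbers: steady signups plus a small annual
    # false-ban rate means the active user base converges; it doesn't vanish.

    def simulate(years=20, signups_per_year=1_000_000, false_ban_rate=0.02):
        """Active user count after each year; false_ban_rate is the assumed
        fraction of active users falsely banned per year."""
        active = 0
        history = []
        for _ in range(years):
            active += signups_per_year              # new users join
            active -= int(active * false_ban_rate)  # a small slice is falsely banned
            history.append(active)
        return history

    for year, users in enumerate(simulate(), start=1):
        print(f"year {year:2d}: {users:,} active users")

With these made-up numbers the base trends toward roughly signups * (1 - rate) / rate, about 49 million, so a slow trickle of false bans leaves the platform looking perfectly healthy from the inside, however it looks to the people who got caught.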
Offline? If the system is as automated and un-appealable as the doom posters are saying, their lives are ruined and they're banned from those platforms forever.