Automated vs Human Review
This cluster discusses the balance between automated algorithms (such as ML flagging) and human review in tech company moderation, bans, and decision-making, highlighting false positives, scalability issues, and the need for human oversight.
Sample Comments
You think they're doing this manually? There's probably some human supervision, but this is actually a failure of their algorithms.
I suspect the workaround on the side of the companies doing this is to include human review (or appeals) to ensure the decision is no longer based "solely on automated processing". Even if not intended, a reviewer who sees mostly true positives is very likely to become a blind rubber stamp.
I'm not sure why everyone is so hostile. Your idea has merit, along the lines of a heuristic that triggers a human review as a follow-up. I'd be surprised if this isn't exactly the direction things go, although I don't think the tools will be given away for free; rather, they'll be made part of the platform itself, or perhaps offered as an add-on service.
I'm curious how many human reviews are triggered after ML flags a problem. If it's nearly 100%, why have the ML step at all?
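The triage idea in the two comments above is easy to make concrete. Here is a minimal sketch, assuming a single abuse score in [0, 1]; the `route` function and both thresholds are illustrative inventions, not any platform's actual policy. The point is that the ML step's job is to shrink the queue, so that even 100% human review of flags covers only a sliver of total volume.

```python
# Hypothetical routing heuristic: the model triages, humans decide the
# ambiguous middle band. All names and cutoffs here are made up.

AUTO_ACTION = 0.99   # assumed cutoff: model acts without review
HUMAN_REVIEW = 0.80  # assumed cutoff: queue the item for a person

def route(abuse_score: float) -> str:
    """Route one piece of content based on a model's abuse score (0..1)."""
    if abuse_score >= AUTO_ACTION:
        return "auto_remove"         # confident enough to act alone
    if abuse_score >= HUMAN_REVIEW:
        return "human_review_queue"  # ambiguous: a person decides
    return "no_action"               # the vast majority of traffic ends here

print(route(0.995))  # auto_remove
print(route(0.85))   # human_review_queue
print(route(0.10))   # no_action
```

Under this kind of split, the answer to "why have the ML step at all?" is that reviewers only ever see the middle band; the model silently clears everything else.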
The issue is step 6 - review and action. Every single tech company is replacing manual human review with an AI-based approach. Human-ops, they call it - they don't want their employees doing this harmful work, plus computers are cheaper and better at it. We hear about failures of inhuman ops all the time on HN: people being banned, falsely accused, cancelled, accounts locked, credit denied. All because decisions that were once made by humans are now made by machine. This will happen e…
I suspect there is manual review. But I also suspect that the threshold for triggering that would be pretty high, and a single positive detection would not yet be enough. Sure, false positives happen, but I imagine that repeated false positives would be diminishingly rare.
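A sliding-window counter is one way to implement the "repeated detections" threshold this comment describes. A minimal sketch, assuming a per-account window; `WINDOW_SECS`, `THRESHOLD`, and `record_detection` are hypothetical names and values, not any known platform's policy:

```python
import time
from collections import defaultdict, deque

WINDOW_SECS = 30 * 24 * 3600  # assumed: 30-day sliding window
THRESHOLD = 3                 # assumed: escalate on the third detection

_hits = defaultdict(deque)    # account_id -> timestamps of detections

def record_detection(account_id, now=None):
    """Record one positive detection; return True if review should trigger."""
    now = time.time() if now is None else now
    q = _hits[account_id]
    q.append(now)
    # Drop detections that have aged out of the window.
    while q and now - q[0] > WINDOW_SECS:
        q.popleft()
    return len(q) >= THRESHOLD
```

With this shape, a single false positive returns False and is eventually forgotten; only accounts that keep tripping the detector inside the window ever reach a human, which matches the bet that repeated false positives are diminishingly rare.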
Review by a machine system sounds dystopian and buggy. If it just automatically flagged for human review, though, that sounds reasonable.
Manual moderation doesn't scale. It needs to be automated.
That's why we have the manual human reviewers, you see.
This is by design. You can't expect a free service to apply highly trained human judgement whenever you want. They need it to run fully autonomously. And it does run flawlessly for 99% of users, which is impressive. I wish they offered a paid option for the 1% of cases. Like an arbitration. But that would be a cost center for them, and they don't want it.