Automated vs Human Review

This cluster discusses the balance between automated algorithms (such as ML-based flagging) and human review in tech-company moderation, bans, and decision-making, highlighting false positives, scalability issues, and the need for human oversight.

➡️ Stable 0.5x · AI & Machine Learning
Comments: 2,140
Years Active: 20
Top Authors: 5
Topic ID: #4011

Activity Over Time

2007: 1 · 2008: 3 · 2009: 16 · 2010: 24 · 2011: 40 · 2012: 40 · 2013: 51 · 2014: 48 · 2015: 47 · 2016: 60 · 2017: 99 · 2018: 140 · 2019: 156 · 2020: 164 · 2021: 298 · 2022: 254 · 2023: 228 · 2024: 211 · 2025: 236 · 2026: 26

Keywords

AFAIK AI HN ML youtube.com YouTube WILL automated review human positives false positives manual false reviewers humans flags

Sample Comments

proc0 Feb 28, 2019

You think they're doing this manually? There's probably some human supervision, but this is actually a failure of their algorithms.

tgsovlerkhgsel Jun 1, 2022

I suspect the workaround on the side of the companies doing this is to include human review (or appeals) to ensure the decision is no longer based "solely on automated processing". Even if not intended, a reviewer that sees mostly true positives is very likely to become a blind rubber stamp.

rpdillon Sep 18, 2025

I'm not sure why everyone is so hostile. Your idea has merit, along the lines of a heuristic that you trigger a human review as a follow-up. I'd be surprised if this isn't exactly the direction things go, although I don't think the tools will be given for free, but rather made part of the platform itself, or perhaps as an add-on service.

naasking Sep 14, 2021

I'm curious how many human reviews are triggered after ML flags a problem. If it's nearly 100%, why have the ML step at all?

thinkingemote Aug 18, 2021

the issue is step 6 - review and action. Every single tech company is getting rid of manual human review towards an AI based approach. Human-ops they call it - they dont want their employees to be doing this harmful work, plus computers are cheaper and better at… We hear about failures of inhuman ops all the time on HN. people being banned, falsely accused, cancelled, accounts locked, credit denied. All because the decisions which were once by humans are now made by machine. This will happen…

zokier Mar 24, 2018

I suspect there is manual review. But I also suspect that the threshold for triggering that would be pretty high, and a single positive detection would not yet be enough. Sure, false positives happen, but I imagine that repeated false positives would be diminishingly rare.
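The mechanism zokier guesses at — a single detection is not enough, and manual review only fires after repeated positives — amounts to a per-account strike counter. A minimal sketch, where the strike threshold and function names are purely illustrative assumptions:

```python
from collections import defaultdict

# Assumed policy: escalate to manual review only after repeated
# automated detections, so one-off false positives are absorbed.
REVIEW_AFTER_N_FLAGS = 3

flag_counts: defaultdict[str, int] = defaultdict(int)

def record_flag(account_id: str) -> bool:
    """Record one automated detection against an account.

    Returns True once the account has accumulated enough flags
    that a human should take a look; earlier flags return False.
    """
    flag_counts[account_id] += 1
    return flag_counts[account_id] >= REVIEW_AFTER_N_FLAGS
```

Under this scheme an account flagged once or twice is never surfaced to a reviewer, which is exactly why repeated false positives against the same account would need to be "diminishingly rare" for it to work.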

jammygit Aug 15, 2019

Review by a machine system sounds dystopian and buggy. If it just automatically flagged for human review though, that sounds reasonable
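The heuristic several commenters converge on — automated scoring that acts on its own only at very high confidence and otherwise escalates to a human queue — can be sketched as a simple triage function. The thresholds and labels below are assumptions for illustration, not any platform's actual policy:

```python
# Assumed setup: a classifier emits a confidence score in [0, 1]
# for each piece of flagged content. Thresholds are illustrative.
AUTO_ACTION_THRESHOLD = 0.98   # act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.80  # mid-range scores go to a human queue

def triage(score: float) -> str:
    """Route one flagged item based on model confidence."""
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"   # e.g. remove content, with an appeal path
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queue for a moderator; no action yet
    return "no_action"         # below threshold: leave it alone

# Example: only the mid-confidence item lands in the human queue.
scores = {"item-1": 0.99, "item-2": 0.85, "item-3": 0.40}
decisions = {item: triage(s) for item, s in scores.items()}
```

The design choice here is that the human-review band absorbs exactly the uncertain cases, which addresses both naasking's question (the ML step filters out the bulk of content so reviews are far fewer than 100% of flags) and MattGaiser's scaling concern.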

MattGaiser Aug 11, 2021

Manual moderation doesn't scale. It needs to be automated.

avnigo Aug 26, 2021

That's why we have the manual human reviewers, you see.

rmbyrro Jun 7, 2022

This is by design. You can't expect a free service to have highly trained human judgement whenever you want. They need it to run fully autonomous. And it does run flawlessly for 99% of the users, which is impressive. I wish they offered a paid option in the 1% cases. Like an arbitration. But that would be a cost center for them, and they don't want it.