Algorithmic Racial Bias
The cluster discusses biases in machine learning algorithms, particularly racial discrimination arising from biased training data, proxies for race in features like zip codes, and debates on fairness in applications like criminal justice, hiring, and facial recognition.
Activity Over Time
Top Contributors
Keywords
Sample Comments
I hope you have considered racial bias, e.g.: https://www.nature.com/articles/d41586-019-03228-6 or: https://www.technologyreview.com/2020/07/17&
Aren't algorithms also biased in theory since they are created by people?
Relevant: https://www.technologyreview.com/s/601775/why-we-should-expe... https://www.propublica.org/article/machine-bias-risk-assessm...
It is not racist to point out institutional racial biases when there is evidence for their existence. It is racist to make baseless speculations about an individual person's biases simply because of their ethnicity. It's the difference between pointing out that black Americans are more likely to be involved in violent crime than white Americans (not racist) vs. assuming that a particular individual is going to commit a violent crime simply because they are black (racist). The algorit…
"Algorithm isn't racially discriminatory" is an unusual complaint.
But that loan officer brings their own bias to the scenario. They could just as easily say the Johnsons are unreliable because they are black. It wasn't that long ago that saying that was institutionalized, and I still suspect it occurs. An algorithm is colorblind. This isn't to say an algorithm is necessarily good, but that humans aren't either. One of the reasons bureaucratic red tape exists is as an effort to overcome individual judgement in favor of consistent and fair ju…
Seriously? http://www.nytimes.com/2015/07/10/upshot/when-algorithms-dis... http://spectrum.ieee.org/tech-talk/computing/software/compu
The text does not support your interpretation. He says that data in the United States almost always has a racial bias, and that even when you explicitly remove race from the data, you and your ML model can still inadvertently make predictions that fall along racial lines because of how other data points strongly correlate to race. He then goes on to talk about COMPAS, which was accused of being racially biased even though COMPAS explicitly did not include race as one of its inputs. The author…
That's the problem. If there is ANY bias shown in your results, your algo (actually you) will be accused of prejudice. That is what happened with the Re-Offending Risk Assessment in the linked article. It didn't have race or anything like it. It had suburb.
Sorry, that isn't what I meant. I was trying to refer to the systemic biases typically found in these algorithms (correctly identifying lighter-skinned people over darker-skinned people, algorithms learning to use race as a factor in decision making, etc.)
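The proxy effect several comments above describe (a "race-blind" model still producing racially skewed outcomes because zip code or suburb correlates with race) can be sketched with a toy simulation. Everything here is hypothetical: the two groups, the 90% residential correlation, and the zip-based approval rule are invented for illustration, not drawn from any of the linked articles.

```python
import random

random.seed(0)

# Hypothetical synthetic population: the model is never shown race,
# but zip code correlates strongly with race (a proxy feature).
def make_person():
    race = random.choice(["A", "B"])
    # Assume 90% of group A lives in zip 1, and 90% of group B in zip 2.
    zip_code = 1 if (race == "A") == (random.random() < 0.9) else 2
    return race, zip_code

people = [make_person() for _ in range(10_000)]

# A "colorblind" decision rule that only looks at zip code, standing in
# for a model trained on historical outcomes that differ by neighborhood.
def approve(zip_code):
    return zip_code == 1  # the model learned zip 1 is "low risk"

def approval_rate(group):
    members = [(r, z) for r, z in people if r == group]
    return sum(approve(z) for _, z in members) / len(members)

# Despite race never being an input, approval rates split along
# group lines (roughly 90% for group A vs. roughly 10% for group B).
print(f"group A approval rate: {approval_rate('A'):.2f}")
print(f"group B approval rate: {approval_rate('B'):.2f}")
```

This is the mechanism behind the COMPAS and risk-assessment debates quoted above: dropping the race column is not enough when another feature encodes nearly the same information.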