AI Black Box Problem

This cluster discusses the 'black box' nature of neural networks and deep learning models, debating their interpretability, the explainability of their decisions, and the challenges of and progress toward making AI understandable to humans.

📉 Falling 0.5x · AI & Machine Learning
Comments: 2,735
Years Active: 18
Top Authors: 5
Topic ID: #6693

Activity Over Time (comments per year)

2009: 3 | 2010: 1 | 2011: 7 | 2012: 5 | 2013: 14 | 2014: 13 | 2015: 60 | 2016: 173 | 2017: 229 | 2018: 254 | 2019: 232 | 2020: 309 | 2021: 226 | 2022: 260 | 2023: 375 | 2024: 272 | 2025: 291 | 2026: 11

Keywords

DNN, AI, NN, ML, TLDR, networks.html, DeepLearning, CNN, github.io, darpa.mil, neural, black box, model, black, neural network, models, box, linear, network, neural networks

Sample Comments

harmoat • Jan 12, 2021 • View on HN

Aren't large neural networks already black boxes we don't understand, built by machines we understand?

1MachineElf • Sep 15, 2022 • View on HN

In layman's terms, how would you explain that deep learning isn't a black box?

adastra22 • Dec 1, 2025 • View on HN

It feels like you're blaming the AI engineers here, that they built it this way out of ignorance or something. Look into interpretability research. It is a hard problem!

lsaferite • Mar 14, 2016 • View on HN

Knowing almost nothing about neural nets, is it possible for an NN-based AI to explain its decisions in some manner that we would understand?
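
One family of answers is post-hoc attribution: the network does not produce a verbal explanation, but you can ask which inputs most influenced a particular prediction. Below is a minimal sketch of gradient-based saliency in PyTorch; the model, sizes, and input are placeholders, not anything taken from the comments.

    import torch
    import torch.nn as nn

    # A tiny stand-in classifier; any differentiable model works the same way.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    model.eval()

    x = torch.randn(1, 10, requires_grad=True)  # one input example
    logits = model(x)
    predicted = logits.argmax(dim=1).item()

    # Backpropagate the winning class score to the input: the per-feature
    # gradient magnitude is a crude "which inputs mattered" signal.
    logits[0, predicted].backward()
    saliency = x.grad.abs().squeeze()

    for i, s in enumerate(saliency.tolist()):
        print(f"feature {i}: saliency {s:.4f}")

Whether a heat map of input gradients counts as an explanation "we would understand" is exactly what the comments below go on to argue about.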

jbooth • Jul 12, 2016 • View on HN

What is it with Hacker News commenters and assuming other people don't know things? The comment I was responding to was talking about training sets; I riffed on that. Depending on your definition of "full explanatory power", the model itself might very well not be enough, especially in the case of neural networks. Could you take a set of weights in a 5-deep neural network, look at an input vector, and have any kind of intuition about the output? It could be that there

kuu • Jan 15, 2020 • View on HN

There is work in progress on understanding the contents of a neural network and avoiding this "black box" effect. The goal is still far from reached, but there are some advances. See [0] for an example. Also, remember that AI includes more than NNs. You can use other models, such as a Linear Regression or a Random Forest, which are perfectly explainable.
[0] https://chr
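
The contrast this comment draws is easy to make concrete: a linear model's prediction decomposes exactly into per-feature contributions, so the fitted weights are the explanation. A minimal sketch with scikit-learn, using toy data and hypothetical feature names:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Toy data: the target is a known linear function of the features, so
    # the fitted coefficients should recover the "explanation" exactly.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]

    model = LinearRegression().fit(X, y)

    # Every prediction decomposes as
    #   y_hat = intercept + sum_i coef_i * x_i,
    # which is a complete, human-readable account of the decision.
    for name, coef in zip(["feature_0", "feature_1", "feature_2"], model.coef_):
        print(f"{name}: weight {coef:+.2f}")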

umvi • Apr 13, 2020 • View on HN

Counterexample: Convolutional Neural Networks (CNNs) (or any ML model, really). We still don't fully understand how or why they work so well. If you have software that queries a CNN and the CNN returns some prediction, there's absolutely no way of knowing or understanding the reasoning behind the decision. It's a black box of magical layers and weights that have been "trained" to make good predictions. If you have fraud detection software using ML and it flags somet
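
As a hedge against "absolutely no way": even a fully opaque model can be probed from the outside. A standard model-agnostic trick is occlusion, where you zero out one input at a time and watch how the score moves. A minimal sketch, with a hypothetical stand-in scoring function rather than a real fraud model:

    import numpy as np

    # Model-agnostic occlusion: perturb one feature at a time and record
    # how much the score drops. Works on any black-box scoring function.
    def occlusion_importance(predict, x):
        base = predict(x)
        drops = []
        for i in range(len(x)):
            perturbed = x.copy()
            perturbed[i] = 0.0
            drops.append(base - predict(perturbed))
        return drops

    # Hypothetical scoring function so the sketch runs end to end.
    weights = np.array([0.8, -0.3, 0.1])
    predict = lambda x: float(1.0 / (1.0 + np.exp(-weights @ x)))

    x = np.array([1.0, 2.0, -1.0])
    for i, drop in enumerate(occlusion_importance(predict, x)):
        print(f"feature {i}: score drop {drop:+.4f}")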

Sniffnoy • Apr 9, 2019 • View on HN

Why on earth should we trust (for such things) a model that includes an opaque neural network?

383toast • Dec 27, 2024 • View on HN

Wouldn't the work on interpretability solve these concerns?

bobcostas55 • Mar 19, 2018 • View on HN

If a neural network has, let's say, 50 million parameters, it doesn't matter whether you can trace any of the calculations; you're never going to actually understand how the model operates. Sometimes it's possible to extract a sort of conceptual understanding, e.g. you might say that this layer performs edge detection or whatever. But that's not much of an explanation. They do it because it works. You trust your Uber driver to get you to your destination even th
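
The "this layer performs edge detection" style of claim does have a concrete meaning. A minimal sketch of the kind of filter such a claim refers to, written by hand in PyTorch rather than recovered from trained weights:

    import torch
    import torch.nn.functional as F

    # A hand-written Sobel kernel: the kind of function interpretability
    # work tries to recognize inside a trained CNN's first layer.
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).reshape(1, 1, 3, 3)

    # A toy "image": a vertical bright stripe on a dark background.
    img = torch.zeros(1, 1, 8, 8)
    img[:, :, :, 4] = 1.0

    # Convolving lights up only the stripe's edges, which is exactly what
    # "this filter performs edge detection" means.
    edges = F.conv2d(img, sobel_x, padding=1)
    print(edges.squeeze())

Interpretability research runs in the opposite direction: it starts from tens of millions of learned parameters and tries to recognize structure like this inside them, which is why tracing the calculations alone is not the same as understanding.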