Neural Networks History

Discussions trace the long history of neural networks from the 1950s onward, their cycles of popularity and decline through the AI winters, and the reasons for their recent practical success: hardware, data availability, and incremental advances such as backpropagation and CNNs.

➡️ Stable (0.5x) · AI & Machine Learning
Comments: 2,866 · Years active: 20 · Top authors: 5 · Topic ID: #5665

Activity Over Time (comments per year)

2007: 1     2008: 6     2009: 12    2010: 20    2011: 17
2012: 34    2013: 52    2014: 63    2015: 105   2016: 226
2017: 263   2018: 202   2019: 187   2020: 173   2021: 186
2022: 177   2023: 448   2024: 368   2025: 318   2026: 8

Keywords

McCulloch, neural nets, neural networks, deep learning, CNN, LSTM, A3C

Sample Comments

coldsauce Aug 27, 2018 View on HN

Weren't neural nets popular back then?

Componica Oct 21, 2024 View on HN

My take during that era was that neural nets were considered taboo after the second AI winter of the early 90s. For example, I once proposed to a start-up that they consider a CNN as an alternative to their handcrafted SVM for detecting retina lesions. The CEO scoffed and told me neural networks were dead, only to acknowledge a decade later that he was wrong. Younger people today might not understand, but there was a lot of pushback if you even considered using a neural network during those years. At the time, pe…

seanmcdirmid Sep 3, 2014 View on HN

Parent might be living in the recent past. There was a migration away from NNs in the 90s/early 00s, then Hinton and others brought them back to life... with a vengeance :)

amelius Mar 27, 2019 View on HN

True, but neural networks have existed for a long time; the hardware is what finally made them practical. Recent theoretical advances have been only incremental. CNNs, for example, are evolutionary rather than revolutionary: image recognition used convolutions long before NNs were practical, and CNNs are a straightforward translation of those existing techniques to NNs.
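The "convolutions predate practical NNs" point is easy to see with a classic hand-crafted filter. A minimal numpy sketch (an illustration, not code from the comment; the Sobel kernel is a standard pre-NN edge detector):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the same op a CNN layer applies."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Classic hand-crafted Sobel kernel for vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
print(conv2d(img, sobel_x))  # strong response along the edge columns
```

A CNN layer applies exactly this operation; the difference is that the kernel weights are learned by gradient descent instead of being fixed by hand.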

mikert89 Oct 12, 2025 View on HN

They didn't have the compute or the data to make use of NNs, but theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. They were probably right, and it's a shame they didn't live to see what's happening right now.

FakeRemore Jun 18, 2020 View on HN

The first neural networks appeared in the 50s (with significant limitations), and proper research into them, with appropriate funding, first started in the 80s. NNs didn't become a thing overnight, and neither will this. I'm not sure why you're expecting this thing to be proven now. Science is slow. It takes time to build evidence, "prove" things, and figure out how useful a model is.

HarHarVeryFunny Nov 13, 2025 View on HN

Depends on how far back you are going. There was the whole 1969 Minsky Perceptrons flap, where he said ANNs (i.e. perceptrons) were useless because they can't learn XOR (and no one at the time knew how to train multi-layer ANNs), which stifled ANN research and funding for a while. It would then be almost 20 years until the 1986 PDP handbook published Rumelhart, Hinton, and Williams's rediscovery of backpropagation as a way to train multi-layer ANNs, thereby making them practical. The JEPA parallel is…
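For readers who haven't seen the XOR point concretely: a single-layer perceptron can only draw one line through the input plane, and XOR is not linearly separable, but one hidden layer trained with backpropagation handles it. A minimal numpy sketch (an illustration with arbitrary hidden width and learning rate, not code from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units -- the multi-layer case that lacked a
# general training procedure until backpropagation was popularized.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule on squared error (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```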

galaxyLogic Aug 23, 2025 View on HN

What happened to Neural Networks?

iskander Nov 24, 2012 View on HN

Are you talking about Hinton's "A Fast Learning Algorithm for Deep Belief Nets"? Before that was published, Hinton's lab and their spiritual allies had been training large restricted Boltzmann machines via truncated sampling for decades. And Yann LeCun's convolutional networks (the architecture used in Google's vision project) have also been trained via plain old stochastic gradient descent for decades. As far as I can tell, there hasn't been any single revolutionary breakthrough in this field... we…

danny_codes Dec 10, 2021 View on HN

Google published "Attention Is All You Need" in 2017, which introduced the Transformer deep learning architecture. This is the general framework by which GPT-3 was trained, and it is increasingly replacing most other architectures in NLP tasks. So... I guess I disagree strongly with your premise? Look, it takes some time for these ideas to propagate to non-experts. In 20 years a Hacker News poster will likely post the same comment, referencing the 2017 Transformer paper to enforce the id…
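For context on what that paper actually introduced: its core operation is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch of just that equation (the projection matrices here are random placeholders, not trained weights):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core op of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))          # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (5, 8): each token now mixes information from all five
```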