Neural Network Universality

This cluster focuses on the universal approximation theorem and neural networks' theoretical ability to approximate any function, including nonlinear, arithmetic, and algorithmic ones, while debating the practical limitations and efficiency of doing so.

📉 Falling (0.3x) · AI & Machine Learning
Comments: 3,262
Years Active: 19
Top Authors: 5
Topic ID: #5321

Activity Over Time

Year   Comments
2007   6
2009   7
2010   9
2011   12
2012   24
2013   32
2014   90
2015   133
2016   216
2017   264
2018   247
2019   274
2020   295
2021   254
2022   287
2023   392
2024   483
2025   230
2026   7

Keywords

AFAIK, e.g., NN, github.io, ML, i.e., FP, arxiv.org, MNIST, PAC, neural, neural networks, function, networks, approximation, neural network, universal, linear, theorem, neurons

Sample Comments

Yajirobe · May 16, 2023

Who is to say that brains aren't just regression-based function approximators?

Houshalter · Feb 12, 2015

Neural networks are universal function approximators, not Turing machines. They can theoretically learn any series of "if...then..." functions with enough neurons. But there are a lot of functions they can't represent very efficiently or without absurdly large numbers of neurons and training data.
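
A minimal NumPy sketch of that representational point (my own illustration; the weights are chosen by hand, not learned): a single "if lo < x < hi" case can be hard-wired from three ReLU neurons, so a target made of many such cases needs proportionally many neurons, which is where the efficiency complaint comes from.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def hat(x, lo, hi, height=1.0):
    # Triangular bump built from exactly three ReLU units:
    # 0 outside [lo, hi], rising to `height` at the midpoint.
    mid = 0.5 * (lo + hi)
    s = height / (mid - lo)            # slope of the two ramps
    return s * relu(x - lo) - 2 * s * relu(x - mid) + s * relu(x - hi)

x = np.linspace(0.0, 1.0, 21)
print(np.round(hat(x, 0.4, 0.6), 2))   # nonzero only inside (0.4, 0.6)
```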

vikramkr · Sep 3, 2023

You're looking for the universal approximation theorem. It's one of those cases where they can do anything in theory, so the question is more whether we're chasing a Turing tarpit, where everything is possible but nothing is easy.

mtsolitary · Dec 14, 2023

Is this not a trivial consequence of universality of NNs?

nerdponx · Jul 18, 2017

What do you mean by counterfactuals? NNs are function approximation algorithms, in any geometry. No ifs, ands, or buts about it.

woeirua · Mar 30, 2022

Well, the theory around neural nets strongly suggests that enough nonlinear activation functions combined in the right way should be able to learn any function, including basic arithmetic. Now, whether or not you have the right approach to training the network to get the right set of weights is a different story...
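
As a hedged illustration of that claim (my own minimal sketch; the architecture and hyperparameters are arbitrary choices, not from the comment): a one-hidden-layer ReLU network trained by plain gradient descent can pick up addition on [0, 1]², though nothing here guarantees convergence for harder targets, or that the fit extrapolates outside the training range.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(512, 2))        # inputs (a, b)
y = X.sum(axis=1, keepdims=True)            # target: a + b

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    Z = X @ W1 + b1                         # pre-activations
    H = np.maximum(Z, 0.0)                  # ReLU hidden layer
    pred = H @ W2 + b2
    dpred = 2 * (pred - y) / len(X)         # gradient of mean squared error
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dZ = (dpred @ W2.T) * (Z > 0)           # backprop through ReLU
    dW1 = X.T @ dZ; db1 = dZ.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

test = np.array([[0.2, 0.7]])
print(np.maximum(test @ W1 + b1, 0) @ W2 + b2)  # should be close to 0.9
```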

ska · Mar 14, 2016

Stronger than that: you can think of neural networks as universal function approximators. So this is just a particular function to approximate. See the suggestively named "Universal approximation theorem" for details.
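
For reference, one standard modern statement of the theorem named here (in the non-polynomial-activation form due to Leshno et al., generalizing Cybenko and Hornik; paraphrased, so consult the original papers for the precise hypotheses):

```latex
% Universal approximation: single hidden layer, non-polynomial activation.
\textbf{Theorem.} Let $\sigma : \mathbb{R} \to \mathbb{R}$ be continuous and
non-polynomial, and let $K \subset \mathbb{R}^d$ be compact. Then for every
continuous $f : K \to \mathbb{R}$ and every $\varepsilon > 0$ there exist
$N \in \mathbb{N}$ and parameters $v_i, b_i \in \mathbb{R}$,
$w_i \in \mathbb{R}^d$ such that
\[
  \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} v_i\, \sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon.
\]
```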

argonaut · Dec 30, 2016

You have to be a little careful here. Neural networks are not "very general computation techniques." A dot product and a rectified linear function (or some other function of choice) are not "general computation techniques" in the sense you seem to use. They are a very specific set of operations. And the fact that you can show that two layers of these operations form a universal approximator is a red herring: decision trees and k-nearest neighbors are also universal approximators.
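
To make that point concrete, here is a throwaway sketch (my own illustration, not argonaut's): 1-nearest-neighbor regression, about the simplest method imaginable, also approximates any continuous function arbitrarily well as the training data gets dense, so universality by itself says little.

```python
import numpy as np

def nn1_predict(x_train, y_train, x_query):
    # Copy the label of the nearest training point (1-nearest-neighbor).
    idx = np.abs(x_train[None, :] - x_query[:, None]).argmin(axis=1)
    return y_train[idx]

f = lambda x: np.sin(2 * np.pi * x)      # an arbitrary continuous target
x_train = np.linspace(0.0, 1.0, 1000)    # denser samples -> better fit
y_train = f(x_train)

x_query = np.random.default_rng(0).uniform(0, 1, 200)
max_err = np.abs(nn1_predict(x_train, y_train, x_query) - f(x_query)).max()
print(max_err)                           # shrinks toward 0 as the grid densifies
```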

bjourne · May 23, 2023

You might be interested in the universal approximation theorem.

mholt · Sep 1, 2017

Neural networks are function approximators. So if you 1) know an algorithm that is really computationally complex but not highly random and 2) have a lot of inputs and outputs of that algorithm, you can usually train a neural network to approximate a closed-form formula of the algorithm. It boils down to a bunch of matrix-multiplies and some standard non-linear functions in between.
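
A minimal sketch of the structure described here (the weights are random placeholders standing in for trained values): the forward pass really is nothing but matrix multiplies with fixed nonlinear functions between them.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three layers of (weight matrix, bias): 8 -> 16 -> 16 -> 1.
layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 16)), np.zeros(16)),
          (rng.normal(size=(16, 1)), np.zeros(1))]

def forward(x, layers):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b                   # matrix multiply plus bias
        if i < len(layers) - 1:
            x = np.tanh(x)              # standard nonlinearity in between
    return x

print(forward(rng.normal(size=(4, 8)), layers).shape)  # (4, 1)
```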