Neural Network Universality
This cluster focuses on the universal approximation theorem and neural networks' theoretical ability to approximate any function, including nonlinear, arithmetic, and algorithmic ones, while debating their practical limitations and efficiency.
Sample Comments
Who is to say that brains aren't just regression-based function approximators?
Neural networks are universal function approximators, not Turing machines. They can theoretically learn any series of "if...then..." functions with enough neurons. But there are a lot of functions they can't represent very efficiently or without absurdly large numbers of neurons and training data.
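To make the "if...then..." point concrete, here is a minimal numpy sketch (the function names and the steepness parameter k are illustrative, not from the comment) showing how two ReLU units approximate a step, the basic building block of conditional behavior:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def soft_step(x, k=100.0):
    # relu(k*x) - relu(k*x - 1) is 0 for x <= 0, rises linearly on
    # (0, 1/k), and is exactly 1 for x >= 1/k -- a sharp "if x > 0".
    return relu(k * x) - relu(k * x - 1.0)

xs = np.array([-0.5, -0.001, 0.001, 0.5])
print(soft_step(xs))  # -> [0.  0.  0.1 1. ] with k=100
```

The ramp between 0 and 1 has width 1/k, which is one way to see the efficiency caveat: a sharper "if" needs larger weights, and covering many conditions needs many such units.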
You're looking for the universal approximation theorem. It's one of those cases where they can do anything in theory, so the question is more whether we're chasing a Turing tarpit, where everything is possible but nothing is easy.
Is this not a trivial consequence of the universality of NNs?
What do you mean by counterfactuals? NNs are function approximation algorithms, in any geometry. No ifs, ands, or buts about it.
Well, the theory around neural nets strongly suggests that enough nonlinear activation functions combined in the right way should be able to learn any function, including basic arithmetic. Now, whether or not you have the right approach to training the network to get the right set of weights is a different story...
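As a sketch of that training caveat, here is a one-hidden-layer net trained on a basic arithmetic task (addition) with plain gradient descent; the layer sizes, learning rate, and step count are illustrative guesses, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn f(a, b) = a + b from examples.
X = rng.uniform(-1.0, 1.0, size=(1024, 2))
y = X.sum(axis=1, keepdims=True)

# One hidden layer of tanh units.
W1 = rng.normal(0.0, 0.5, size=(2, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros((1, 1))

lr = 0.1
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of squared error wrt pred
    # Backward pass, then a plain gradient-descent update.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1.0 - h**2)    # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Should print roughly [[0.7]] if training converged.
print(np.tanh(np.array([[0.3, 0.4]]) @ W1 + b1) @ W2 + b2)
```

Whether the same recipe finds good weights for a harder arithmetic task is exactly the "training is a different story" part of the comment.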
Stronger than that: you can think of neural networks as universal function approximators, so this is just a particular function to approximate. See the suggestively named "Universal approximation theorem" for details.
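For reference, a standard statement of that theorem (the non-polynomial-activation form due to Leshno et al.; the notation below is conventional, not quoted from the thread):

```latex
% Universal approximation, one hidden layer: any continuous function on a
% compact set can be approximated uniformly by a wide enough network.
\begin{theorem}[Universal approximation]
Let $\sigma\colon\mathbb{R}\to\mathbb{R}$ be a continuous, non-polynomial
activation function. For every continuous $f\colon K\to\mathbb{R}$ on a
compact set $K\subset\mathbb{R}^n$ and every $\varepsilon>0$, there exist
$N\in\mathbb{N}$, weights $w_i\in\mathbb{R}^n$, and scalars
$v_i,b_i\in\mathbb{R}$ such that
\[
  \sup_{x\in K}\,\Bigl|\,f(x)-\sum_{i=1}^{N} v_i\,\sigma(w_i^{\top}x+b_i)\Bigr|
  < \varepsilon .
\]
\end{theorem}
```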
You have to be a little careful here. Neural networks are not "very general computation techniques." A dot product and a rectified linear function (or some other function of choice) are not "general computation techniques" in the sense you seem to use; they are a very specific set of operations. And the fact that two layers of these operations form a universal approximator is a red herring: decision trees and k-nearest neighbors are also universal approximators.
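The k-nearest-neighbors claim is easy to check empirically. A small sketch, assuming scikit-learn is available (the target sin(x) and the sample size are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# 1-NN simply memorizes the sample; as the sample gets denser, its
# prediction converges to any continuous target, here sin(x) on [0, 2*pi].
X = np.random.default_rng(0).uniform(0, 2 * np.pi, size=(5000, 1))
y = np.sin(X).ravel()

knn = KNeighborsRegressor(n_neighbors=1).fit(X, y)

grid = np.linspace(0, 2 * np.pi, 9).reshape(-1, 1)
print(np.max(np.abs(knn.predict(grid) - np.sin(grid).ravel())))  # small
```

A memorizer with no notion of "computation" still satisfies the universal approximation property, which is the commenter's point about it being a red herring.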
You might be interested in the universal approximation theorem.
Neural networks are function approximators. So if you 1) know an algorithm that is really computationally complex but not highly random and 2) have a lot of inputs and outputs of that algorithm, you can usually train a neural network as a closed-form approximation of that algorithm. It boils down to a bunch of matrix multiplies with some standard nonlinear functions in between.
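A minimal sketch of that "matrix multiplies with nonlinearities in between" structure (the layer sizes and names here are made up for illustration):

```python
import numpy as np

def mlp_forward(x, layers):
    """A feed-forward net is alternating matrix multiplies and elementwise
    nonlinearities; `layers` is a list of (W, b) weight/bias pairs."""
    for W, b in layers[:-1]:
        x = np.maximum(0.0, x @ W + b)  # ReLU between layers
    W, b = layers[-1]
    return x @ W + b                    # linear output layer

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 32)), np.zeros(32)),
          (rng.normal(size=(32, 32)), np.zeros(32)),
          (rng.normal(size=(32, 1)), np.zeros(1))]
print(mlp_forward(rng.normal(size=(8, 4)), layers).shape)  # (8, 1)
```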