AGI Feasibility Debate
This cluster debates whether artificial general intelligence (AGI), that is, human-level intelligence in computers, is achievable, asking whether human cognition is computable on a Turing machine and whether the binding constraint is brain complexity, hardware, or software.
(Panels: Activity Over Time · Top Contributors · Keywords)
Sample Comments
The fact that our current technology (neural networks, genetic algorithms, etc.) is fundamentally different from how the human brain works. It does not have a capacity for abstract thought, emotion, creativity, etc., which are all essential to human intelligence. These systems can do specialized tasks, but we are many years away from general intelligence, and that will not come just by investing more in our current technology. If we "invent" general intelligence, it will look very different from …
We've yet to build a single machine that is intellectually capable beyond our own understanding.
How about this one: computational complexity doesn't imply AGI is impossible; it implies that human intelligence isn't all that wonderfully miraculous.
Er… how long till humans show general intelligence without any apparent failures?
I don’t think anyone has demonstrated that AGI is a foregone conclusion. I’m not sure it is possible with a Turing machine. We do not think in any manner like a Turing machine or any computer ever conceived, and if we do, no one has provided any evidence for that claim. Humans can make complex insights with hardly any training and on very few calories.
Look, if evolution, which is nothing more than a drawn-out process of trial and error, can give rise to intelligence, then perhaps creating intelligence is easy. The major roadblock standing in our way is a lack of computing power. The human mind still vastly outstrips our fastest computers. Of course, there will be technical challenges in creating machines with human-level intelligence, but once we have super-fast computers I think it will be possible.
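To make this comment's "trial and error" framing concrete, here is a minimal sketch of a toy genetic algorithm in Python. Everything in it (the target pattern, population size, mutation rate) is illustrative rather than taken from the comment; it simply shows blind variation plus selection converging on a goal.

# A toy genetic algorithm: evolve a random bitstring toward a fixed target
# purely by selection, crossover, and mutation. Parameters are illustrative.
import random

TARGET = [1] * 32                      # the "environment" rewards matching this pattern
POP_SIZE, MUTATION_RATE = 50, 0.02

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
generation = 0
while max(fitness(g) for g in population) < len(TARGET):
    # Selection: keep the fitter half, then breed back up to full size.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    generation += 1

print(f"reached the target after {generation} generations")

On a typical run this reaches the all-ones target within a few hundred generations, which is the comment's point in miniature: undirected search works, it just takes many iterations.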
We appear to exist in a mechanical universe of laws. There's really no way it's possible both that humans exist and that we can't replicate or approximate enough of the human brain to match or better human reasoning. If the human part bothers you, substitute animals instead; they do much of what we need, to various, usually somewhat lesser, levels. This line of thinking is reinforced by the results we've had so far. Why should we assume there's a limit if …
Strong AI is not a hardware problem. It's not a matter of lack of computational power; it is a software and modeling problem. If you had an AI algorithm and a model, you could still run it on any Turing machine. It would just take a lot longer (perhaps years or decades or more) to compute a single thought on current hardware, instead of real time or faster than real time on some super-fast future hardware. There are people (like Roger Penrose) who argued that intelligence and consciousness are …
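The universality claim in this comment, that any algorithm runs on any Turing machine only more slowly, can be illustrated with a toy single-tape simulator. The simulator and the binary-increment transition table below are a hypothetical sketch, not anything from the thread:

# A minimal single-tape Turing machine simulator. Each transition maps
# (state, symbol) -> (next_state, symbol_to_write, head_move).
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape, head, steps = dict(enumerate(tape)), 0, 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank), steps

# Increment a binary number (most-significant bit on the left): walk to the
# end of the input, then carry 1s to 0s until a 0 or blank becomes 1.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

result, steps = run_turing_machine("1011", INCREMENT)
print(result, f"({steps} primitive steps for one increment)")  # 1100 (8 steps)

Even incrementing a four-bit number costs eight primitive steps here; scaled up to "a single thought", that is exactly the slowdown the comment has in mind.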
I don’t think AGI is necessarily impossible, but I’m not convinced that it’s possible to achieve in a way that gets around the constraints of human intelligence. The singularity idea is basically the assumption that AGI will scale the same way computers have scaled over the past several decades, but if AGI turns out to require special hardware and years of training the same way we do, it’s not obvious that it’s going to be anything more remarkable than we are.
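For a rough sense of the scaling assumption this comment questions, here is an idealized Moore's-law-style doubling calculation; the 40-year horizon and 2-year doubling period are assumptions for illustration, not data from the cluster:

# Idealized exponential scaling: capacity doubling every ~2 years.
years, doubling_period = 40, 2
growth = 2 ** (years / doubling_period)
print(f"~{growth:,.0f}x over {years} years")   # ~1,048,576x

If AGI instead requires special hardware and years of training, that million-fold free ride is precisely what is in doubt.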
It's not hard to tell that it has not produced human-level intelligence, so the process outlined by naasking has not yet run into an insurmountable problem.