AGI Feasibility Debate

This cluster debates whether artificial general intelligence (AGI), or human-level machine intelligence, is achievable, questioning whether human cognition is computable on Turing machines and whether the main obstacles are brain complexity, hardware limits, or software and modeling problems.

Trend: ➡️ Stable (0.6x)
Category: AI & Machine Learning
Comments: 4,785
Years Active: 20
Top Authors: 5
Topic ID: #8788

Activity Over Time

Year   Comments
2007      4
2008     41
2009     67
2010     60
2011     93
2012     84
2013     86
2014    185
2015    288
2016    325
2017    360
2018    256
2019    272
2020    159
2021    232
2022    344
2023    751
2024    549
2025    607
2026     22

Keywords

RAM, AI, CPU, AGI, IMO, ttapress.com, GAI, i.e., PREDICTION, NOBODY, intelligence, human, human level ai, agi, brain, human intelligence, level, computational, impossible

Sample Comments

nv-vn May 18, 2017 View on HN

The fact that our current technology (neural networks, genetic algorithms, etc.) is fundamentally different from how the human brain works. It does not have a capacity for abstract thought, emotion, creativity, etc., which are all essential to human intelligence. They can do specialized tasks, but we are many years away from general intelligence, and that will not come just by investing more in our current technology. If we "invent" general intelligence, it will look very different from…

ngngngng Aug 25, 2023 View on HN

We've yet to build a single machine that is intellectually capable beyond our own understanding.

mcguire Feb 22, 2021 View on HN

How about this one: computational complexity doesn't imply AGI is impossible, it implies that human intelligence isn't all that wonderfully miraculous.

marzetti Feb 5, 2023 View on HN

er.. how long till humans show general intelligence without any apparent failures..?

dinkumthinkum Dec 29, 2022 View on HN

I don’t think anyone has demonstrated AGI is a foregone conclusion. I’m not sure it is possible with a Turing machine. We do not think in any manner like a Turing machine or any computer ever conceived. If we do, no one has provided any evidence of such a claim. Humans can make complex insights with hardly any training and on very few calories.

jostmey Nov 22, 2014 View on HN

Look, if Evolution, which is nothing more than a drawn-out process of trial and error, can give rise to intelligence, then perhaps creating intelligence is easy. The major roadblock standing in our way is a lack of computing power. The human mind still vastly outstrips our fastest computers. Of course, there will be technical challenges in creating machines with human-level intelligence, but once we have super fast computers I think it will be possible.
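The "trial and error" framing in this comment maps naturally onto evolutionary search. As a minimal illustration only (nothing from the thread, and with an arbitrary target, population size, and mutation rate chosen for the example), here is a hedged Python sketch of a toy genetic algorithm evolving a bit string:

import random

TARGET = [1] * 32          # toy goal: an all-ones bit string
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]   # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(generation, fitness(population[0]))

The point of the sketch is only that undirected variation plus selection does eventually hit the target; whether that says anything about intelligence is exactly what the thread disputes.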

kadoban Sep 1, 2020 View on HN

We appear to exist in a mechanical universe of laws. There's really no way it's possible both that humans exist and that we can't replicate or approximate enough parts of the human brain to match or better human reasoning. If the human part bothers you, substitute animals instead. They do much of what we need to various, usually somewhat lesser levels. This line of thinking is reinforced by the results we've had so far. Why should we assume there's a limit if…

super_mario Oct 20, 2011 View on HN

Strong AI is not a hardware problem. It's not a matter of lack of computational power. It is a software and modeling problem. If you had an AI algorithm and a model, you could still run it on any Turing machine. It would just take a lot longer (perhaps years or decades or more) to compute a single thought on current hardware instead of real time or faster than real time on some super fast future hardware. There are people (like Roger Penrose) who argued that intelligence and consciousness are…
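The universality claim in this comment (that a given algorithm can run on any Turing-equivalent machine, just more slowly) is easy to make concrete. Below is a minimal Python sketch of a Turing machine simulator; the transition table is an illustrative example I chose (a unary incrementer), not anything from the thread, and is only meant to show ordinary hardware emulating the model step by step:

# Minimal Turing machine simulator: (state, symbol) -> (new symbol, move, new state).
# Example rules: append a '1' to a unary number, then halt.
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # walk right over the existing 1s
    ("scan", "_"): ("1", +1, "halt"),   # write one more 1 at the end, then halt
}

def run(tape, state="scan", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = RULES[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

print(run("111"))   # -> "1111": the simulated machine incremented the unary input

Simulation like this costs a large constant (or worse) slowdown, which is the comment's point: universality settles what is computable, not how fast, so the open question is the software and the model, not the substrate.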

philwelch Apr 1, 2022 View on HN

I don’t think AGI is necessarily impossible, but I’m not convinced that it’s possible to achieve in a way that gets around the constraints of human intelligence. The singularity idea is basically the assumption that AGI will scale the same way computers have scaled over the past several decades, but if AGI turns out to require special hardware and years of training the same way we do, it’s not obvious that it’s going to be anything more remarkable than we are.

mannykannot Jan 8, 2023 View on HN

It's not hard to tell that it has not produced human-level intelligence, so the process outlined by naasking has not yet run into an insurmountable problem.