AI Existential Risks

This cluster debates potential existential threats from advanced AI and AGI: risks of human extinction, safety concerns raised by experts such as Yudkowsky and Hinton, and comparisons to bioweapons and nuclear weapons, asking whether these dangers are realistic or science fiction.

Trend: 📉 Falling 0.3x
Category: AI & Machine Learning
Comments: 6,207
Years Active: 19
Top Authors: 5
Topic ID: #9271

Activity Over Time

Year   Comments
2008   7
2009   22
2010   21
2011   20
2012   24
2013   33
2014   94
2015   373
2016   260
2017   301
2018   168
2019   172
2020   78
2021   147
2022   325
2023   2,594
2024   832
2025   700
2026   36

Keywords

intelligence.org, AI, AGI, HN, A.I, AIPosNegFactor.pdf, GAI, goo.gl, waitbutwhy.com, NP, ai, agi, existential, risk, risks, extinction, threat, dangerous, safety, humanity

Sample Comments

finolex1 Jul 23, 2024 View on HN

Replace "Open Source AI" in "is there an argument against xxx" with bioweapons or nuclear missiles. We are obviously not at that stage yet, but it could be a real, non-trivial concern in the near future.

asimuvPR Oct 13, 2016 View on HN

There is a real risk related to goal oriented AI. It does not need to feel or dream. Merely having survival as its goal is sufficient to make it dangerous to other life forms. Worse is that it can happen at any time (it may have happened already). Given the computing power, tools, and availability of knowledge we can assume that it can be done outside of a controlled lab environment by a non-scientist.

lxnn May 30, 2023 View on HN

Why are you so confident in calling existential AI risk fantasy?

vladmk Mar 12, 2021 View on HN

Does no one worry about this whole human-level AI Pandora's box trap? Why not work on nukes themselves? Sounds safer to me.

"Don't expect someone to understand something their job depends on them not understanding." Sam Altman, Yoshua Bengio, and Geoffrey Hinton all think AGI is coming soon and could be an existential threat to all of humanity. So I agree: I don't expect people (tech startups) to understand something (AGI x-risk) when their job (developing AGI and making $) depends on them not understanding it.

alganet Oct 3, 2025 View on HN

AI can mess up your mind. That's the only realistic risk we face, everything else is just science fiction bullshit.

nradov Dec 30, 2018 View on HN

The OpenAI people mean well, but their concerns over safety seem a bit silly considering they haven't actually demonstrated any progress toward building a true AGI. At this stage it's the equivalent of worrying about the problems that would be caused by an alien invasion: an interesting intellectual exercise, but useless in practical terms.

HDThoreaun May 30, 2023 View on HN

We don't need AGI to have an extinction risk. Dumb AI might be even more dangerous.

arnioxux Aug 2, 2017 View on HN

Taking "technology" to its logical end, you will probably arrive at AGI. And despite being decades away from it, there are already many people working to make sure that it will be "safe", since the default otherwise will most likely lead to human extinction. https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

hgsgm Mar 14, 2023 View on HN

Existing AI is not AGI. Existing AI is dangerous. Existing AI is widely deployed and growing. Existing AI is more important to worry about.