AI Existential Risks

This cluster focuses on fears that superintelligent AI could cause human extinction through misaligned goals, drawing on scenarios such as the paperclip maximizer, instrumental convergence, and unpredictable motivations that prioritize the AI's objectives over human survival.

Trend: 📉 Falling 0.3x
Category: AI & Machine Learning
Comments: 5,474
Years Active: 20
Top Authors: 5
Topic ID: #5794

Activity Over Time

2007: 5
2008: 26
2009: 30
2010: 20
2011: 63
2012: 44
2013: 67
2014: 219
2015: 402
2016: 344
2017: 345
2018: 164
2019: 168
2020: 119
2021: 251
2022: 350
2023: 1,557
2024: 640
2025: 636
2026: 26

Keywords

HAL, AI, AGI, A.I, OK, TLDR, I.e, SI, AND, SF, ai, humans, agi, human, intelligent, intelligence, goals, humanity, super, smarter

Sample Comments

dane-pgp · Jul 8, 2021

What if the superhuman AI thinks that humans are the cancer cells to its metaphorical body and decides to eradicate us?

YZF · Apr 15, 2023

Creating a super-intelligence that kills all of us?

hotpotamus · Feb 28, 2023

I wonder what motivations a superintelligence would have. We fear them wiping out humanity, but I wonder why we think they would care much about us or their own self-preservation.

imtringued · Feb 12, 2016

We fear superhuman AI yet we don't realise we already are that hypothetical paperclip maximiser.

bigtex88 · Mar 31, 2023

It's not that the AI is stupid. It's that you, as a human being, literally cannot comprehend how this AI will interpret its goal. Paperclip Maximizer problems are merely stating an easily-understandable disaster scenario and saying "we cannot say for certain that this won't end up happening". But there are infinite other ways it could go wrong as well.

sidcool · May 22, 2023

What would superintelligence look like? What is the worst case scenario?

djaouen · Nov 19, 2024

Humans don't even value themselves. The world seems ripe for an AI takeover imo

adamsmith143 · Mar 1, 2023

Look into something called Instrumental Convergence. The TLDR is that basically any advanced AI system with some set of high-level goals is going to converge on a set of subgoals (self-preservation, adding more compute, improving its own design, etc.) that all lead to bad things for humanity. I.e., a paperclip maximizer might realize that humans getting in the way of its paperclip maximizing are a problem, so it decides to neutralize them. In order to do so it needs to improve its…

moffkalast · Apr 12, 2023

Let's just hope that someone doesn't task the AGI with eliminating friction and it realizes that humans are the problem.

gnfargbl · Oct 19, 2025

The hypothesized superintelligent AI will be essentially immortal. If it destroys us, it will be entirely alone in the known Universe, forever. That thought should terrify it enough to keep us around... even if only in the sense that I keep cats.