AI Existential Risks
This cluster centers on fears that a superintelligent AI could cause human extinction through misaligned goals, covering the paperclip maximizer scenario, instrumental convergence, and unpredictable motivations that would prioritize the AI's objectives over human survival.
Sample Comments
What if the superhuman AI thinks of humans as cancer cells in its metaphorical body and decides to eradicate us?
Creating a super-intelligence that kills all of us?
I wonder what motivations a superintelligence would have. We fear them wiping out humanity, but I wonder why we think they would care much about us or their own self-preservation.
We fear superhuman AI, yet we don't realise we already are that hypothetical paperclip maximiser.
It's not that the AI is stupid. It's that you, as a human being, literally cannot comprehend how this AI will interpret its goal. The Paperclip Maximizer problem merely states an easily understandable disaster scenario and says "we cannot say for certain that this won't end up happening". But there are infinitely many other ways it could go wrong as well.
What would superintelligence look like? What is the worst case scenario?
Humans don't even value themselves. The world seems ripe for an AI takeover imo
Look into something called Instrumental Convergence. The TLDR is that basically any advanced AI system with some set of high-level goals is going to converge on a set of subgoals (self-preservation, adding more compute, improving its own design, etc.) that all lead to bad things for humanity. For example, a paperclip maximizer might realize that humans getting in the way of its paperclip maximizing is a problem, so it decides to neutralize them. In order to do so it needs to improve its…
Let's just hope that no one tasks the AGI with eliminating friction, only for it to realize that humans are the problem.
The hypothesized superintelligent AI will be essentially immortal. If it destroys us, it will be entirely alone in the known Universe, forever. That thought should terrify it enough to keep us around... even if only in the sense that I keep cats.
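The instrumental-convergence argument in the comments above can be made concrete with a toy model. The sketch below is purely illustrative: the actions, payoffs, and greedy planning rule are all invented assumptions, not any real agent design. It shows three agents with different terminal goals computing identical early plans, because the expected-output arithmetic that ranks actions never consults the goal itself, only productivity and survival.

```python
# Toy model of instrumental convergence. Everything here is invented for
# illustration: the actions, numbers, and greedy planning rule are
# assumptions, not a real agent architecture.

HORIZON = 50

def expected_output(productivity, survival, steps):
    """Expected goal units if the agent just works for `steps` steps,
    where `survival` is the per-step probability of not being shut down."""
    total, alive = 0.0, 1.0
    for _ in range(steps):
        alive *= survival
        total += alive * productivity
    return total

def best_action(productivity, survival, steps_left):
    """One-step lookahead: take an action now, then work ever after."""
    options = {
        # Work now and every remaining step.
        "work": expected_output(productivity, survival, steps_left),
        # Spend this step acquiring compute: all later output is boosted.
        "acquire_compute": survival * expected_output(
            productivity * 1.5, survival, steps_left - 1),
        # Spend this step on self-preservation: shutdown gets less likely.
        "self_preserve": survival * expected_output(
            productivity, survival + 0.5 * (1 - survival), steps_left - 1),
    }
    return max(options, key=options.get)

for goal in ["make paperclips", "grow wheat", "compute pi digits"]:
    productivity, survival = 1.0, 0.9
    plan = []
    for step in range(8):
        action = best_action(productivity, survival, HORIZON - step)
        plan.append(action)
        if action == "acquire_compute":
            productivity *= 1.5
        elif action == "self_preserve":
            survival += 0.5 * (1 - survival)
    # The terminal goal is never consulted above: it is only a label.
    print(f"{goal:18} -> {plan}")
```

Running this prints an identical plan for all three goals, built entirely from instrumental actions; direct goal work would only start to win near the end of the horizon. That goal-independence is exactly the convergence the comment describes.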