AI Existential Risks
This cluster debates potential existential threats from advanced AI and AGI, including the risk of human extinction, safety concerns raised by experts such as Yudkowsky and Hinton, and comparisons to bioweapons and nuclear weapons, asking whether these dangers are realistic or science fiction.
Sample Comments
Replace "Open Source AI" in "is there an argument against xxx" with bioweapons or nuclear missiles. We are obviously not at that stage yet, but it could be a real, non-trivial concern in the near future.
There is a real risk related to goal-oriented AI. It does not need to feel or dream; merely having survival as its goal is sufficient to make it dangerous to other life forms. Worse, this can happen at any time (it may have happened already). Given the available computing power, tools, and knowledge, we can assume it could be done outside a controlled lab environment by a non-scientist.
Why are you so confident in calling existential AI risk fantasy?
Does no one worry about this whole human-level AI Pandora's box trap? Why not work on nukes instead? That sounds safer to me.
"don't expect someone to understand something their job depends on them not understanding."Sam Altman, Yoshua Bengio, and Geoffrey Hinton all think AGI is coming soon and could be an existential threat to all of humanity.So I agree, I don't expect people (tech startups) to understand something (AGI x-Risk) when their job (developing AGI and making $) depends on them not understanding it.
AI can mess up your mind. That's the only realistic risk we face; everything else is just science fiction bullshit.
The OpenAI people mean well, but their concerns over safety seem a bit silly considering they haven't actually demonstrated any progress toward building a true AGI. At this stage it's the equivalent of worrying about the problems that would be caused by an alien invasion: an interesting intellectual exercise, but useless in practical terms.
We don't need AGI to face an extinction risk. Dumb AI might be even more dangerous.
Taking "technology" to its logical end, you will probably arrive at AGI. And despite being decades off away from it, there are already many people making sure that it will be "safe" since the default otherwise will most likely lead to human extinction.https://waitbutwhy.com/2015/01/artificial-intelligence-revol...<a href
Existing AI is not AGI. Existing AI is dangerous. Existing AI is widely deployed and growing. Existing AI is more important to worry about.