AI Alignment Problem
The cluster discusses the AI alignment problem: its definition, its feasibility, the challenges of aligning superintelligent AI or AGI with human values and goals, and related issues such as human misalignment.
Sample Comments
Alignment to _what_? Humans aren't aligned without AI, what exactly will AI be aligned to?
Alignment is more about the AI doing what you want and not good or evil. Probably not a good idea to reach a premature conclusion like "this is complete nonsense" before understanding the basics.
Who asked for this? This really seems antithetical to AI/human goal alignment.
Yes, that is the big alignment question. Nobody really knows. Opinions differ greatly on this topic, but most have a significant level of concern. We can't say that AI would necessarily be driven by selfish motives. However, that actually isn't the main concern. It is the fact that we cannot perceive the means by which it may execute any task. It is like all the old tales of a genie of the lamp that grants wishes. However, they don't turn out as you expect. For example, ask t
Read up about the "alignment problem". It is one that AI researchers cannot crack, and it's literally what you wrote: the idea that we could never, in any way, define goals that align with us.
This is the AI alignment problem.
People should read this when they think about AI "Alignment". Can't even have a singular aligned person with full confidence.
Let's focus on alignment before focusing on AGI. Let's not be goobers and drive ourselves extinct.
How are OpenAI expected to align a hyper-intelligent entity if they can't even align themselves...
Can you please elaborate further on "AI/human goal alignment"?