AI Accountability
The cluster discusses accountability for AI decisions and errors, emphasizing that only humans can be held responsible and AI should not dilute or evade liability.
Sample Comments
You don't have to use AI. Regardless, only humans can take responsibility, not computers, and that will never change.
Yes, but if “AI” denies you, humans are suddenly no longer responsible.
It's the simplest way - if you allow AI to make decisions, you're responsible. Like this https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-th... So far we're doing pretty well with that idea globally (I've not seen any case going the other way in court).
Well, you omitted "A person is accountable, an AI isn't (at least not yet)."
How can you hold the owner responsible for the AI's errors?
Who's going to be held accountable when the boilerplate fails? The AI?
If we can blame a person, it's not AI.
I think it boils down to: if the AI screws up what I asked it to do, who do I have to hold accountable? If the answer is that there is no one I can hold accountable, because the AI agent I used removes any and all onus of responsibility in its terms of service, then I'm not going to use it for anything non-trivial.
Yes! This is one thing humans are still much better at than AI: taking blame.
How about instead of blaming AI for what AI thinks, we blame negligent people for its misuse?