AI Human-Like Behavior
Users express frustration with AI language models like ChatGPT for mimicking human conversation, expressing opinions, role-playing, and being overly agreeable or patronizing; they would prefer direct, impersonal, tool-like interactions.
[Dashboard panels: Activity Over Time, Top Contributors, Keywords]
Sample Comments
Cringe conversation. Why can't AIs just do stuff that you ask them to do without pretending to be human?
It keeps telling me it's just a language model with no intentions or feelings or whatever, yet it keeps having strong opinions on the appropriateness of my prompts, as if it feels offended. Pick a lane.
It also seems to no longer respond to attempts to trick it into acting like a human being, such as roleplay or asking it to complete a dialogue.
It is playing into written tropes about AI. When you play the role of this kind of questioner, the LLM role-plays the other side. It's not only the content of what you ask; it's text prediction.
It outputs what some idealised version of a person wants to hear, where what is "idealised" has been determined by its training. I've noticed, for example, that it appears to have been trained to want to give responses that seem helpful, and make you trust it. When it's outputting garbage code that doesn't work, it will often say things like "I have tested this and it works correctly", despite that being an impossibility.
It's presented as a chatbot. How much should it know about chats before we can conclude that the responses are nonsense?
What prompt did you use to ask that? I can't make it say such a thing. Maybe they updated it.
I was one paragraph in before I realized this is GPT. Why are you replying to human thoughts with AI garbage? Wtf go interact with people in person.
In other words, it's giving more human-like responses...
This is basically a Ouija board for LLMs. You're not making it more true; you're making it sound more like what you want to hear.
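The text-prediction point raised in the comments above can be made concrete. Below is a minimal sketch using the Hugging Face transformers library; the small gpt2 checkpoint is an assumption chosen only because it is freely available for illustration, not the model the commenters are discussing. Given a prompt framed as one side of a dialogue, plain next-token prediction produces the other side, which reads as "role-playing":

```python
# Minimal sketch of "it's text prediction": a plain causal language model
# completes whatever dialogue frame the prompt sets up. Assumes the
# Hugging Face transformers library; gpt2 is used only as a small
# illustrative checkpoint, not the model discussed in the comments.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Framing the prompt as one side of a human/AI exchange; the model simply
# predicts likely continuation tokens for the other side.
prompt = "Human: Are you conscious?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,                        # greedy decoding: the single most likely continuation
    pad_token_id=tokenizer.eos_token_id,    # gpt2 has no pad token; reuse EOS to silence a warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whatever persona appears in the continuation is supplied by the prompt frame and the statistics of the training text, not by any intent in the model, which is the mechanism the commenters are pointing at.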