ChatGPT Understanding Debate
This cluster focuses on debates about whether ChatGPT and GPT models possess true understanding, comprehension, or knowledge, or whether they merely predict statistically likely responses from their training data without genuine cognition.
[Charts: Activity Over Time, Top Contributors, Keywords]
Sample Comments
Not according to the article. ChatGPT could be argued to have knowledge. It cannot be argued to have understanding (yet).
ChatGPT just matches the most statistically likely reply based on a huge corpus of internet discussions; it doesn't actually have any ideas.
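A minimal sketch of the mechanism this comment alludes to, using a toy bigram count model rather than the neural network GPT actually is; the corpus and names here are purely illustrative:

```python
# Toy illustration of "most statistically likely reply": a bigram model
# that counts word pairs in a corpus and always emits the most frequent
# follower. Real GPT models use learned neural weights, not raw counts,
# but the underlying objective is the same next-token prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (follows "the" twice in the corpus)
```

Whether scaling this idea up to billions of learned parameters produces "ideas" is exactly what the thread disputes.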
ChatGPT is a language model; it does not "understand" anything, concepts, words or otherwise. It is even programmed to give you a canned response saying roughly this when you ask it about its comprehension abilities. ChatGPT has no more capacity for understanding concepts than any other computer program; it is just very finely tuned to emit responses that make it appear as if it does.
It's pretty much the truth. What ChatGPT is good at is "keeping in mind" various associations between the words that have occurred in the session so far. To keep those associations, some internal structure is bound to get conjured. That doesn't mean the transformer understands anything or can do any kind of reasoning, despite the fact that it can mimic what reasoning output looks like and even get it right sometimes, if the context is fairly close to something it has seen in the training data.
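The "associations between words in the session" this comment describes correspond roughly to attention weights. A toy sketch of scaled dot-product attention, with made-up dimensions and random vectors standing in for real learned embeddings:

```python
# Toy sketch of how a transformer "keeps in mind" associations between
# tokens seen so far: scaled dot-product attention over the session.
# Sizes and values are illustrative; real models use learned projections.
import numpy as np

d = 4                                # embedding size (toy)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, d))     # 5 token embeddings "in the session so far"

q = tokens[-1]                       # the current token queries the context
scores = tokens @ q / np.sqrt(d)     # similarity of each past token to the query
weights = np.exp(scores) / np.exp(scores).sum()  # softmax -> association strengths
context = weights @ tokens           # weighted mix of past tokens

print(weights.round(3))              # how strongly each earlier token is attended to
print(context.round(3))              # the "conjured" internal summary of the session
```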
ChatGPT does not fulfill that definition because it does not have any "mental representation"; it has no mind with which to form a "mental model". It emulates understanding, quite well in many scenarios, but there is nothing there to possess understanding; it is at bottom simply a very large collection of numbers that are combined arithmetically according to a simple algorithm.
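Taken literally, "a very large collection of numbers combined arithmetically" looks like this: a single transformer feed-forward sublayer, sketched at toy size with random weights (real models stack many such layers with billions of learned parameters):

```python
# One feed-forward sublayer of a transformer is just two matrix
# multiplications and a nonlinearity. Shapes here are toy-sized.
import numpy as np

rng = np.random.default_rng(1)
d_model, d_ff = 8, 32                # real models use thousands, not tens
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

def feed_forward(x):
    """y = relu(x W1 + b1) W2 + b2: nothing but arithmetic on numbers."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

x = rng.normal(size=d_model)         # one token's representation
print(feed_forward(x).shape)         # (8,)
```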
The fact that you ask that is in many ways the difference. You feel there's a limitation in your knowledge of the term "understand" and its use in this context, and you would like clarification before you're more certain either way. At some point either enough information arrives to convince you, or you decide it's not true. Whatever that process and those internal states are, it is something GPT can't do. It'll 100% confidently produce something, having been fully rewarded for choosing tokens that humans would plausibly have written.
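The "reward" this comment refers to is, in standard language-model training, just cross-entropy on the next token: the model scores well whenever it assigns high probability to whatever a human actually wrote next, with no separate term for truth or uncertainty. A sketch with made-up logits:

```python
# Sketch of the training signal: the model is scored only on how much
# probability it put on the token the human-written text actually
# contains next (cross-entropy), not on any notion of being correct.
import numpy as np

def cross_entropy(logits, target_index):
    """Loss is low whenever the observed next token got high probability."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[target_index])

logits = np.array([2.0, 0.1, -1.0])  # model's scores over a toy 3-word vocab
print(cross_entropy(logits, 0))      # confident and matching -> low loss (~0.18)
print(cross_entropy(logits, 2))      # confident and mismatching -> high loss (~3.18)
```

Nothing in this objective asks the model to signal "I'm not sure", which is the asymmetry the comment is pointing at.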
It has not been trained specifically to produce this text. It was trained on random text from the internet. The production of this text is a side effect of that training. You will note that this text is more detailed and nuanced than what most humans can produce. A human, asked to prove via text that he understands what a human is, is unlikely to do a better job than this. You realize that when I asked ChatGPT about this, I deliberately pushed it to produce more exact details for it to be convincing.
How can we tell whether or not GPT-3 has understanding of the data?
I have to say it because a lot of HN users try to convince me that GPT has understanding. We actually have zero ML models that demonstrate understanding. Frankly, we don't know if this is even possible yet. Sure, in the future I believe we'll have AGI, but an AGI would likely recognize that my statement is contextualized around the current environment and fits common human speech patterns, rather than making an absolute, immutable statement.
It's confabulating (or hallucinating) meaning. GPT has no understanding of anything. Given an input prompt ("tell me what this made-up-language sentence means", for example), it reaches into its training set and stochastically parrots a response. There is no cognition or understanding of language.
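"Stochastically parroting" maps onto ordinary temperature sampling: the model never looks anything up, it draws the next token from a probability distribution over its vocabulary. A toy sketch with made-up logits and an illustrative temperature:

```python
# Toy sketch of stochastic sampling: the next token is drawn from a
# softmax distribution over logits. Lower temperature -> closer to
# always picking the single most likely token.
import numpy as np

rng = np.random.default_rng(42)

def sample_next(logits, temperature=1.0):
    """Sample a token index from temperature-scaled softmax probabilities."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([3.0, 1.0, 0.5])   # made-up scores over 3 candidate tokens
print([sample_next(logits) for _ in range(5)])  # mostly token 0, occasionally not
```

This is why the same prompt about a made-up sentence can yield a different, equally confident confabulation on each run.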