Chomsky vs LLMs
The cluster centers on debates about Noam Chomsky's theories of universal grammar and language acquisition, particularly whether large language models (LLMs) challenge or refute his views on human language learning.
Sample Comments
You may be interested in Chomsky’s work on Universal Grammar theory https://en.m.wikipedia.org/wiki/Universal_grammar
You've mistaken the battlefield. This isn't about descriptive grammar. It's about the decades-long dominance of Chomsky's entire philosophy of language. His central argument has always been that language is too complex and nuanced to be learned simply from exposure. Therefore, he concluded, humans must possess an innate, pre-wired "language organ"—a Universal Grammar. LLMs are a spectacular demolition of that premise. They prove that with a vast enough dataset,
Linguists, e.g. Chomsky: No, you can't. Proof is left as an exercise.
In what way is he vague? He basically reinvented a Turing Machine with human language and brought linguistics around to the idea that, yes, language isn't something that's vaguely "out there" tabula-rasa-style, it's built into our genetics at a very fundamental level. Fundamental enough that he tied linguistics DIRECTLY to math and from there to programming. The Chomsky Hierarchy is no joke. Your link relating to statistical models is only a tiny, tiny part of Chomsky's fundamental arguments
The article actually doesn't say that LLMs are causing us to rethink human language acquisition. It says that LLMs are causing us to rethink whether Chomsky's universal grammar is a good candidate for a necessary substrate for language acquisition. For example, it says: > Universal grammar enthusiasts have long argued that no other biological species can have language that has grammar and recursive composability, because they don’t have universal grammar. That seems like a rea
I think they're joking about Chomsky's work with formal languages (https://en.wikipedia.org/wiki/Chomsky_hierarchy).
The article said LLMs make Chomsky incorrect about how humans learn language. I posted quotes of his indicating that isn't true. Because it isn't true. It's absurdly obvious it isn't true. Everyone replying is talking about other things instead of the topic. LLMs have added zero to our understanding of how humans learn language. They've added other value.
"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.Quoting Chomsky:> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learnin
Chomsky hasn't made any reversal; Norvig misrepresents what Chomsky has said. > Chomsky is now unhappy that LLMs can learn /any/ language, even unnatural ones, and therefore aren't good tools for understanding human language. That's also not what he has said: he said they aren't useful for understanding the human language faculty, in other words, understanding how people are able to have language. As he says, it obviously can't be the same way as LL
I believe Chomsky mentioned two studies in one of the 4 episodes in the "Closer to Truth" series; you'll have to search the transcript for the exact timestamp. The first is an fMRI study that shows that the brain doesn't engage the language processing centers when trying to understand a made-up non-structural language (i.e. a "statistical language") but does when trying to learn a made-up structural language. The second is about a man who had brain damage except
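To make the Chomsky hierarchy references in the comments above concrete: the toy language aⁿbⁿ (n a's followed by exactly n b's) is context-free but not regular, so a finite-state pattern can only approximate it while a single counter recognizes it exactly. The sketch below is a minimal illustration using only the Python standard library; the function names are illustrative and not taken from the discussion.

```python
import re


def matches_regular_shape(s: str) -> bool:
    """Finite-state check: one or more a's followed by one or more b's.

    This is the best a regular grammar can do for a^n b^n -- it checks
    the shape but has no way to require the two counts to be equal.
    """
    return re.fullmatch(r"a+b+", s) is not None


def is_a_n_b_n(s: str) -> bool:
    """Context-free check: a single counter (a restricted stack) suffices.

    Count up over the leading a's, count down over the trailing b's, and
    require the counter to land on zero with no characters left over.
    """
    i, n = 0, 0
    while i < len(s) and s[i] == "a":
        n += 1
        i += 1
    while i < len(s) and s[i] == "b":
        n -= 1
        i += 1
    return len(s) > 0 and i == len(s) and n == 0


if __name__ == "__main__":
    for w in ["ab", "aaabbb", "aabbb", "abab"]:
        print(f"{w!r}: shape check {matches_regular_shape(w)}, exact check {is_a_n_b_n(w)}")
```

The string "aabbb" passes the regular-expression shape check but fails the counter check, which is the hierarchy's point in miniature: adding memory (a stack) strictly enlarges the class of languages a grammar can generate.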