Chomsky vs LLMs

The cluster centers on debates about Noam Chomsky's theories of universal grammar and language acquisition, particularly whether large language models (LLMs) challenge or refute his views on human language learning.

➡️ Stable 0.6x · AI & Machine Learning
Comments: 2,467
Years Active: 19
Top Authors: 5
Topic ID: #4095

Activity Over Time (comments per year)

2008: 14 · 2009: 31 · 2010: 50 · 2011: 103 · 2012: 136 · 2013: 80
2014: 55 · 2015: 146 · 2016: 201 · 2017: 158 · 2018: 85 · 2019: 81
2020: 118 · 2021: 166 · 2022: 128 · 2023: 409 · 2024: 194 · 2025: 302
2026: 10

Keywords

e.g, LLM, DIRECTLY, ALGOL, MIT, IMO, en.m, UG, i.e, sciencedirect.com, chomsky, grammar, language, languages, llms, universal, humans, formal, learn, language, statistical

Sample Comments

dgellow Nov 23, 2019

You may be interested in Chomsky's work on Universal Grammar theory: https://en.m.wikipedia.org/wiki/Universal_grammar

rfv6723 Jan 16, 2026

You've mistaken the battlefield. This isn't about descriptive grammar. It's about the decades-long dominance of Chomsky's entire philosophy of language. His central argument has always been that language is too complex and nuanced to be learned simply from exposure. Therefore, he concluded, humans must possess an innate, pre-wired "language organ"—a Universal Grammar. LLMs are a spectacular demolition of that premise. They prove that with a vast enough dataset,
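
For readers who want a concrete picture of what "learned simply from exposure" means in the statistical tradition this comment invokes, here is a minimal bigram language model in Python. It is a toy sketch of distributional learning from raw text, with a made-up corpus and illustrative function names, not a claim about how LLMs or human children actually acquire language.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which; no grammar rules are supplied in advance."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the observed statistics."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the child hears the words and the child repeats the words"  # toy exposure data
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The model is handed no grammar rules; everything it can generate comes from co-occurrence statistics in the corpus, which is the vastly scaled-down version of the premise the comment attributes to LLMs.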

akasakahakada Jun 14, 2024

Linguists e.g. Chomsky: No you can't. Proof is left as an exercise.

Cacti Nov 2, 2012

In which way is he vague? He basically reinvented a Turing Machine with human language and brought linguistics around to the idea that, yes, language isn't something that's vaguely "out there" tabula-rasa-style; it's built into our genetics at a very fundamental level. Fundamental enough that he tied linguistics DIRECTLY to math and from there to programming. The Chomsky Hierarchy is no joke. Your link relating to statistical models is only a tiny, tiny part of Chomsky's fundamental arguments

jameshart Dec 10, 2023

The article actually doesn't say that LLMs are causing us to rethink human language acquisition. It says that LLMs are causing us to rethink whether Chomsky's universal grammar is a good candidate for a necessary substrate for language acquisition. For example, it says:
> Universal grammar enthusiasts have long argued that no other biological species can have language that has grammar and recursive composability, because they don't have universal grammar.
That seems like a rea

throw16180339 Dec 24, 2024

I think they're joking about Chomsky's work with formal languages (https://en.wikipedia.org/wiki/Chomsky_hierarchy).
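
Several comments above lean on the Chomsky hierarchy, which ranks grammars by generative power (regular ⊂ context-free ⊂ context-sensitive ⊂ recursively enumerable). As a minimal sketch of the textbook contrast, the toy Python below (function names are illustrative, not from any cited source) shows that a finite-state pattern can only approximate the language aⁿbⁿ, while a counter-based recognizer captures it exactly, the standard example of a context-free language that is not regular.

```python
import re

def regular_recognizer(s: str) -> bool:
    """Finite-state view: accepts a*b* (any run of a's followed by any run of b's).
    A regular language cannot additionally require the two counts to be equal."""
    return re.fullmatch(r"a*b*", s) is not None

def context_free_recognizer(s: str) -> bool:
    """Counter/stack view: accepts exactly a^n b^n, a context-free language."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s == "a" * half + "b" * half

for w in ["aabb", "aaabbb", "aab", "ba"]:
    print(w, regular_recognizer(w), context_free_recognizer(w))
```

On the string "aab", the regular approximation accepts while the context-free recognizer correctly rejects, since the counts of a's and b's do not match.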

AndyNemmity Dec 10, 2023

The article said LLMs make Chomsky incorrect about how humans learn language. I posted quotes of his indicating that isn't true. Because it isn't true. It's absurdly obvious it isn't true. Everyone replying is talking about other things instead of the topic. LLMs have added zero to the learning about how humans learn language. They've added other value.

paulsutter May 25, 2025

"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.Quoting Chomsky:> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learnin

foobarqux Jul 12, 2023

Chomsky hasn't made any reversal; Norvig misrepresents what Chomsky has said.
> Chomsky is now unhappy that LLMs can learn /any/ language, even unnatural ones, and therefore aren't good tools for understanding human language
That's also not what he has said: He said they aren't useful for understanding the human language faculty, in other words, understanding how people are able to have language. As he says, it obviously can't be the same way as LL

foobarqux Feb 19, 2023

I believe Chomsky mentioned two studies in one of the 4 episodes in the "Closer to Truth" series; you'll have to search the transcript for the exact timestamp. The first is an fMRI study that shows that the brain doesn't engage the language processing centers when trying to understand a made-up non-structural language (i.e. a "statistical language") but does when trying to learn a made-up structural language. The second is about a man who had brain damage except