Chomsky and AI

General discussion about learning languages
User avatar
sfuqua
Black Belt - 1st Dan
Posts: 1644
Joined: Sun Jul 19, 2015 5:05 am
Location: san jose, california
Languages: Bad English: native
Samoan: speak, but rusty
Tagalog: imperfect, but use all the time
Spanish: read
French: read some
Japanese: beginner, obsessively studying
Language Log: https://forum.language-learners.org/vie ... =15&t=9248
x 6314

Chomsky and AI

Postby sfuqua » Mon Mar 20, 2023 2:10 am

I hope I make sense here, because Chomsky's recent comments on AI annoyed the living daylights out of me.
Chomsky recently wrote an editorial for the New York Times in which he seems to think that he has proven that AI cannot think.
I hate to link to an article that may be behind a paywall: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html The article has been commented on across the Internet, and I can't wait for AIs to start using it to prove that they are not intelligent.

I think his argument is total hogwash. His argument seems to be that humans have an innate language acquisition device and that AIs can't communicate because they don't have it. He ignores the fact that they can communicate, which is easy to demonstrate. He claims that humans have a unique system of reasoning and that AIs can't reason because they don't have this unique human reasoning device, because they are not human. Circular logic.

Of course they are not human, but what exactly does he mean by the qualities that he calls uniquely human? Chomsky has pulled out his "magic language acquisition device" for decades now, without properly defining it. Every time facts come along that disprove one of his views about the unique qualities of humans, he changes his definition and smugly retreats to his ivory tower.

One has to ask: what would convince Chomsky that he had met an intelligent AI? I assert that there is nothing that could ever convince him.
His argument that AIs don't reason well is accurate, but does that disqualify them from being intelligent? If he thinks that one has to have brilliant, creative reasoning to qualify as human, this would disqualify most freshman English students from being human.

I find current AIs to be fascinating, not because they are just like people, but because they are an alien intelligence that has recently arrived in the universe. Of course, they are not human. Exactly how they are different is a fascinating question. How similar are they to humans, what can they do better and what are they weak at?

Sorry to make this a long post, but ChatGPT said it better than Chomsky. :lol:

"What are the differences between how a human brain works and how an artificial neural network works?"

The human brain and artificial neural networks are both capable of processing information, but there are some key differences in how they work:

Structure: The human brain is made up of billions of interconnected neurons, while an artificial neural network is made up of layers of interconnected nodes or artificial neurons.

Learning: The human brain is capable of unsupervised learning, which means it can learn from unstructured data without any external guidance. Artificial neural networks, on the other hand, rely on supervised learning, where they are trained on labeled data and require external feedback to adjust their parameters.

Flexibility: The human brain is highly adaptable and can learn new things quickly, even when presented with novel situations. Artificial neural networks, while capable of learning new things, require a significant amount of data to be trained on in order to perform well.

Memory: The human brain has a complex memory system that allows us to remember information over long periods of time, recall information when needed, and make associations between different pieces of information. Artificial neural networks typically rely on external memory systems and do not have the same capacity for complex memory.

Energy consumption: The human brain is highly energy-efficient, consuming only about 20 watts of power. Artificial neural networks, however, can require significant amounts of energy to operate, particularly when processing large amounts of data.

Overall, while artificial neural networks are inspired by the structure and function of the human brain, they still have many differences in how they process information and learn.

As an AI language model, I can say that artificial neural networks are not capable of "thinking" in the way that humans do. While artificial neural networks can perform complex calculations and pattern recognition tasks, they do not have the consciousness or subjective experience that humans possess. Artificial neural networks are designed to process data and generate output based on a set of predetermined rules and algorithms, which is very different from human thought processes.

However, artificial neural networks can mimic certain aspects of human cognition and behavior, such as recognizing faces, interpreting language, or playing games. They achieve this through a process of learning and adaptation, where they adjust their internal weights and connections based on feedback from their environment.

So while artificial neural networks are not conscious and do not possess the same kind of intelligence as humans, they can be used as powerful tools for solving complex problems and performing tasks that would be difficult or impossible for humans to do on their own.
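To make the "supervised learning" part of that answer concrete, here is a toy sketch in Python. It is purely illustrative (a single weight fitted by gradient descent), nothing like the scale or training pipeline of an actual language model:

Code:
# Toy supervised learning: fit y = w * x to labeled pairs by gradient descent.
# Purely illustrative; real networks adjust billions of weights, not one.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labeled (input, target) pairs
w = 0.0    # the single "weight" the model will learn
lr = 0.05  # learning rate: how big each corrective step is

for _ in range(100):
    for x, target in data:
        pred = w * x            # forward pass: the model's guess
        error = pred - target   # feedback from the label: how wrong it was
        w -= lr * error * x     # nudge the weight to shrink the error

print(f"learned w = {w:.3f}")   # ends up near 2.0, the true slope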
4 x
荒海や佐渡によこたふ天の川

the rough sea / stretching out towards Sado / the Milky Way
Basho[1689]

Sometimes Japanese is just too much...

User avatar
Le Baron
Black Belt - 3rd Dan
Posts: 3578
Joined: Mon Jan 18, 2021 5:14 pm
Location: Koude kikkerland
Languages: English (N), fr, nl, de, eo, Sranantongo,
Maintaining: es, swahili.
Language Log: https://forum.language-learners.org/vie ... 15&t=18796
x 9561

Re: Chomsky and AI

Postby Le Baron » Mon Mar 20, 2023 3:16 am

sfuqua wrote:His argument that AIs don't reason well is accurate, but does that disqualify them from being intelligent? If he thinks that one has to have brilliant, creative reasoning to qualify as human, this would disqualify most freshman English students from being human.

Well...the problem is that the AIs, powered by enormous computing power and accurate operations, are not supposed to be prone to the same foibles and failures as humans (something of a reverse of Chomsky's argument), but they fail at things even those freshman English students don't, even when the students have been out all night on the razz.

To be honest I find the fevered excitement around ChatGPT and similar AIs to be exaggerated and premature. I feel like I've had a very good go at hammering it with lots of different conversations, and I find them wanting: worse than talking to a street-sweeper who has a smartphone and can google answers. In fact a lot depends on the questioner, the questions asked, and how far the questioner is willing to press and contradict, to the point where you find yourself doing your own work. At least with the street-sweeper you can access someone, or 'something', behind the activity. After a while of 'interacting' it really dawns on you that it's just a programmed model that can string together repetitive and superficial prose patterns to reconstitute existing information.

Some people seem to be fooled, even if only temporarily, because it is extremely compelling. The question 'how are they different from humans?' seems to me misplaced. Nothing in AI even looks like 'thinking'. It isn't 'thinking'; it is just presenting things.

When I hear that an AI has decided by itself to do something a certain way, e.g. to write a book on a certain topic with a certain approach, rather than an actual human asking it to do so, I will agree that the AI is something even more compelling. Until then...
6 x
Pedantry is properly the over-rating of any kind of knowledge we pretend to.
- Jonathan Swift

User avatar
sfuqua
Black Belt - 1st Dan
Posts: 1644
Joined: Sun Jul 19, 2015 5:05 am
Location: san jose, california
Languages: Bad English: native
Samoan: speak, but rusty
Tagalog: imperfect, but use all the time
Spanish: read
French: read some
Japanese: beginner, obsessively studying
Language Log: https://forum.language-learners.org/vie ... =15&t=9248
x 6314

Re: Chomsky and AI

Postby sfuqua » Mon Mar 20, 2023 3:39 am

I think my old experiences in graduate school with artificial intelligence 25 years ago make me more impressed with them. Measured against old attempts at artificial intelligence, they are awesome. For some tasks, like writing college freshman 500-word essays, they are better than humans, or at least as good and much faster.

I think it is a glass half empty or half full situation.

And Chomsky has annoyed me for 40 years, with his invocation of brain structures which must arise from magic or spirit.

And ChatGPT did pass the Turing test. https://www.mlyearning.org/chatgpt-passes-turing-test/#:~:text=Since%20OpenAI's%20ChatGPT%20is%20considered,did%20pass%20the%20Turing%20test.

That was the holy grail back in my old days in graduate school.

Added later:
We assume that a human is thinking and feeling like we do.
How do we know? All we can do is observe what they say and do, and reflect on what we experience when we say and do similar things. We can't really know what the other person experiences.

It is unfair to expect an AI to do better than humans at every mental task.
I am just amazed that they work at all.


Write a funny birthday message to my fellow teacher that will be different from and better than all the other messages from other teachers.
Here's a funny birthday message for your fellow teacher that will make them feel extra special:

"Happy Birthday to my partner in crime, my co-conspirator of education! You make teaching fun and I'm so grateful to have you as a fellow teacher. On your special day, I hope you get all the coffee, all the chocolate, and all the appreciation you deserve. May your students behave, your lessons be engaging, and your desk be clutter-free. Here's to another year of surviving and thriving in the teaching world. Cheers to you!"
ChatGPT

This borders on creative. It's certainly more creative than me.
5 x
荒海や佐渡によこたふ天の川

the rough sea / stretching out towards Sado / the Milky Way
Basho[1689]

Sometimes Japanese is just too much...

lichtrausch
Blue Belt
Posts: 518
Joined: Thu Jul 23, 2015 3:21 pm
Languages: English (N), German, Japanese, Mandarin, Korean
x 1405

Re: Chomsky and AI

Postby lichtrausch » Mon Mar 20, 2023 4:30 am

IMO the astute way to track progress in AI is to stick with well-defined benchmarks, rather than vague concepts like "real thinking" or "real intelligence" for which the goalposts continue to move as AI gets more impressive.

[Image: chart of model results on standardized exam benchmarks]

*the model did not have specific training for these exams.
2 x

User avatar
sfuqua
Black Belt - 1st Dan
Posts: 1644
Joined: Sun Jul 19, 2015 5:05 am
Location: san jose, california
Languages: Bad English: native
Samoan: speak, but rusty
Tagalog: imperfect, but use all the time
Spanish: read
French: read some
Japanese: beginner, obsessively studying
Language Log: https://forum.language-learners.org/vie ... =15&t=9248
x 6314

Re: Chomsky and AI

Postby sfuqua » Mon Mar 20, 2023 4:38 am

And the languages it can communicate in...
0 x
荒海や佐渡によこたふ天の川

the rough sea / stretching out towards Sado / the Milky Way
Basho[1689]

Sometimes Japanese is just too much...

User avatar
leosmith
Brown Belt
Posts: 1353
Joined: Thu Sep 29, 2016 10:06 pm
Location: Seattle
Languages: English (N)
Spanish (adv)
French (int)
German (int)
Japanese (int)
Korean (int)
Mandarin (int)
Portuguese (int)
Russian (int)
Swahili (int)
Tagalog (int)
Thai (int)
x 3157
Contact:

Re: Chomsky and AI

Postby leosmith » Mon Mar 20, 2023 4:41 am

lichtrausch wrote:IMO the astute way to track progress in AI is to stick with well-defined benchmarks
Why is it so bad at English? Is it made in Laos?
2 x
https://languagecrush.com/reading - try our free multi-language reading tool

User avatar
ryanheise
Green Belt
Posts: 459
Joined: Tue Jun 04, 2019 3:13 pm
Location: Australia
Languages: English (N), Japanese (beginner)
x 1681
Contact:

Re: Chomsky and AI

Postby ryanheise » Mon Mar 20, 2023 7:59 am

sfuqua wrote:I hope I make sense here, because Chomsky's recent comments on AI annoyed the living daylights out of me.
Chomsky recently wrote an editorial for the New York Times in which he seems to think that he has proven that AI cannot think.


Chomsky has been doing the rounds recently, with several interviews a day over the past few months, on linguistics, consciousness, AI and education.

Now, even the experts who understand these language models best remain unable to reverse engineer a trained model and explain how the particular arrangement of artificial neurons that arose from the training process actually produces the external behaviour that it does. The best experts remain ignorant. Chomsky's ignorance, though, is really on another level, and so to support his conclusions he has to use necessarily over-simplistic arguments.

Chomsky claims that what these language models are doing has very little to do with the way language processing works in the brain (while admitting that he knows neither how the brain does it nor how the AI does it), and he thinks this is easy to prove simply from the fact that these language models work equally well on impossible languages as they do on real languages. By "impossible languages", he presumably means languages that don't have a consistent grammar. Here, the neural network will still manage to find the patterns and model them even though there is no consistent grammar. My response would be that "real language" only follows consistent rules up to a point, and then you end up with tonnes of exceptions that don't follow the strict rules. Within these exceptions there are still patterns to be found in the "real language", and they are not going to follow strict Chomskyan grammatical rules. You need something more powerful than grammar to be able to model these patterns. The fact that grammar doesn't do it, but both artificial AND real neural networks can model these patterns, suggests to me that the latter two have far more in common than anything Chomsky has proposed to model what's going on in language processing.

Obviously, no AI expert suggests that we've cracked Broca's and Wernicke's areas or anything like that. If there is any sort of differentiation between the types of neurons found in different regions of the brain, AI is not really at a stage where it is replicating that. Basically, the very same artificial neurons are infinitely adaptable to infinitely many tasks, be that language, vision, etc. To some extent this is also the case in the human brain, where after brain damage in one area it sometimes happens that another part of the brain adapts and takes over; but if there is anything advantageous about the differentiation of neurons in different parts of the brain, that is something AI has yet to replicate. At the same time, the fact that neurons can adapt to different tasks is another fact that renders Chomsky's argument above useless, because even if he were correct that language models work equally well on what he characterises as "impossible languages", that could simply be analogous to neurons showing their adaptability to a new task.

In his recent interviews, he also presents the view that ChatGPT is nothing more than a sophisticated plagiarism device and has little application outside of that. I think this mainly shows a lack of imagination. ChatGPT, and now the newly released GPT-4, have plenty of novel applications beyond the obvious use case of chatting and content generation. As it turns out, these models happen to outperform the previous state of the art on a whole range of Natural Language Processing tasks that lots of applications require, including summarisation, translation, tokenisation, lemmatisation, natural language understanding (e.g. in order to understand the intent behind a search query), and so on.

So let's imagine, for example, a new web application that uses AI to design a logo for your business based on some requirements. There's an AI model you've found that works well for that, but it requires a particular kind of prompt that is not terribly easy for a user to input. What you can do now as the app developer is take one of these large language models, such as ChatGPT or GPT-4, and use it to provide a language interface to the user. The language model's purpose is then to "understand" what the user wants and translate it into the prompt that will work best for the logo generation model. But there are basically limitless applications of a large language model.
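As a sketch of that pipeline, with hypothetical helper names (chat() standing in for whichever chat-model API you use, and generate_logo() for the imaginary logo model; neither is a real library call):

Code:
# Sketch: a large language model as the user-facing "language interface",
# translating a free-form request into the rigid prompt a second model needs.
# chat() and generate_logo() are hypothetical stand-ins, not real APIs.

def chat(system: str, user: str) -> str:
    """Placeholder for a call to ChatGPT/GPT-4 or similar."""
    raise NotImplementedError("wire this up to your chat-model API")

def generate_logo(structured_prompt: str) -> bytes:
    """Placeholder for the imaginary logo-generation model."""
    raise NotImplementedError("wire this up to your logo model")

def design_logo(user_request: str) -> bytes:
    # 1. The language model "understands" the request and rewrites it
    #    in the structured format the logo model expects.
    structured = chat(
        system="Rewrite the user's request as a logo-model prompt: "
               "subject, style and colour palette, one line each.",
        user=user_request,
    )
    # 2. The specialist model does the actual image generation.
    return generate_logo(structured)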

And of course, as members of this forum know, many of us have found it quite helpful for language learning. In Japanese, I find it quite helpful to paste in a sentence and ask ChatGPT to tokenise it for me and explain it word by word, including the conjugations. To perform this task, ChatGPT is actually integrating a number of different NLP tasks that have traditionally required separate models, so it's quite a feat that ChatGPT can do it all under one model (understanding the question, doing all the tokenisation, lemmatisation, translation, etc. and then producing the explanations). At the same time, it's BECAUSE this is all done in one model that it performs so well at the individual NLP tasks. This is due to the transfer learning effect, where learning to be good at one task actually helps the same model to be good at another task.
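The request itself can be as plain as this (the sentence and wording are only examples):

Code:
# Sketch of the kind of request I mean; the sentence is just an example.
sentence = "昨日、友達と映画を見に行きました。"
prompt = (
    "Tokenise this Japanese sentence, then explain it word by word, "
    "giving the dictionary form and conjugation of each verb:\n" + sentence
)
# Paste `prompt` into ChatGPT (or send it via an API) and one model returns
# the tokenisation, glosses and conjugation notes in a single reply.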

sfuqua wrote:I find current AIs to be fascinating, not because they are just like people, but because they are an alien intelligence that has recently arrived in the universe. Of course, they are not human. Exactly how they are different is a fascinating question. How similar are they to humans, what can they do better and what are they weak at?


In all of this, we need to remember that these language models are not at all like humans. It's not as if ChatGPT went to school as a teenager, had a social life and experiences from its senses, or even any perception of passing time whatsoever. OpenAI took a shortcut and trained it to perform well at the tasks it was intended to perform. We don't really know what it would take for an AI to become like a human, sentient and all, but we can at least understand that ChatGPT can't be it, because it lacks almost everything that makes up human experience. ChatGPT is good at language, but less good at other things.

sfuqua wrote:Sorry to make this a long post, but ChatGPT said it better than Chomsky. :lol:

"What are the differences between how a human brain works and how an artificial neural network works?"

The human brain and artificial neural networks are both capable of processing information, but there are some key differences in how they work:

Structure: The human brain is made up of billions of interconnected neurons, while an artificial neural network is made up of layers of interconnected nodes or artificial neurons.

Learning: The human brain is capable of unsupervised learning, which means it can learn from unstructured data without any external guidance. Artificial neural networks, on the other hand, rely on supervised learning, where they are trained on labeled data and require external feedback to adjust their parameters.


Another example of ChatGPT being good at language but less good at other things, such as knowledge. From what I've read, Google's LaMDA was trained according to a different methodology that may result in it being more reliable for knowledge, but really I see these language models being useful more as a language interface to other systems, at least in the current generation. Google mentioned in a paper that their intention was to integrate their language model with an external knowledge system, such as Google search itself, so that the language model would primarily be used to understand the question, the external knowledge system would be used to find the answer, and the language model would then be engaged again to communicate the findings back to the user. I believe the new Bing is now adopting the same approach with its integration between GPT-4 and Bing search results.
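A sketch of that division of labour, with llm() and web_search() as hypothetical stand-ins for whichever chat-model and search APIs you have access to:

Code:
# Sketch: language model as the interface, external system as the knowledge.
# llm() and web_search() are hypothetical stand-ins, not real APIs.

def llm(prompt: str) -> str:
    raise NotImplementedError("your chat-model API here")

def web_search(query: str) -> list[str]:
    raise NotImplementedError("your search API here")

def answer(question: str) -> str:
    # 1. The language model distils the question into a search query.
    query = llm(f"Turn this question into a short search query: {question}")
    # 2. The external knowledge system supplies the facts.
    snippets = web_search(query)
    # 3. The language model phrases the findings instead of inventing them.
    return llm(
        "Answer the question using ONLY these snippets:\n"
        + "\n".join(snippets)
        + f"\nQuestion: {question}"
    )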

But back to the made-up ChatGPT answer above: no, "unsupervised learning" is a big thing in AI as well.

Although, if ChatGPT can get it right 80% of the time, and users are aware that it's not 100%, there is still utility there.

Le Baron wrote:Some people seem to be fooled, even if only temporarily, because it is extremely compelling. The question 'how are they different from humans?' seems to me misplaced. Nothing in AI even looks like 'thinking'. It isn't 'thinking'; it is just presenting things.


Not "human" thinking, but since the advent of computers, computer scientists have been generalising these concepts. Words like "language", "understanding", "thinking" and "intelligence" all have specific meanings in the human sense, but we now also have digital analogues of these (pardon the pun!), and so the concepts have become broader, with the human instantiations being a subcategory.

So when we say that ChatGPT doesn't really "understand", we mean that in the "human" sense, but there is still a sense in which ChatGPT does "understand": in computer science there is a field of study known as natural language understanding. When you look at the progress that has been made in machine translation, the oldest translation models simply did a literal word-for-word translation without understanding the overall meaning of the sentence, and so they would get things very wrong. What has allowed these translation systems to get better is precisely that they have become better at understanding the overall meaning of the sentence, and even beyond that, the overall context of the sentence within the larger piece of text in which it appeared. Or take something much simpler than that: before we even started building machine learning models for this sort of stuff, early systems like UNIX came up with command-line interfaces capable of understanding basic commands. It's not human understanding, but it is a kind of understanding, one that doesn't include consciousness as a component.
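For the flavour of that last, pre-machine-learning kind of "understanding", here is a toy command interpreter in Python: a fixed mapping from command to action, with no learning and no consciousness anywhere:

Code:
# Toy rule-based "understanding": a fixed command-to-action table.
def greet(name: str) -> str:
    return f"hello, {name}"

def shout(name: str) -> str:
    return name.upper() + "!"

COMMANDS = {"greet": greet, "shout": shout}

def interpret(line: str) -> str:
    verb, _, arg = line.partition(" ")
    if verb not in COMMANDS:
        return f"{verb}: command not found"
    return COMMANDS[verb](arg)  # it "understands" its tiny language

print(interpret("greet world"))  # hello, world
print(interpret("fly home"))     # fly: command not found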

Now, to your example of "thinking": Google's PaLM language model has demonstrated reasoning capabilities, solving never-before-seen logic problems and explaining the reasoning steps by which it arrived at the conclusion. This is not a human thinking up a solution to the problem, but rather a non-sentient computer solving the exact same kind of problem that would typically involve human thinking. That is a kind of thinking, just not human thinking.
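In practice such a request looks no different from any other prompt; the model is simply asked to show its steps. A toy example (the puzzle is made up, and the comment describes typical behaviour, not a guaranteed output):

Code:
# Sketch: asking a model to solve a fresh logic puzzle AND show its steps.
puzzle = (
    "Alice is taller than Bob. Bob is taller than Carol. "
    "Who is shortest? Explain your reasoning step by step."
)
# Fed to a model like PaLM or GPT-4, the reply typically walks through the
# two comparisons before naming Carol: reasoning steps, no sentience required.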

Just in closing: the term "Artificial Intelligence" itself is simply defined as methods that allow a computer to perform tasks that would typically require human intelligence. The goal of taking these previously human-only concepts and generalising them to include other types of "language", other types of "understanding" etc. is simply to reflect our expanding progress in science. We're at a point now where the computer scientists are borrowing insights from the neuroscientists and the neuroscientists are borrowing insights from the computer scientists. This is the utility of being able to generalise a concept across two different fields of study.
7 x

User avatar
tastyonions
Black Belt - 1st Dan
Posts: 1606
Joined: Sat Jul 18, 2015 5:39 pm
Location: Dallas, TX
Languages: EN (N), FR, ES, DE, IT, PT, NL, EL
x 3991

Re: Chomsky and AI

Postby tastyonions » Mon Mar 20, 2023 9:06 am

To me the question of whether GPT and the like are “really” intelligent or “truly” understand concepts is not that interesting. GPT produces output that gets “close enough” to human-like that it will soon start majorly transforming the economy, imo. Why pay five copywriters, marketers, or lawyers when you can pay one of them to use a chatbot to write a first draft, then check it, make a few appropriate tweaks to the output and call it a day? We’re in for some huge changes in the next couple of decades.
8 x

Cainntear
Black Belt - 3rd Dan
Posts: 3525
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8792
Contact:

Re: Chomsky and AI

Postby Cainntear » Mon Mar 20, 2023 10:08 am

The irony is that Chomsky's nonsense has resulted in natural language processing being held back decades.

Generative grammars were immediately countered by Tesnière's valency grammars, but because Chomsky was American and computers mostly were too (what nationality is Apple? Microsoft? Commodore and Atari?), Tesnière was seen as a side issue rather than as having the right approach. It took a long time to get past that.

It seems pretty galling for the guy who's indirectly responsible for the poor progress in AI language processing to call out AI for not being able to do stuff yet...!

sfuqua wrote:I think my old experiences in graduate school with artificial intelligence 25 years ago make me more impressed with them. Measured against old attempts at artificial intelligence, they are awesome. For some tasks, like writing college freshman 500-word essays, they are better than humans, or at least as good and much faster.

I think it is a glass half empty or half full situation.

And Chomsky has annoyed me for 40 years, with his invocation of brain structures which must arise from magic or spirit.

And ChatGPT did pass the Turing test. https://www.mlyearning.org/chatgpt-passes-turing-test/#:~:text=Since%20OpenAI's%20ChatGPT%20is%20considered,did%20pass%20the%20Turing%20test.

That was the holy grail back in my old days in graduate school.

Added later:
We assume that a human is thinking and feeling like we do.
How do we know? All we can do is observe what they say and do, and reflect on what we experience when we say and do similar things. We can't really know what the other person experiences.

It is unfair to expect an AI to do better than humans at every mental task.
I am just amazed that they work at all.

Write a funny birthday message to my fellow teacher that will be different from and better than all the other messages from other teachers.
Here's a funny birthday message for your fellow teacher that will make them feel extra special:

"Happy Birthday to my partner in crime, my co-conspirator of education! You make teaching fun and I'm so grateful to have you as a fellow teacher. On your special day, I hope you get all the coffee, all the chocolate, and all the appreciation you deserve. May your students behave, your lessons be engaging, and your desk be clutter-free. Here's to another year of surviving and thriving in the teaching world. Cheers to you!"
ChatGPT

This borders on creative. It's certainly more creative than me.

Yes, but it proves that the AI has no human thought, because it has taken the written idea of a "clutter-free desk" as perfect, and doesn't realise how much that sounds like a nightmare to a real teacher. :lol:
2 x

User avatar
Iversen
Black Belt - 4th Dan
Posts: 4780
Joined: Sun Jul 19, 2015 7:36 pm
Location: Denmark
Languages: Monolingual travels in Danish, English, German, Dutch, Swedish, French, Portuguese, Spanish, Catalan, Italian, Romanian and (part time) Esperanto
Ahem, not yet: Norwegian, Afrikaans, Platt, Scots, Russian, Serbian, Bulgarian, Albanian, Greek, Latin, Irish, Indonesian and a few more...
Language Log: viewtopic.php?f=15&t=1027
x 14999

Re: Chomsky and AI

Postby Iversen » Mon Mar 20, 2023 11:13 am

I first read Chomsky's opus magnum from 1957 in the 80s, during my study years, and I could see that there were some useful ideas (though not all as original as his fans claimed) but a frail basis. And then I found Tesnière and saw at least one grammarian who had grasped something that Chomsky hadn't, namely how sentences function. However, there was also a question of style, and it seemed as if Chomsky tried to make a grammar that would function in one of the primitive computers of his time (which could explain why he tried to introduce the meaning of words and sentences as late in the process as possible), and then it was less important whether it fitted into human brains, which depend on things having meanings.

Then he proclaimed that there was an inborn language mechanism and even sketched some universals which should be the result of the functioning of that mechanism. One universal (the one about embedding) has been discussed, but it is hard even to find the original list with the other universals, simply because nobody cares. The notion of an inborn mechanism had a kernel of truth: humans can do some things with symbols and complex structures which even the most advanced of other species can't (not even the gorilla Koko), and our brain must be fashioned in a way that permits this. Chomsky's error was probably to think that this mechanism already produced linguistic structures, maybe even some from the mechanics devised by Chomsky himself. But nothing has so far suggested how that human predisposition for using languages is implemented. By the way: if you want to see universals, then go to Greenberg, who based his rules on empirical studies.

And then AI happened :lol: :lol: :lol: . Actually it was already symptomatic that none of the big translation systems were based on Chomsky's ideas; they were based on endless amounts of comparisons between parallel texts, and that's already one step in the direction of AI. And now we apparently have machines that can collect enough snippets of language from the internet to compose grammatically impeccable fake news... well, who would have thought that just a few years ago, except maybe some mad science fiction authors? The collection of methods the chatbots use essentially constitutes a language mechanism like the one postulated by Chomsky; they just don't do things the way he thought they should, and that's what he can't accept. They don't construct a formal grammar (though maybe you could ask a bot to compile one!), but they can make sentences that are grammatical; well, that's what you expect from babies. And after that, the step to claiming that they are intelligent is not that big (maybe a capacity for lateral thinking and a better mechanism for seeking trustworthy information would be enough). Actually, the criteria for denying that the bots are already intelligent would also hit some school children and politicians...
5 x

