Chomsky and AI

General discussion about learning languages
Le Baron
Black Belt - 3rd Dan
Posts: 3578
Joined: Mon Jan 18, 2021 5:14 pm
Location: Koude kikkerland
Languages: English (N), fr, nl, de, eo, Sranantongo,
Maintaining: es, swahili.
Language Log: https://forum.language-learners.org/vie ... 15&t=18796
x 9558

Re: Chomsky and AI

Postby Le Baron » Mon Mar 20, 2023 1:45 pm

ryanheise wrote:When we say that ChatGPT doesn't really "understand", we would mean that in the "human" sense, but there is still a sense in which ChatGPT does "understand", because in computer science there is a field of study known as natural language understanding. When you look at the progress that has been made in machine translation, the oldest translation models simply did a literal word-for-word translation without understanding the overall meaning of the sentence, and so they would get things very wrong. What has allowed these translation systems to get better is precisely that they have become better at understanding the overall meaning of the sentence, and even beyond that, the overall context of the sentence within the larger piece of text in which it appeared. Or take something much simpler: before we even started building machine learning models for this sort of thing, early systems like UNIX offered a command-line interface that was capable of understanding basic commands. It's not human understanding, but it is a kind of understanding that doesn't include consciousness as a component.
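The word-for-word failure mode described in that quote is easy to sketch. A minimal toy, with an invented two-entry lexicon rather than any real MT system:

```python
# A literal word-for-word "translation" from French to English using only
# per-token dictionary lookup, with no sense of the sentence as a whole.
# (Toy vocabulary invented for illustration.)
LEXICON = {
    "je": "I", "suis": "am", "plein": "full",
    "avocat": "avocado",  # also means "lawyer" -- the lookup can't tell
}

def word_for_word(sentence):
    """Translate each token independently, ignoring all context."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

# "Je suis avocat" means "I am a lawyer", but token-by-token lookup
# picks the wrong sense and drops the article:
print(word_for_word("Je suis avocat"))  # -> I am avocado
```

Disambiguating "avocat" requires exactly what the quote describes: taking the surrounding sentence, and often the surrounding text, into account.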

I'd contend that it doesn't understand in any real sense. Redefining words to fit a certain concept of understanding is not really legitimate. The idea being touted is that this challenges, or perhaps matches, human understanding. At this point I would posit a cat or a dog or a crow or any actually intelligent animal as a better candidate, not just something turning out creditable essays because it has the computing power to sift through millions of examples. These are the effects, the products, of intelligence and understanding, not the thing itself.

When we pit a mere electronic calculator against the average person, the latter's failure to understand a particular equation and the calculator's operational capability to solve one quickly is not 'understanding' on the calculator's part. It's like a bolt fitting into a nut: it was specifically designed to that end. The ability of a human to finally 'grasp' something, without having their head opened and the neurons rearranged into a more receptive configuration for the problem at hand, really is 'understanding'.
6 x
Pedantry is properly the over-rating of any kind of knowledge we pretend to.
- Jonathan Swift

tastyonions
Black Belt - 1st Dan
Posts: 1602
Joined: Sat Jul 18, 2015 5:39 pm
Location: Dallas, TX
Languages: EN (N), FR, ES, DE, IT, PT, NL, EL
x 3975

Re: Chomsky and AI

Postby tastyonions » Mon Mar 20, 2023 2:24 pm

It doesn't matter much for the future whether GPT really understands something as long as it can reliably spit out content that looks a lot like the product of understanding.
2 x

Le Baron
Black Belt - 3rd Dan
Posts: 3578
Joined: Mon Jan 18, 2021 5:14 pm
Location: Koude kikkerland
Languages: English (N), fr, nl, de, eo, Sranantongo,
Maintaining: es, swahili.
Language Log: https://forum.language-learners.org/vie ... 15&t=18796
x 9558

Re: Chomsky and AI

Postby Le Baron » Mon Mar 20, 2023 3:15 pm

tastyonions wrote:It doesn't matter much for the future whether GPT really understands something as long as it can reliably spit out content that looks a lot like the product of understanding.

It does matter though. This is precisely the problem we find ourselves in all the time, and always did, when people simply cobble together reports, essays, boiled-down ideas, simplifications etc. where little more than a semblance of meaning is involved. It's also the problem of automated information 'analysis', which fails to read important nuances that come from outside the direct input.

To 'look like' understanding strikes me as very bad and inadequate. It's like a façade which 'looks like' something. Like a film set looks like a row of shops, but isn't a row of shops.
2 x
Pedantry is properly the over-rating of any kind of knowledge we pretend to.
- Jonathan Swift

tastyonions
Black Belt - 1st Dan
Posts: 1602
Joined: Sat Jul 18, 2015 5:39 pm
Location: Dallas, TX
Languages: EN (N), FR, ES, DE, IT, PT, NL, EL
x 3975

Re: Chomsky and AI

Postby tastyonions » Mon Mar 20, 2023 3:26 pm

Sure, it matters in some moral sense in that it's always better to operate on true information than on falsehoods. But does it matter to the bottom line? Think like a CEO or a bean-counter. Will the money you save over years and decades by laying off half your employees really be outweighed by the occasional slipup from wonky AI work?

And that's why you keep on a few specialists or experts: they can wrangle the AI with the right prompts and check over its output so nothing glaringly terrible slips through.
1 x

Cainntear
Black Belt - 3rd Dan
Posts: 3522
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8783
Contact:

Re: Chomsky and AI

Postby Cainntear » Mon Mar 20, 2023 4:17 pm

This thread sent me to look up Chomsky in Children's Minds by Margaret Donaldson, and what should I find but a piece of paper marking the page I was looking for.

It recounts the story of investigating language by asking for a translation...
Donaldson wrote:Heinz Werner tells the story of an explorer who was interested in the language of a North American Indian tribe and who asked a native speaker to translate into his language the sentence: 'The white man shot six bears today.' The Indian said it was impossible. The explorer was puzzled and asked him to explain. 'How can I do that?' said the Indian. 'No white man could shoot six bears in one day.'
To Western adults, and especially to Western adult linguists, languages are formal systems. A formal system can be manipulated in a formal way.


This was a pretty mind-blowing thing to me, and I believe I only bought the book because it was quoted in the Open University English course I was taking.

The OU (IIRC) presented it as a clear opposition to "colorless green ideas sleep furiously" -- Chomsky set up a circular argument, because he could only create that sentence in the first place by having spent so long looking at language divorced from meaning.

Iversen wrote:And then AI happened :lol: :lol: :lol: . Actually it was already symptomatic that none of the big translation systems were based on Chomsky's ideas - they were based on endless amounts of comparisons between parallel texts, and that's already one step in the direction of AI.

Indeed. And yet I still blame Chomsky for this. When I studied an introduction to natural language processing at the turn of the century, there was still a divorce of meaning from syntax, all based on Chomsky's thinking.

Generative grammars were still king, and the big example of the superficial structural difference between passive and active sentences meant that loads of people were heading down the wrong path. Actually, I think it may even have been Chomsky's influence that led to me dropping AI completely and switching to straight Computer Science for my degree, because what they were teaching us was so wrong that I couldn't internalise it (like I often say, I intuitively knew Tesnière's valency grammars were the right way to do things before I'd even seen Tesnière's name, never mind his writing).

I firmly believe that if Tesnière had been working in the US, his theory would have been implemented in computers, and we would have got better translators earlier, and wouldn't have been stuck relying on brute force and stupendous numbers of statistical calculations to do it.
Iversen wrote:They don't construct a formal grammar (though maybe you could ask a bot to compile one!), but can make sentences that are grammatical - well, that's what you expect from babies.

It's phenomenally inefficient though -- computers can only do good language after absorbing more real language than most people would hear in a lifetime.

Even some of the less common languages have more written material than anyone could hope to read in a lifetime, and even having absorbed it all, AIs still can't get the language right.
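The scale mismatch behind that inefficiency point can be put in rough numbers. All figures below are loose order-of-magnitude assumptions for illustration, not measurements:

```python
# Back-of-envelope: lifetime human language exposure vs. a model's
# training corpus. Both figures are assumed round numbers.
words_per_year_child = 15_000_000         # assumed: ~15M words heard per year
years = 20
human_exposure = words_per_year_child * years    # ~300 million words

model_training_words = 300_000_000_000    # assumed: hundreds of billions

ratio = model_training_words / human_exposure
print(f"The model sees roughly {ratio:,.0f}x a human's lifetime input")
# -> The model sees roughly 1,000x a human's lifetime input
```

Even granting generous estimates for the human side, the gap stays at several orders of magnitude, which is the point being made above.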
2 x

ryanheise
Green Belt
Posts: 459
Joined: Tue Jun 04, 2019 3:13 pm
Location: Australia
Languages: English (N), Japanese (beginner)
x 1681
Contact:

Re: Chomsky and AI

Postby ryanheise » Mon Mar 20, 2023 4:43 pm

Le Baron wrote:I'd contend that it doesn't understand in any real sense. Redefining words to fit a certain concept of understanding is not really legitimate. The idea being touted is that this challenges, or perhaps matches, human understanding. At this point I would posit a cat or a dog or a crow or any actually intelligent animal as a better candidate, not just something turning out creditable essays because it has the computing power to sift through millions of examples. These are the effects, the products, of intelligence and understanding, not the thing itself.


No computer scientist claims that the artificial things ARE the real things, only that they are computer analogues of real things. Adding a sense to the definition of a word is a natural consequence of how any field of science progresses. You really shouldn't concern yourself with which words computer scientists in that field choose to use unless you are trying to join the conversation about developments within the field, at which point you need to appreciate the definitions used within it in order to appreciate the points made in the discussion.

A large part of what computer scientists do every day is develop computational analogues of real world dynamic processes, because we happen to have uses for these types of processes in the computer world, and we tend to name these analogues after the real things. Nouns get a prefix like "artificial" or "computer" (e.g. "artificial neuron", "computer vision"). Verbs don't get prefixes, just because it's cumbersome to have to do that to verbs. But that now means that within the field of computer science, these verbs gain additional senses that apply specifically in that context. In the case of the verb "understand", computer scientists have not changed what it means for a human to understand something, but have only added a sense of what it means for a computer to understand something. The computer sense of "understand" is analogous to the human sense of "understand", but not the same thing.

So what does "understanding" mean in the field of "Natural Language Understanding"? It means for a computer to analyse what a sentence, utterance or piece of text means. This has a number of important applications, such as human/computer interaction and building question-answering systems. If you've ever used Google Assistant, Siri, or Alexa and asked it a question, the first step is for the computer to understand the question (by which is meant: to determine the meaning of the question), and then it goes on to do further processing based on what the question means. If the assistant gives you a very wrong or off-topic answer, we would say that the assistant misunderstood the question. So yes, there is a sense in which computers understand natural language; it has a specific meaning within computer science. I find it less interesting to argue over whether a definition or sense of a word is legitimate (since many words have different senses and are used by different people in different contexts with different meanings, and that's fine). What should be more important is understanding the intended meaning of the sentence ;-)
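The "understand the question" step in an assistant pipeline is often sketched as intent-and-slot extraction. A minimal rule-based version follows; the intent names and patterns are invented for illustration, not taken from any real assistant:

```python
# Toy natural language understanding: map an utterance to an intent plus
# extracted slots, before any further processing happens.
import re

INTENTS = [
    ("get_weather", re.compile(r"weather in (?P<city>\w+)", re.I)),
    ("set_timer",   re.compile(r"timer for (?P<minutes>\d+) minutes?", re.I)),
]

def understand(utterance):
    """Return (intent, slots) for the first matching pattern, else a fallback."""
    for intent, pattern in INTENTS:
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    return "unknown", {}

print(understand("What's the weather in Dallas?"))
# -> ('get_weather', {'city': 'Dallas'})
```

An utterance that matches no pattern falls through to "unknown", which is the toy analogue of the assistant "misunderstanding" the question; modern systems replace the regexes with learned models but keep the same overall shape.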
7 x

Iversen
Black Belt - 4th Dan
Posts: 4776
Joined: Sun Jul 19, 2015 7:36 pm
Location: Denmark
Languages: Monolingual travels in Danish, English, German, Dutch, Swedish, French, Portuguese, Spanish, Catalan, Italian, Romanian and (part time) Esperanto
Ahem, not yet: Norwegian, Afrikaans, Platt, Scots, Russian, Serbian, Bulgarian, Albanian, Greek, Latin, Irish, Indonesian and a few more...
Language Log: viewtopic.php?f=15&t=1027
x 14993

Re: Chomsky and AI

Postby Iversen » Mon Mar 20, 2023 6:08 pm

You can of course say things that are impossible - not only Chomsky but also a number of poets have succeeded in doing so. But Chomsky had a purpose when he formulated a grammatically correct sentence with inbuilt contradictions that sabotaged the usual banal interpretations - he wanted to kick any reference to the meaning out of his grammar. And there is something alluring in aiming for a grammar based on observable things (words and combinations of words). But the meanings are lurking behind all the fine constructs.

The native who refused to believe that a white man could shoot six bears in a day, and that translating a certain sentence into his language was therefore impossible, was obviously wrong - the white man could have a machine gun and the bears could be tied with metal wires to iron poles, and then the translation into the language of the native would suddenly be possible (or what?) ... so maybe it would be simpler just to ask why "to happen" is only allowed in the third person? The reason is semantic in its essence - something about a lack of influence on making some occurrence occur (to occur "by hap") - but now this has been formulated as a grammatical rule. If you absolutely want to show that you have an impact on the world you must use a circumlocution ("to make things happen"). Or you could claim that you are a poet.

So the syntactical rules are ultimately shaped by hidden meanings, but the origin of these rules can be buried in layers of sound changes and folk etymologies and hapaxes and simple errors that once spread like wildfire. So the -s on the 3rd person singular of English verbs definitely has a meaning, namely to indicate that it's the 3rd person singular which is intended, but we have forgotten why it is like that. And therefore we just write into our tables that the -s is there and that "to happen" only exists in the 3rd person - end of discussion, now it's grammar.

As for AI: when I first heard about it some years ago, the explanation was that you let a computer run through thousands upon thousands of experiments which mostly fail, but the computer learns which ones are the acceptable ones and builds its next steps on that - and the process is so complicated that no human can follow it in detail. But suddenly a robot can walk up stairs or write a poem, and that's not just a question of using an algorithm to pick up existing information. It needs an algorithm to get started, but what happens after that is presumably still a black box to us.
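That learn-from-failed-experiments loop can be illustrated with a toy hill-climber that "learns" a target word purely by keeping the mutations that score no worse. This is a caricature of the idea, not how modern systems actually train:

```python
# Toy trial-and-error learning: most random mutations fail, but keeping
# the non-failing ones gradually produces the target word.
import random

random.seed(0)
TARGET = "grammar"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(guess):
    """Count positions where the guess already matches the target."""
    return sum(a == b for a, b in zip(guess, TARGET))

def learn(max_steps=100_000):
    guess = [random.choice(ALPHABET) for _ in TARGET]
    for step in range(max_steps):
        if score(guess) == len(TARGET):
            return "".join(guess), step
        mutant = guess[:]
        mutant[random.randrange(len(TARGET))] = random.choice(ALPHABET)
        if score(mutant) >= score(guess):  # keep only non-failing experiments
            guess = mutant
    return "".join(guess), max_steps

word, steps = learn()
print(word, steps)
```

Nobody writes "grammar" into the program; it emerges from many mostly-failed experiments plus a rule for keeping the good ones, which is the black-box flavour described above.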

And then I think: is that really so different from what humans do? Maybe we do have a mechanism for building world pictures and implicit grammars, and it's just really efficient? I could postulate that even an AI supercomputer still hasn't got the thing we call self-awareness - but just being able to formulate sentences that seem to answer specific questions is such a big step (and within a few years!) that I wouldn't be surprised to see computer systems that are aware that they are computers within my lifetime.
2 x

Le Baron
Black Belt - 3rd Dan
Posts: 3578
Joined: Mon Jan 18, 2021 5:14 pm
Location: Koude kikkerland
Languages: English (N), fr, nl, de, eo, Sranantongo,
Maintaining: es, swahili.
Language Log: https://forum.language-learners.org/vie ... 15&t=18796
x 9558

Re: Chomsky and AI

Postby Le Baron » Mon Mar 20, 2023 8:15 pm

ryanheise wrote:So what does "understanding" mean in the field of "Natural Language Understanding"? It means for a computer to analyse what a sentence, utterance or piece of text means. This has a number of important applications, such as human/computer interaction and building question-answering systems. If you've ever used Google Assistant, Siri, or Alexa and asked it a question, the first step is for the computer to understand the question (by which is meant: to determine the meaning of the question), and then it goes on to do further processing based on what the question means. If the assistant gives you a very wrong or off-topic answer, we would say that the assistant misunderstood the question. So yes, there is a sense in which computers understand natural language; it has a specific meaning within computer science. I find it less interesting to argue over whether a definition or sense of a word is legitimate (since many words have different senses and are used by different people in different contexts with different meanings, and that's fine). What should be more important is understanding the intended meaning of the sentence

I think the word 'meaning' is being wrongly employed here. A computer is not analysing meaning, it is analysing structure and only that. It is instructed to recognise things, not what they 'mean'. None of the AIs I have interacted with in the last few months display understanding of meaning, only simple recognition of things they have been instructed to recognise. Taken as a whole, the AI is constantly confused and confounded, and often just wrong in recognising the meaning of even simple contexts. It can't recognise things which aren't properly pointed out to it. Memory is an important factor, and this is weak in the AI models. It also appears to work in a narrow, linear way, and it is only through questions, posed with actual thought, that it is forced to 'search' for extra, specific knowledge. The get-out clause given is 'I am a learning model', yet it seems never to learn some things. That's not really a sign of understanding.

To have intelligence and understanding I expect intention or determination beyond mere outside instructions. I know this kind of talk annoys people favourable to this sort of thing, but rather than being a 'Luddite' I am only being a sceptical critic. I think it is very relevant to consider the meaning and sense of words, or they can be employed willy-nilly, their general sense lending a quality to something where it doesn't really exist.
2 x
Pedantry is properly the over-rating of any kind of knowledge we pretend to.
- Jonathan Swift

Cainntear
Black Belt - 3rd Dan
Posts: 3522
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8783
Contact:

Re: Chomsky and AI

Postby Cainntear » Mon Mar 20, 2023 9:30 pm

Iversen wrote:You can of course say things that are impossible - not only Chomsky but also a number of poets have succeeded in doing so.

Perhaps, but as I recall it, the point of the book that set me to reading Donaldson was the argument that it's all a consequence of schooling and a focus on language as a subject. We do not naturally reason about language, we just say stuff -- people who have never been to school (it is claimed) struggle to say things they can't actually imagine.
Iversen wrote:But Chomsky had a purpose when he formulated a grammatically correct sentence with inbuilt contradictions that sabotaged the usual banal interpretations - he wanted to kick any reference to the meaning out of his grammar. And there is something alluring in aiming for a grammar based on observable things (words and combinations of words). But the meanings are lurking behind all the fine constructs.

I had significant difficulty in actually parsing "colorless green ideas..." and it was presented to me as proof that grammar didn't have inherent meaning, and there I was struggling with the grammar because I couldn't connect to meaning....

Iversen wrote:The native who refused to believe that a white man could shoot six bears in a day and that translating a certain sentence into his language therefore was impossible was obviously wrong - the white man could have a machine gun and the bears could be tied with metal wires to iron poles, and then the translation into the language of the native would suddenly be possible (or what?)

The point was that if people can't imagine it, they can't say it. If I say "a purple-furred monster with a trumpet-shaped nose and 6 tentacles instead of arms", I'm actually imagining it, and I couldn't say it without imagining it (unlike ChatGPT).
Iversen wrote:... so maybe it would be simpler just to ask why "to happen" only is allowed in the third person?

Have you never happened upon structures like this...? :twisted:

The problem is that that's metalanguage and relies on knowledge of the language as a formal system, so no: it's not simpler.
Iversen wrote:So the syntactical rules are ultimately shaped by hidden meanings, but the origin of these rules can be buried in layers of sound changes and folk etymologies and hapax and simple errors that once spread like wildfires.

Indeed, and there's no real need to ask "why?" but good cause to just accept that it is. Chomsky's problem was that he did neither, and instead treated the restriction of "to happen" to the 3rd person as not being grammar at all. The guy was a fruitcake.

Iversen wrote:As for AI: when I first heard about it some years ago the explanation was that you let a computer run through thousands upon thousands of experiments which mostly fail, but then the computer learns which ones are the acceptable ones and build its next steps on that - and the process is so complicated that no human can follow it in detail.

That's machine learning, not AI.

But we call it AI in common parlance because that's sci-fi.
And we call it AI in marketing because it sells well.
0 x

sfuqua
Black Belt - 1st Dan
Posts: 1644
Joined: Sun Jul 19, 2015 5:05 am
Location: san jose, california
Languages: Bad English: native
Samoan: speak, but rusty
Tagalog: imperfect, but use all the time
Spanish: read
French: read some
Japanese: beginner, obsessively studying
Language Log: https://forum.language-learners.org/vie ... =15&t=9248
x 6314

Re: Chomsky and AI

Postby sfuqua » Mon Mar 20, 2023 9:36 pm

AIs can create. They do already. :D

I find it interesting to live in a time when parts of physics seem to be solving some of the mysteries of quantum mechanics by invoking information theory, and AIs, which are produced in a rather haphazard random way, write poetry and create art. 8-)
It makes me believe that we humans may be on the brink of a deeper insight into the underlying structure of the universe. :o

I certainly hope so, because I am bored with Netflix. :lol:
Perhaps whatever divine spark of understanding and insight that we humans have is something that develops naturally with complex neural networks.
Produced with a 3 sentence prompt:
small_smart_brave_anime_heroine.jpeg
Last edited by sfuqua on Mon Mar 20, 2023 9:47 pm, edited 2 times in total.
3 x
荒海や佐渡によこたふ天の川

the rough sea / stretching out towards Sado / the Milky Way
Basho[1689]

Sometimes Japanese is just too much...

