What does the (distant) future of language acquisition look like?

General discussion about learning languages
User avatar
EGP
White Belt
Posts: 31
Joined: Sun Mar 28, 2021 8:36 pm
Location: Australia
Languages: English (N), Macedonian (B2), German (A1)
x 51
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby EGP » Tue Apr 27, 2021 9:46 pm

Came across another article on this topic. I'm starting to think that, with what seems like an endless pandemic, I should be looking at getting into NLP. It said the global education market for AI will be worth $3.6 billion by 2023, and that they have got the development down to a 1:1 ratio - an hour of programming for an hour of lesson. It can learn by itself too. I think I've remembered the points right:

https://www.analyticsinsight.net/develo ... -ais-role/
0 x
I research English grammar and vocabulary in corpora.

Cainntear
Black Belt - 3rd Dan
Posts: 3469
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French, Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8665
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby Cainntear » Wed Apr 28, 2021 6:10 pm

EGP wrote:Came across another article on this topic. I'm starting to think that, with what seems like an endless pandemic, I should be looking at getting into NLP. It said the global education market for AI will be worth $3.6 billion by 2023, and that they have got the development down to a 1:1 ratio - an hour of programming for an hour of lesson. It can learn by itself too. I think I've remembered the points right:

https://www.analyticsinsight.net/develo ... -ais-role/

That's pretty mindblowing. I do wonder what levels of abstraction it can achieve. They mention English grammar, and I'm wondering if that's specifically grammatical analysis for people who already know English rather than teaching English grammar as a language skill.

Still, though, kind of incredible.
1 x

Beli Tsar
Green Belt
Posts: 384
Joined: Mon Oct 22, 2018 3:59 pm
Languages: English (N), Ancient Greek (intermediate reading), Latin (Beginner), Farsi (Beginner), Biblical Hebrew (Beginner)
Language Log: https://forum.language-learners.org/vie ... =15&t=9548
x 1294

Re: What does the (distant) future of language acquisition look like?

Postby Beli Tsar » Thu Apr 29, 2021 8:41 am

Iversen wrote:The first step is to give GT as many words, with as many of their possible translations, as possible. I use the program to make bilingual study printouts, so I see a lot of translations from (and to) a wide array of languages, and most problems seem to be caused by a lack of vocabulary - or, in a weaker form, a lack of knowledge about certain meanings that any decent dictionary could have supplied. But often a certain word is totally missing from GT's vocabulary, and that can happen even with words you would have expected it to know.

The third step (according to my previous rant) is then to ask the machine to choose the most relevant translation in a given context, and there the systematic analysis of contexts plays a role - but more subtle shades of meaning will be hard to communicate to a machine that doesn't have feelings and doesn't have a life outside the virtual world.

However, the point is not whether it can 'feel' the difference between, for instance, "pour que" and "parce que", as long as it can choose the correct translation in a given target language. And since there is a lot of difference between an intention and a reason, most languages have some way of indicating which one is meant - GT just has to identify the different possibilities and choose the most logical one, given the context - for instance by checking whether there is a subjunctive in the subordinate clause (and there the markers I mentioned might come in handy).

The need for some deep understanding of the semantics only arises at stage four, where a machine is told to construct new utterances.

My understanding of machine learning is weak - just conversations with a good friend who worked on it at Google, and a little reading - but as I understand it, this is roughly how people first tried to build natural language processing software, and it didn't work at all. The complexity just scales too fast, and it turned out to be impossible to feed the system enough information for it to make well-assessed choices.

And that's why today's methods - centering on brute-force feeding of a huge corpus to software that can match patterns - took over. Translation is so much harder than it looks on the surface, which is why it took so long to arrive, and why Alexa/Siri etc. still can't process natural language in any real way.
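
Just to make that concrete - with a made-up three-sentence "parallel corpus" and nothing resembling a real MT system - the statistical flavour is roughly this: count which target-language words show up alongside a source word, then pick the most frequent pairing, instead of writing grammar rules by hand.

Code: Select all
from collections import Counter

# Toy "parallel corpus" - every pair here is invented, purely for illustration.
parallel_corpus = [
    ("il travaille pour que tu réussisses", "he works so that you succeed"),
    ("il est resté parce que tu étais là", "he stayed because you were there"),
    ("elle part parce que tu le veux", "she leaves because you want it"),
]

def candidate_translations(src_phrase, candidates=("so that", "because")):
    """Count how often each candidate appears in sentences aligned to src_phrase."""
    counts = Counter()
    for fr, en in parallel_corpus:
        if src_phrase in fr:
            for c in candidates:
                if c in en:
                    counts[c] += 1
    return counts

print(candidate_translations("pour que"))   # Counter({'so that': 1})
print(candidate_translations("parce que"))  # Counter({'because': 2})

Real statistical MT obviously does vastly more than this (alignment, phrase tables, language models), but the "count patterns in data" spirit is the same.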
1 x
: 0 / 50 1/2 Super Challenge - Latin Reading
: 0 / 50 1/2 Super Challenge - Latin 'Films'

Cainntear
Black Belt - 3rd Dan
Posts: 3469
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French, Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8665
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby Cainntear » Fri Apr 30, 2021 8:05 am

Iversen wrote:GT just has to identify the different possibilities and choose the most logical one, given the context
...
The need for some deep understanding of the semantics only arises at stage four, where a machine is told to construct new utterances.

(My emphasis)
What you've done here is pick the most difficult, complex, challenging part of the system and make it sound like something small. The reason your first three steps aren't done the way you suggest is that they lead to a dead end -- you can't finish the process because computers don't understand context well enough for stage 4.
1 x

User avatar
Iversen
Black Belt - 4th Dan
Posts: 4768
Joined: Sun Jul 19, 2015 7:36 pm
Location: Denmark
Languages: Monolingual travels in Danish, English, German, Dutch, Swedish, French, Portuguese, Spanish, Catalan, Italian, Romanian and (part time) Esperanto
Ahem, not yet: Norwegian, Afrikaans, Platt, Scots, Russian, Serbian, Bulgarian, Albanian, Greek, Latin, Irish, Indonesian and a few more...
Language Log: viewtopic.php?f=15&t=1027
x 14962

Re: What does the (distant) future of language acquisition look like?

Postby Iversen » Fri Apr 30, 2021 6:26 pm

Well, how do humans choose the best translation out of several possible ones? It must be done by drawing on semantic fields established through thousands of hours of receiving and decoding input, plus a little bit of clever reasoning. Could a machine do the same thing - i.e. compose semantic fields and then apply a little bit of logic to each task? I actually think both things are possible, even though they haven't been implemented fully yet. I do know that 'classical' translation programming with lots of human input (rules, words) was abandoned in favour of brute-force culling from bilingual texts plus some clever statistical analysis. With faster processing speeds and access to immense data collections, it should be possible to integrate association formation into this process.

Of course nobody in their right mind would hire humans to manually attach associations to the words in the big belly of mighty GT, so it would take some kind of automatic collection process to attach a number of frequent words to each word in the collection. After all, that must be the idea behind association formation. How much logical analysis is necessary to do this in a smart way instead of just using brute force? I don't know, but I suspect it is something AI could accomplish - first in rather uncomplicated contexts, later hopefully in more complex settings.
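
To show what I mean by an automatic collection process (with a completely made-up three-line corpus, so just a sketch of the principle): for each word, count which other words turn up within a small window around it, and keep the most frequent ones as a crude association field.

Code: Select all
from collections import Counter, defaultdict

# Invented mini-corpus, only to illustrate the counting idea.
corpus = [
    "the roof of the house leaks when it rains",
    "the chimney sits on top of the roof of the house",
    "the dog chased the cat around the house",
]

def association_fields(sentences, window=2, top_n=3):
    """For each word, collect the words that most often occur within +/- window."""
    fields = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for i, word in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    fields[word][words[j]] += 1
    return {w: [n for n, _ in c.most_common(top_n)] for w, c in fields.items()}

print(association_fields(corpus)["roof"])  # the most frequent neighbours of "roof"

Whether such raw co-occurrence lists can be refined into real relations ("roofs are on top of houses") is of course exactly the hard part.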

But there is a problem known as pattern (or 'Gestalt') recognition. Let me first take a simple example: a machine can recognize the letter Q by comparing it to known instances of this letter. A knowledgeable human will have noticed that there is an oval (a closed shape) with a little thingy attached at the lower right, and he/she can extend this to acknowledging a rectangle with a little thingy attached at the lower right as the letter Q (OK, a somewhat misshapen Q, or a Q subjected to the whims of a designer, but still a Q as long as there isn't any more likely interpretation). If you can make a computer think in terms of shapes, then it can deal with more complicated (or alternatively shaped) objects than it can with raw bit-by-bit comparison. So part of the association field must be the observation that roofs are on top of houses, but that their actual shape can differ. In other words, things at the top of houses are likely to be roofs, unless they look like chimneys. The shape of dogs is an even more complicated example, but I have read somewhere that the feat of distinguishing dogs from cats has already been accomplished by some computer nerds and their machines.

So in other words the associative fields should not be mere collocations of words that often occur together, but there must be some level of relation analysis involved. And if that can be provided then I do think that machines eventually can learn to understand human utterances, instead of just being able to translate them.

The final step would be to make the machines (and behind them a cohort of connected big-brother megamachines) active. But what might a computer want to tell you, apart from simple things like weather reports and parking penalties and the location of your child? And from another field: do computers really want to write symphonies? Or if they do, would they want to make their works pleasing to humans rather than to other computers? Or maybe they would reason that it is easier to please cockroaches than humans, and then we are doomed. To avoid this we may have to be permanently connected to the big brains out there so that they recognize us as essential for some part of their functioning - for example as pattern recognizers, since that's one of the few areas where we still surpass the computers.

1 x

User avatar
EGP
White Belt
Posts: 31
Joined: Sun Mar 28, 2021 8:36 pm
Location: Australia
Languages: English (N), Macedonian (B2), German (A1)
x 51
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby EGP » Fri Apr 30, 2021 11:54 pm

I didn't read far into the very long articles that were referred to in the last article I shared. I quickly got lost. It was almost as if a computer had written half of it. :lol:

For that reason, I didn't totally understand or buy the idea of AI learning to teach by itself. I'm not saying it won't be possible in the future, though. I get that a human can teach it how to teach certain things. But it is quite a huge step again to suggest the AI would work out what should be learnt next. Curriculum/learning design will remain largely a human thing, I hope. Following that curriculum: no problem.

I imagine an AI following a textbook approach, something like this (with a rough sketch of the flow after the list):

1. "Now students, open your book to page 10."

2. "Discuss your partner's favourite hobby with them for 5 minutes."
(Timer starts on the projector.)

3. "Time's up."

4. Calls on the attendance data for students who are present: "Jim, what did your partner tell you?"

5. Jim answers.

6. Voice recognition hears the words "collect stamps".

7. Retrieves images of stamp collecting and shows them on the projector.

8. Looks into the corpus for examples related to stamp collecting and today's grammar point.

9. Projects the examples for all to see.

10. Focus on forms, etc.
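
Purely as a sketch of that flow - every function name below (show_on_projector, recognise_speech, corpus_examples) is hypothetical, standing in for whatever display, speech-recognition and corpus tools such an AI would actually use:

Code: Select all
import time

def run_lesson(students, grammar_point, show_on_projector, recognise_speech, corpus_examples):
    show_on_projector("Open your book to page 10.")                 # step 1
    show_on_projector("Discuss your partner's favourite hobby.")    # step 2
    time.sleep(5 * 60)                                              # 5-minute timer (steps 2-3)
    show_on_projector("Time's up.")                                 # step 3
    student = students[0]                                           # step 4: pick from attendance data
    answer = recognise_speech(f"{student}, what did your partner tell you?")  # steps 5-6
    topic = "stamp collecting" if "collect stamps" in answer else answer      # step 6
    show_on_projector(f"images of {topic}")                         # step 7
    show_on_projector(corpus_examples(topic, grammar_point))        # steps 8-10: examples, focus on forms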
0 x
I research English grammar and vocabulary in corpora.

User avatar
einzelne
Blue Belt
Posts: 804
Joined: Sat Mar 17, 2018 11:33 pm
Languages: Russian (N), English (Working knowledge), French (Reading), German (Reading), Italian (Reading on Kindle)
x 2882

Re: What does the (distant) future of language acquisition look like?

Postby einzelne » Sun May 02, 2021 3:16 pm

"...computer engineers, just like many neuroscientists, go to great lengths to filter out "background noise" and "stray" electrical fields from their binary signal.
This is a big difference between computers and brains. For computers, spontaneous fluctuations create errors that crash the system, while for our brains, it's a built-in feature. "

As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible."

https://www.salon.com/2021/04/30/why-ar ... -dead-end/
1 x

User avatar
EGP
White Belt
Posts: 31
Joined: Sun Mar 28, 2021 8:36 pm
Location: Australia
Languages: English (N), Macedonian (B2), German (A1)
x 51
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby EGP » Mon May 03, 2021 5:01 am

Nice article, but I'm not sure what you wanted to say by quoting it directly without commenting on the topic yourself.

I believe the main point is that to learn you must play and associate freely - and for humans that can happen during sleep. AI doesn't do that. But this is what the previous article was touching on too. Making mistakes is also important: it's the background noise, the bits and pieces we didn't quite intend to utilise.
0 x
I research English grammar and vocabulary in corpora.

Cainntear
Black Belt - 3rd Dan
Posts: 3469
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French, Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8665
Contact:

Re: What does the (distant) future of language acquisition look like?

Postby Cainntear » Tue May 04, 2021 11:14 am

einzelne wrote:As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible."

But that's the point of statistical translation -- we're not trying to write an algorithm, we're trying to develop an experiential model, as close as possible to what humans do.

Is it a perfect model? No.
Are we still too "idealised" in our model? Yes.

But the whole thing is evolving pretty quickly and there are plenty of mainstream AI techniques that build in things like noise functions and random chance to get results that algorithms and pure mathematics can't achieve.
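
For a trivial, generic illustration (standard textbook stuff, nothing specific to translation): simulated annealing deliberately accepts some worse moves at random, which lets it escape traps that a purely deterministic, greedy search would get stuck in.

Code: Select all
import math, random

def f(x):
    return x**4 - 3 * x**2 + x   # a bumpy curve with more than one dip

def anneal(x=2.0, temp=2.0, cooling=0.95, steps=500):
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)          # random proposal: the "noise"
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate                                  # sometimes accept a worse point
        temp *= cooling
    return x

print(round(anneal(), 2))  # usually ends up near the global minimum despite the bumps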
1 x

