AI stuff (was: How to properly do L-R method)

General discussion about learning languages
Cainntear
Black Belt - 3rd Dan
Posts: 3538
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8813

AI stuff (was: How to properly do L-R method)

Postby Cainntear » Fri Mar 15, 2024 4:20 pm

Right, I figure I've probably already dragged the thread far enough off-topic, but I don't want to shut up yet, so I figure I'd best start a new one!!
Ug_Caveman wrote:
Cainntear wrote:brains are bad at emulating computers... so what sort of emulation are you going to get if you try to use a computer making an imperfect emulation of a human brain making an imperfect emulation of a computer?

Don't be so sure ;)

This is actually more interesting than you might think (even aside from the little joke of the modem sound effects!)

What's interesting about this is that the establishment in Frank Herbert's universe conflated the concepts of AI and computers, because the two things were still pretty new when he was writing, and were easily confused. There was a lot of talk of AI being where computers were heading, as though there was going to be a massive paradigm shift, but that never really happened. Computers continued to evolve along the path they were already on before the 1960s, with no serious effort made to shift to a model that emulated human neurones at the hardware level.

The mentats weren't trying to simulate a computer that simulated a brain, but rather to simulate a computer that... was a computer. Herbert didn't see the difference, and a lot of people still don't. I started university shortly after Deep Blue beat Kasparov at chess. The press then were talking about it as AI. My lecturers, even back when it was recent news, were stressing that it's not AI -- it's just pattern matching. It wasn't even attempting to model human thought, which was why it could do better. The story has resurfaced because it's still misinterpreted as a milestone in AI. It was not. Deep Blue wasn't doing anything new -- it was just a computer with enough memory and processing speed to work through a number of probable states that were scored by how likely each player was to win.
In fact, I seem to recall that one of the most interesting things coming out of this was the discovery that the number of "lookahead" steps likely to result in a computer win over a human player was actually smaller than everyone expected at the time. People had been predicting Deep Blue would lose because of the low number of lookaheads it could do.
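
Just to show how mundane the core idea is, here's a toy sketch of depth-limited lookahead in Python, played on a trivial take-1-or-2 Nim game instead of chess (my invented example; Deep Blue's real version was custom hardware with a far more sophisticated scoring function):

Code: Select all
# Toy depth-limited "lookahead" on a trivial game (Nim with 5 stones),
# just to show the shape of the idea. Nothing like Deep Blue's real code.

def legal_moves(stones):
    # You may take 1 or 2 stones (toy rules).
    return [m for m in (1, 2) if m <= stones]

def evaluate(stones):
    # Crude placeholder score from the point of view of the side to move.
    return -1.0 if stones == 0 else 0.0  # no stones left = you already lost

def search(stones, depth):
    # Look ahead `depth` plies; return the best score the mover can force.
    moves = legal_moves(stones)
    if depth == 0 or not moves:
        return evaluate(stones)
    # Negamax: whatever is best for the opponent is worst for us.
    return max(-search(stones - m, depth - 1) for m in moves)

print(search(5, depth=4))  # positive means the mover can force a win

The whole "intelligence" is just how deep you can look and how good the scoring function is -- memory and processing speed, like I said.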
3 x

Cainntear
Black Belt - 3rd Dan
Posts: 3538
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8813

Re: AI stuff (was: How to properly do L-R method)

Postby Cainntear » Fri Mar 15, 2024 4:43 pm

emk wrote:
Cainntear wrote:Yup, and this is why AI is a fool's errand.

I'm pretty sure we are never going to agree on what AI is good at, what it's bad at, or where it's going.

GPT-3.5 is a fairly skilled improv actor playing a specific character, "the helpful and harmless assistant." Somehow, playing the assistant character actually allows it to do certain kinds of useful work. I can tell it, "You're a subtitle translator helping language learners," and it will produce surprisingly reasonable translations. Not perfect ones, but more than good enough for me to use. And this happens even though GPT-3.5 was never trained to be a translator. It just saw humans translating, and somehow built up a translation system. Just in case, you know, someone happened to ask it to pretend to be a translator.

Yes, but it ignores a massive problem: it isn't the lack of precision and correctness, it's that there is no audit trail. We have gone out of our way to force companies to be accountable and use systems to mitigate human biases, and now we're introducing something that is simulating biases and exaggerating them, and cannot follow a corporate handbook to take the steps that humans are expected to. (e.g. CV/résumé selection bots that will reject a CV with an African name at the top, then recommend a CV with a more stereotypically white, English name.)
People keep shrugging this sort of thing off.

emk wrote:Now, in practical terms, I can use GPT-3.5-Turbo to translate 22 minutes of easy television in 3 minutes of server time. As far as anyone can figure out, the early, unoptimized versions of GPT-3.5 ran on about US$80,000 worth of hardware, but they've reduced that with the GPT-3.5-Turbo models. Translating an episode costs me about $0.03. The speech-to-text, which is also pretty good, costs me about $0.15/episode using Whisper-1. Again, not flawless, but more than good enough for my purposes.

And this is why AI is very, very dangerous.
The fact that it's a negligible cost means that the errors are considered a low price to pay by the accounting people. Avoiding or even just getting rid of the errors would be vastly expensive by comparison, so we're in a race to the bottom.
If you pay subtitlers to translate your content, you're trying to compete with other firms that use AI.
But as subtitles go AI, the worst affected people will be the deaf and partially deaf, because transcriptions are a poor alternative to carefully crafted subtitles.
But they won't be the only people affected. The experience for viewers with full hearing will be degraded, because subbed translations are staggeringly important. That means that world cinema is going to be harder and harder to justify financially, and we're just going to dig ourselves deeper into Hollywood's pocket.
emk wrote:And when they don't know the answer, they fall back on their improv training. But if you ask them for example sentences, that falls squarely within their improv abilities.

The problem is that while "I don't know" and "not sure" are exceptionally common in human speech, they're very rarely written down. The internet is full of questions asked where the answer is known -- there are very few FAQs that have an "I don't know" answer.

I don't think "improvising" is really the right word. The computer doesn't know that it doesn't know, in the same way an actor might.

emk wrote:But once you understand these weak points, you can absolutely get a GPT model to help with language learning.

...but...
Le Baron wrote:My main objection around AI is that it can't really guide you and instead the learner is guiding the AI to guide the learner! This puts a fairly major limitation in its usefulness. The interface between technology and human use is still fairly weak.

Few people seem to want to address this.

Exactly. AI is a tool that needs a skilled operator, because the operator needs to at least know enough to be able to identify when the computer is wrong. This is, perhaps surprisingly, a problem that gets worse as AIs get better. People will become increasingly confident in the AI's output and will become less and less capable of analysing it.
2 x

Ug_Caveman
Green Belt
Posts: 464
Joined: Fri Nov 16, 2018 2:58 am
Location: England
Languages: English (N), Dutch (A2 - July 2021), working towards B1
x 1093

Re: AI stuff (was: How to properly do L-R method)

Postby Ug_Caveman » Fri Mar 15, 2024 4:59 pm

On the subject of Dune... One of my favourite things from the Villeneuve adaptation is how much depth is added to the Sardaukar compared to the 1984 adaptation (note I've not read the books beyond the first one, so please don't spoil too much for me!) :)

I think the way they speak is rather interesting. There are clear parallels to English, just with a much harsher phonology. As for the throat singing, though... not so sure I was able to get any of it.
1 x
Languages: English (N), Dutch (passed A2 exam in May 2021, failed B1 in May 2023 - never sit an exam when you have food poisoning!)

Seeking: Linguaphone Polish and Linguaphone Afrikaans

emk
Black Belt - 1st Dan
Posts: 1708
Joined: Sat Jul 18, 2015 12:07 pm
Location: Vermont, USA
Languages: English (N), French (B2+)
Badly neglected "just for fun" languages: Middle Egyptian, Spanish.
Language Log: viewtopic.php?f=15&t=723
x 6744

Re: AI stuff (was: How to properly do L-R method)

Postby emk » Fri Mar 15, 2024 6:05 pm

I posted some thoughts on AI and language learning in the original thread.

emk wrote:Now, in practical terms, I can use GPT-3.5-Turbo to translate 22 minutes of easy television in 3 minutes of server time. As far as anyone can figure out, the early, unoptimized versions of GPT-3.5 ran on about US$80,000 worth of hardware, but they've reduced that with the GPT-3.5-Turbo models. Translating an episode costs me about $0.03. The speech-to-text, which is also pretty good, costs me about $0.15/episode using Whisper-1. Again, not flawless, but more than good enough for my purposes.

I'm pretty sure I can also get GPT to handle requests of the form, "Explain what the words '...' mean in this sentence, and give me two examples of how to use them." Although in this case, I might need to cough up the money for GPT-4-Turbo. (Migaku actually has this working surprisingly well in their flash card creator—about 50% better than I would expect GPT-3.5-Turbo to do without very clear instructions.)

For actual practical demos (and some examples of where the tech fails), see my "How not to learn Spanish" log. This gives me better subtitles than the average French DVD publisher provides (not hard), and perfectly reasonable translations (certainly better than Google Translate).
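
If anyone's curious about the shape of that pipeline, here's a minimal sketch using the standard OpenAI Python client (v1.x). This is not my actual tool: the file name and prompt wording are invented, and all the error handling and subtitle chunking are left out:

Code: Select all
# Sketch only: Whisper-1 for speech-to-text, GPT-3.5-Turbo for translation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the episode's audio (~$0.15/episode in my experience).
with open("episode01.mp3", "rb") as audio:  # invented file name
    srt = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        response_format="srt",  # timestamped subtitle format
    )

# 2. Translate the subtitles (~$0.03/episode).
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You're a subtitle translator helping language learners."},
        {"role": "user",
         "content": f"Translate these subtitles into English:\n\n{srt}"},
    ],
)
print(response.choices[0].message.content)

In practice you'd translate in chunks and keep the SRT numbering intact, but that's the whole trick: a role, an instruction, and the text.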

Cainntear wrote:Deep Blue wasn't doing anything new -- it was just a computer with enough memory and processing speed to work through a number of probable states that were scored by how likely each player was to win.

Deep Blue is ancient history at this point. It's like comparing the Wright brothers' airplane to a modern air-superiority fighter. They both fly, and they're both heavier than air, but trying to use one to draw inferences about the other is likely to mislead more than it helps.

One modern system for deterministic board games is AlphaGo Zero. Go can't be tackled with brute-force search like chess, because the board is too big and the pieces all have equal weight. But AlphaGo Zero taught itself the game from scratch, with no examples of human play as input. Within three days, it could beat the world champion.

But Go is still too easy, because players can see the whole board. Current research focuses on strategy games where players have imperfect information.

LLMs (Large Language Models) like GPT are a weird offshoot. They are essentially trained to be "improv actors", able to play different characters and write in different styles. But someone told them, "Hey, I want you to play the role of a helpful assistant who follows instructions." (And gave them plenty of examples.) And suddenly the model started beating state-of-the-art performance on a wide variety of tasks, ones that previously needed specialized tools. This was a pretty shocking development.

The free GPT-3.5 is pretty easy to break, and when it breaks, it falls back on sheer improv. GPT-4 is noticeably more robust, and it does much better on college level exams. But even GPT-4 cannot reliably plan and execute multi-step tasks. And it has no memory and no real internal monologue. And it can't learn new tricks by interacting with the world, because the underlying model is read-only. It learned about the world by reading books and looking at photos. Frankly, given these limitations, it performs pretty well.

Here is a lashed-up mix of several different AI models trying to interact with the world and follow instructions from a human. This is about as good as these models get if they need to plan and interact with reality, and I'm sure that this video is the best run of 10 (at least).

But all the limitations are being worked on. Thousands of very smart academics have just overhauled their research programs, we're seeing ludicrous investments in specialized chips, and an entire industry of well-funded companies are trying to catch up to OpenAI.

Cainntear wrote:And this is why AI is very, very dangerous.

I do not truly fear an AI that makes lots of easily-spotted mistakes. Instead, I fear the first AI that doesn't. If we're talking Dune, I have a lot of sympathy for the Butlerians.

We can use the existing tools of politics and government to deal with unreliable AIs, if we get our act together. We are not ready, however, to deal with an AI that could flawlessly perform complex tasks and carry out goals. I would strongly prefer we not build one until we've gotten a lot wiser, and carefully thought through the consequences.

Cainntear wrote:But as subtitles go AI, the worst affected people will be the deaf and partially deaf, because transcriptions are a poor alternative to carefully crafted subtitles.

Honestly, as a student of French, I don't buy this argument. When I was learning to listen to French, the majority of French DVDs had no subtitles at all. And when I did get subtitles, they normally had a very loose relationship to the spoken dialog. As recently as 12 years ago, few French publishers cared at all.

My Whisper-1 results with intermediate Spanish TV are producing much more accurate subtitles than the ones that all but 4 of the episodes originally came with. I'll take a few errors here and there over hand-crafted subtitles that don't match the audio at all.

It's not like I was going to hire translators just to produce subs. Especially if I wasn't even allowed to share them with other people.

And if I were deaf, I'd be trying to get Whisper to transcribe real-life lectures, and display them on the inside of my glasses. I'd trade in France's mediocre attempts at subtitles for semi-reliable real-time transcription in a heartbeat, I'm pretty sure.

Cainntear wrote:Exactly. AI is a tool that needs a skilled operator, because the operator needs to at least know enough to be able to identify when the computer is wrong. This is, perhaps surprisingly, a problem that gets worse as AIs get better. People will become increasingly confident in the AI's output and will become less and less capable of analysing it.

This observation is exactly correct, at least until the output gets good enough that nobody cares about the errors.

In the stuff that I am doing, I am largely working around this by focusing on approaches where an observant learner will be able to identify and ignore most of the errors, and where the sheer volume of correct examples will outweigh any mistakes. I'm building command-line tools for language-learning hobbyists doing extensive watching. Not selling courses to schools.

I've watched a number of programmers using GitHub Copilot, an AI coding assistant. It's interesting how the results are affected by skill level:

  1. People who can barely code at all can actually now glue together dodgy programs that enable them to automate things. I actually see this as a win. It's buggy but empowering.
  2. Junior programmers can sometimes get lost in a maze of subtly broken code, when perhaps they could be learning to be precise and accurate instead.
  3. Skilled programmers write a short comment, then they wait half a second for Copilot to implement the function, and then they spot most errors at a glance. Those errors get pruned and Copilot gets asked to generate something better. At full speed, it's impressive. But the bottleneck is proofreading and designing automated QA.
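
To make that third case concrete, here's the shape of the comment-driven workflow: the programmer types the comment and the signature, and the body is the kind of completion Copilot offers (an invented example, not a real Copilot transcript):

Code: Select all
# Parse an SRT timestamp like "01:02:03,456" into seconds.
def parse_srt_timestamp(stamp: str) -> float:
    hours, minutes, rest = stamp.split(":")
    seconds, millis = rest.split(",")
    return (int(hours) * 3600 + int(minutes) * 60
            + int(seconds) + int(millis) / 1000)

The reviewing skill is noticing at a glance whether it split on "," or "." (SRT uses a comma; many other formats use a dot). That's the proofreading bottleneck I mean.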
If I could communicate one idea: This stuff is very real, if not especially reliable at the moment. And it's going to get better. If we're clever, we can do some neat tricks with it right now. But we need to extrapolate ahead 10-20 years and really start thinking about the larger issues. These are conversations that the broad public should get some informed say in, not just a few billionaire tech execs.
6 x

Le Baron
Black Belt - 3rd Dan
Posts: 3578
Joined: Mon Jan 18, 2021 5:14 pm
Location: Koude kikkerland
Languages: English (N), fr, nl, de, eo, Sranantongo,
Maintaining: es, swahili.
Language Log: https://forum.language-learners.org/vie ... 15&t=18796
x 9573

Re: AI stuff (was: How to properly do L-R method)

Postby Le Baron » Fri Mar 15, 2024 6:31 pm

I'm finding it very hard to be enthusiastic. I'll shrink it back down to AI application to language learning, because whilst I have broader views about AI in society (and why I think it should be highly regulated and in some cases simply banned), what is really relevant here is outcomes in language learning.

I'd like to see and know about real benefits in getting people to their goals. And specifically clear demonstrations that AI is not just moving people in that direction, but doing so faster and with less effort, as is often claimed. If it isn't just all about speed and effort reduction, then I want to know why ordinary learning isn't good enough.

Since it's already acknowledged now that the learner is leading the assistant in this scenario, I'd like to know how this especially benefits a learner. I can state now that I don't think it really benefits or even affects the things you have to do to learn ('acquire') a language. No AI is going to obviate the task of us having to read lots of books and listen to lots of audio content and, when push comes to shove, actually talk to people. I'm not anti-technology. There are obviously useful things, e.g. automation of making subs from audio for transcripts, using Anki, analysing for useful vocabulary. Though really you could do none of this and still learn your chosen languages, as people have done for over 1000 years.

All my adult life has been during the tech revolution and in this time I've seen associated social organisation - much of which is now in the hands of 'smart' technology - actually go backwards and problems of communication not just remaining unsolved, but being made worse.

Most of all it's so damn boring.
1 x
Pedantry is properly the over-rating of any kind of knowledge we pretend to.
- Jonathan Swift

Cainntear
Black Belt - 3rd Dan
Posts: 3538
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8813

Re: AI stuff (was: How to properly do L-R method)

Postby Cainntear » Fri Mar 15, 2024 7:28 pm

emk wrote:
Cainntear wrote:Deep Blue wasn't doing anything new -- it was just a computer with enough memory and processing speed to work through a number of probable states that were scored by how likely each player was to win.

Deep Blue is ancient history at this point. It's like comparing the Wright brothers' airplane to a modern air-superiority fighter. They both fly, and they're both heavier than air, but trying to use one to draw inferences about the other is likely to mislead more than it helps.

Which is kind of beside the point, because the Wright flyer was a plane (although it arguably only took off because of the wing-in-ground effect). Deep Blue was not an AI, but it was talked about as being AI, and is now again talked about as a milestone in AI. Deep Blue was also not the first chess-playing computer, and it did not predate AI.

The fact that Deep Blue is now being talked about again as a milestone in AI is like calling Concorde a milestone in the development of helicopter flight.

emk wrote:LLMs (Large Language Models) like GPT are a weird offshoot. They are essentially trained to be "improv actors", able to play different characters and write in different styles. But someone told them, "Hey, I want you to play the role of a helpful assistant who follows instructions." (And gave them plenty of examples.) And suddenly the model started beating state-of-the-art performance on a wide variety of tasks, ones that previously needed specialized tools. This was a pretty shocking development.

But aren't you anthropomorphising here...? They're performing statistical inference to determine what is the likely human response to a certain query in a certain situation, and then, to make it look more realistic, they throw in a few random numbers.
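
And the "few random numbers" bit is quite literal. The model assigns a score to every candidate next token, and the output is sampled from those scores rather than just taking the top one. A toy illustration (Python; the scores are invented, and a real model has tens of thousands of candidates):

Code: Select all
import math
import random

def sample_next_token(scores, temperature=0.8):
    # Lower temperature = safer choices; higher = more "creative".
    scaled = {tok: s / temperature for tok, s in scores.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    # This random draw is all the "creativity" there is.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Invented scores for the word after "The cat sat on the":
print(sample_next_token({"mat": 5.1, "sofa": 4.3, "moon": 1.2}))

No understanding anywhere in that loop -- just a probability table and a dice roll.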

emk wrote:The free GPT-3.5 is pretty easy to break, and when it breaks, it falls back on sheer improv. GPT-4 is noticeably more robust, and it does much better on college level exams. But even GPT-4 cannot reliably plan and execute multi-step tasks. And it has no memory and no real internal monologue. And it can't learn new tricks by interacting with the world, because the underlying model is read-only. It learned about the world by reading books and looking at photos. Frankly, given these limitations, it performs pretty well.

It didn't learn about the world at all. It knows what a human is likely to say, not why. Yes, given these limitations, its performance is incredibly impressive, and I'm not saying otherwise. I'm just saying the limitations are a problem that we shouldn't be accepting.

emk wrote:But all the limitations are being worked on. Thousands of very smart academics have just overhauled their research programs, we're seeing ludicrous investments in specialized chips, and an entire industry of well-funded companies are trying to catch up to OpenAI.

And all the big research is being done in private enterprise because the data gathering would get you laughed out of a research ethics board meeting. The research is therefore directed to profitable commercial ends, instead of working on specific subtasks and then later building a full system.

emk wrote:
Cainntear wrote:And this is why AI is very, very dangerous.

I do not truly fear an AI that makes lots of easily-spotted mistakes. Instead, I fear the first AI that doesn't.

....
...
...?

Erm... I think that was my point, really. I was talking about the danger of future AIs having a high enough accuracy that mistakes will be trusted.

But even now we're already seeing some of those "easily-spotted" mistakes slipping past people who don't know what they're doing.

As AIs get better, their mistakes will become harder to spot, and when you repeat a falsehood, it can slip into folk belief.

But even on a subtler level, the most complex human stuff is all mediated by us having fundamentally similar neurology, and things that humans do don't always make sense. We understand each other through empathy -- thinking about what could make us act the same way. Current AIs aren't really attempting to emulate the structure of the human brain, so any "thought" that they do is alien thought, and they're always going to be stuck superimposing dumb mimicry of human behaviour.

emk wrote:
Cainntear wrote:But as subtitles go AI, the worst affected people will be the deaf and partially deaf, because transcriptions are a poor alternative to carefully crafted subtitles.

Honestly, as a student of French, I don't buy this argument. When I was learning to listen to French, the majority of French DVDs had no subtitles at all. And when I did get subtitles, they normally had a very loose relationship to the spoken dialog. As recently as 12 years ago, few French publishers cared at all.

As a student of French, you want transcriptions to supplement your hearing. You are getting better served, but that could be at the deaf community's expense. Transcriptions presented as subtitles aren't proper subtitles. My fear is that subtitling will die completely as transcription becomes practically free. Web video already has the problem that most YouTube videos aren't subtitled, and now that the YouTube transcriber is free (for major languages), who is going to bother making the effort? The deaf community are further marginalised.

emk wrote:My Whisper-1 results with intermediate Spanish TV are producing much more accurate subtitles than the ones that all but 4 of the episodes originally came with. I'll take a few errors here and there over hand-crafted subtitles that don't match the audio at all.

Again, great for a language learner, not so good for the deaf person or the person who doesn't know Spanish and just wants to watch a Mexican film. Fully accurate transcripts (or translations thereof) take a long time to read, drawing your eyes away from the action, which is why subtitles are often "wrong" by some measures.
But now we're getting the worst of both worlds, because those "few errors here and there" are actually making the reading experience slower than the perfect transcription, which is already slower than hand-crafted subtitles. Correcting the errors takes time and thought, so it's actually really hard to enjoy a film when you're just stuck reading all the time.
emk wrote:And if I were deaf, I'd be trying to get Whisper to transcribe real-life lectures, and display them on the inside of my glasses. I'd trade in France's mediocre attempts at subtitles for semi-reliable real-time transcription in a heartbeat, I'm pretty sure.

That's (a) a pretty different scenario from TV (so I don't know why you seem to be comparing them) and (b) a lot of hypothetical stuff you can't know for sure; I mean, how do you know what the hell you'd want if you were deaf? It's like me saying "if I was religious, I would say that there was no god, cos, like, I know there's no god."
1 x

bombobuffoon
Yellow Belt
Posts: 85
Joined: Sat Mar 02, 2024 10:33 am
Languages: English N-C1
Finnish A0-A1
x 179

Re: AI stuff (was: How to properly do L-R method)

Postby bombobuffoon » Fri Mar 15, 2024 8:52 pm

Le Baron wrote:I'm finding it very hard to be enthusiastic. I'll shrink it back down to AI application to language learning, because whilst I have broader views about AI in society (and why I think it should be highly regulated and in some cases simply banned), what is really relevant here is outcomes in language learning.

I'd like to see and know about real benefits in getting people to their goals. And specifically clear demonstrations that AI is not just moving people in that direction, but doing so faster and with less effort, as is often claimed. If it isn't just all about speed and effort reduction, then I want to know why ordinary learning isn't good enough.

Since it's already acknowledged now that the learner is leading the assistant in this scenario, I'd like to know how this especially benefits a learner. I can state now that I don't think it really benefits or even affects the things you have to do to learn ('acquire') a language. No AI is going to obviate the task of us having to read lots of books and listen to lots of audio content and, when push comes to shove, actually talk to people. I'm not anti-technology. There are obviously useful things, e.g. automation of making subs from audio for transcripts, using Anki, analysing for useful vocabulary. Though really you could do none of this and still learn your chosen languages, as people have done for over 1000 years.

All my adult life has been during the tech revolution and in this time I've seen associated social organisation - much of which is now in the hands of 'smart' technology - actually go backwards and problems of communication not just remaining unsolved, but being made worse.

Most of all it's so damn boring.


I share the sentiment that AI is very boring. I actually think it's a dead end. It keeps getting flogged, but it's really not up to anything at all.

Having said that I do use it for language learning with some large caveats.
I have been trying out ChatGPT with voice and TalkPal AI.

My experience has been that these are in fact not to be treated as AI, because they are actually search tools: search engines that are good at handling fuzzy questions and giving inaccurate results. When you think of them that way, they become useful. Otherwise they are at best a waste of time if used as language teachers.

However, where I have found use is in things like providing dynamic scripts to respond to, getting me to talk out loud. Now, I could use a book (and I do, and the material in books is far superior) to do some talking or reading practice, but the dynamic scripts force me to adapt a bit. It also introduces new vocabulary. I think if they improved the rubbish bot voices it might be useful for listening. It's good for generating exercises and sample sentences, with the massive caveat that they sound truly unrealistic and ridiculous. No human would ever talk the way it spits out those sentences.

And in general it's really awful at detecting errors, understanding voices, understanding sentences. It's like it works pretty well for practice if you're already fluent, but if you are learning it's really limited. You have to understand the language pretty well before you use those things, as they are full of inaccuracies and lies. They really are not too far off those telephone robots everyone hates, to be honest.

I still think it's useful for me, just forcing me to start speaking. To help me get over my fear of making mistakes. Learning how to form conversation topics. So it's more of a therapist than a teacher.
2 x

Severine
Yellow Belt
Posts: 68
Joined: Sat Dec 10, 2016 10:00 pm
Location: Vancouver, Canada
Languages: English (N), Latin (Adv.), Ancient Greek (Adv.) French (Adv.), Spanish (Int.), Russian (Int.), Italian (Rusty Int.), Mandarin (Beg.)
Language Log: https://forum.language-learners.org/vie ... 15&t=20198
x 310

Re: AI stuff (was: How to properly do L-R method)

Postby Severine » Fri Mar 15, 2024 10:09 pm

Cainntear wrote:We have gone out of our way to force companies to be accountable and use systems to mitigate human biases, and now we're introducing something that is simulating biases and exaggerating them, and cannot follow a corporate handbook to take the steps that humans are expected to. (e.g. CV/résumé selection bots that will reject a CV with an African name at the top, then recommend a CV with a more stereotypically white, English name.)
People keep shrugging this sort of thing off.


This is more important than many people realize. It's already impacting real people. There's a good documentary on Netflix called 'Coded Bias' that I would recommend to anyone interested.

Going back to the topic of language learning, one thing I have noticed among my students who have tried to use AI to help with their language learning is that AI intervention is reinforcing existing gaps in ability and education, not equalizing them. This is one of my main concerns with AI as a language learning tool, or rather one of my biggest disappointments.

I teach a very diverse group of adults that includes people from all stations of life from all over the world. Different native languages, ages ranging from mid-20s to late 70s, wildly differing levels of experience and comfort with technology, and varied professional and educational backgrounds - everything from farmhands with a third-grade reading level to retired university professors. To say they are not benefitting equally from the new AI language learning tools would be a comical understatement.

Because the learner must direct the tutor/tool in the case of AI, outcomes vary greatly based on the quality of the learner's understanding of language acquisition, skill and experience with learning, and comfort with technology in general and AI specifically. One of my students is a retired CS professor from Venezuela who taught himself German in the 90s from a handful of books and tapes; he is using AI to great effect, and for him, it's a marvel and a success story. But for every student like him, I have ten who tried using AI and came away confused or discouraged or both.

The question we need to be asking to make AI truly useful for language learning, in my opinion, is: how do we make it useful for someone who has zero idea of how to successfully learn a language? I am sure smart people are working on this question, but we are definitely not there yet. Not even close.

The problem is that we foolish humans are handing AI too many responsibilities in the educational realm too quickly, before that problem is solved. People are churning out apps and building "AI tutors" without any serious pedagogical foundation. I fear that the inexorable quest for cost-cutting will lead to many learners who currently benefit from human instruction and interaction, especially publicly funded programs, being shunted off toward AI solutions that meet the needs only of the most resourceful and the most well-resourced.
4 x
French ..... Read : 0 / 10000 Watch : 0 / 18000
Latin ........ Read : 0 / 5000 Watch : 0 / 9000
Russian .... Read : 0 / 2500 Watch : 0 / 4500
Mandarin .. Read : 0 / 2500 Watch : 0 / 4500

tastyonions
Black Belt - 1st Dan
Posts: 1624
Joined: Sat Jul 18, 2015 5:39 pm
Location: Dallas, TX
Languages: EN (N), FR, ES, DE, IT, PT, NL, EL
x 4047

Re: AI stuff (was: How to properly do L-R method)

Postby tastyonions » Sat Mar 16, 2024 2:02 am

I’ve been assuming that most of the “AI tutors” out now are just GPT or one of its competitors with the thinnest of skins stretched over it. Eventually there will probably be bots trained specifically for the domain of language learning that don’t require as much direction from the student but that’s not reality yet.
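
By "thinnest of skins" I mean the whole product can be little more than a canned prompt in front of someone else's model. A hypothetical sketch (Python with the OpenAI client; the prompt and model choice are invented, not any particular app's code):

Code: Select all
# An entire "AI language tutor", more or less. Hypothetical sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SKIN = ("You are a friendly French tutor. Chat in simple French and "
        "gently correct the student's mistakes.")

def tutor_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SKIN},
                  {"role": "user", "content": student_message}],
    )
    return response.choices[0].message.content

Swap the prompt and you have a "Spanish tutor" instead. That's the skin.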
1 x

bombobuffoon
Yellow Belt
Posts: 85
Joined: Sat Mar 02, 2024 10:33 am
Languages: English N-C1
Finnish A0-A1
x 179

Re: AI stuff (was: How to properly do L-R method)

Postby bombobuffoon » Sat Mar 16, 2024 10:11 am

tastyonions wrote:I’ve been assuming that most of the “AI tutors” out now are just GPT or one of its competitors with the thinnest of skins stretched over it. Eventually there will probably be bots trained specifically for the domain of language learning that don’t require as much direction from the student but that’s not reality yet.


It appears we are stuck in the never-ending cycle of "when ChatGPT version x comes out, those problems will be solved".
0 x

