How to properly do L-R method, any help for my attempt appreciated.

General discussion about learning languages
User avatar
Iversen
Black Belt - 4th Dan
Posts: 4792
Joined: Sun Jul 19, 2015 7:36 pm
Location: Denmark
Languages: Monolingual travels in Danish, English, German, Dutch, Swedish, French, Portuguese, Spanish, Catalan, Italian, Romanian and (part time) Esperanto
Ahem, not yet: Norwegian, Afrikaans, Platt, Scots, Russian, Serbian, Bulgarian, Albanian, Greek, Latin, Irish, Indonesian and a few more...
Language Log: viewtopic.php?f=15&t=1027
x 15065

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Iversen » Sat Jan 27, 2024 10:05 am

engpolusap wrote:Using Google Translate is currently my main way of being able to write in Polish. The real-time translation, the ability to edit, and the audio input/output are all very helpful. There are times I have to correct the translation even with my limited knowledge, as it doesn't always translate accurately.


I wouldn't count this as writing in Polish - though there may be some potential for inspiration in correcting translations from a base language (possibly your native language) to a target language, for instance Polish. Making corrections is obviously an active exercise, but 'free' writing or even manual translation would be even more active.

When I write in my own weak languages, like for instance Polish (which I can read at at least the average Wikipedia level, but not speak), I do a rough sketch first, and then I rewrite it at least once, using whatever sources of information I have at my disposal, like green sheets, dictionaries and grammars - and yes, I have in some cases let Google Translate propose solutions to something I have tried to express, but I don't trust it blindly, and I trust my ability to notice and memorize all the niceties in a good translation even less. Having struggled first makes you more observant.

My main use of Google Translate is to produce bilingual texts. Here my aim is to study an original text in a weak or mediocre target language, so the translation will always, without exception, be from the target language towards something else - but if I feel I don't need a lot of help then I may choose another mediocre language - or in some cases even a language which I haven't studied, but which resembles something I know well - like Frisian, which resembles Dutch and Platt and Old Norse. And then I can't avoid also looking at the translation, even though I know that I can't totally trust it (heck, I don't even trust myself! :lol: ).

The third use, which I have only started on recently, is listening to either an original snippet of text or its translation being read aloud (max one sentence, mostly less). I use it for intensive listening, and even though I probably ought to listen to real humans instead, the machine voices are an acceptable alternative at my level and always available - and they are devoid of the distracting histrionics of audiobooks or films. In earlier times I used other synthesizers, but stopped doing so when my old computer refused to cooperate with them or they metamorphosed into something less useful. I know from my Romance and Germanic languages that my pronunciation will slide towards whatever I hear around me during travels after a couple of days, so I count on the same effect taking place with the languages that are less advanced right now.
0 x

Doitsujin
Green Belt
Posts: 404
Joined: Sat Jul 18, 2015 6:21 pm
Languages: German (N)
x 807

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Doitsujin » Sat Jan 27, 2024 4:40 pm

engpolusap wrote:I had an idea for bilingual OS GUI language display which you can read about here:
You can download the Microsoft termbases for all supported languages from Microsoft and the Apple termbases from Apple (free Apple developer account required).
You can also search for specific MS UI terms online. IIRC, MSDN subscribers also used to be able to download glossaries for specific languages.
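
If you'd rather look terms up locally, something like the rough Python sketch below could search one of those downloaded termbases. This is only a sketch: the file name is a placeholder, and the termEntry/langSet/term layout is the TBX structure as I recall it, so check it against the file you actually download.

CODE: SELECT ALL
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def lookup(tbx_path, source_term, source_lang="en-us", target_lang="pl-pl"):
    """Find target-language UI terms paired with an English term in a TBX termbase."""
    root = ET.parse(tbx_path).getroot()
    hits = []
    for entry in root.iter("termEntry"):
        terms = {}
        for lang_set in entry.iter("langSet"):
            lang = lang_set.get(XML_LANG, "").lower()
            term = lang_set.find(".//term")
            if term is not None:
                terms[lang] = (term.text or "").strip()
        if terms.get(source_lang, "").lower() == source_term.lower():
            hits.append(terms.get(target_lang, ""))
    return [h for h in hits if h]

# Placeholder file name -- use whatever the Microsoft download is actually called.
print(lookup("MicrosoftTermCollection_pl-pl.tbx", "Save As"))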
0 x

Cainntear
Black Belt - 3rd Dan
Posts: 3538
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8813
Contact:

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Cainntear » Sun Jan 28, 2024 9:33 pm

Iversen wrote:
engpolusap wrote:Using Google Translate is currently my main way of being able to write in Polish. The real-time translation, the ability to edit, and the audio input/output are all very helpful. There are times I have to correct the translation even with my limited knowledge, as it doesn't always translate accurately.


I wouldn't count this as writing in Polish - though there may be some potential for inspiration in correcting translations from a base language (possibly your native language) to a target language, for instance Polish. Making corrections is obviously an active exercise, but 'free' writing or even manual translation would be even more active.

Actually, I'd go a step further with this.

Krashen always said that learning declarative knowledge about language trains you to "monitor" your own output.

So far, I don't actually disagree with him.

He said that you couldn't take that declarative monitor and turn it into procedural knowledge -- never ever; no way, no how.

Here's where I disagree: the brain learns better when there's certainty -- if you doubt your own answer, you're not telling your brain to remember. If I need to use declarative knowledge to function then I should: in the immediate term, I get done that which I need to get done; in the longer term, I've given my brain data on what is a suitable output given what it intends to express.


Sorry... got a bit side-tracked there.

So the relevance of this is that if you're using Google Translate, you're not producing language. Correcting GT's errors is something you can do with declarative knowledge, and so you're basically training a monitor. But if you can do that, your monitor is already good, so why should you train it any more?

I kind of think your spontaneous language production is more important now. Hell, if all you're doing is making yourself more likely to detect your own errors, that's hardly going to encourage you to speak more!!
0 x

User avatar
Iversen
Black Belt - 4th Dan
Posts: 4792
Joined: Sun Jul 19, 2015 7:36 pm
Location: Denmark
Languages: Monolingual travels in Danish, English, German, Dutch, Swedish, French, Portuguese, Spanish, Catalan, Italian, Romanian and (part time) Esperanto
Ahem, not yet: Norwegian, Afrikaans, Platt, Scots, Russian, Serbian, Bulgarian, Albanian, Greek, Latin, Irish, Indonesian and a few more...
Language Log: viewtopic.php?f=15&t=1027
x 15065

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Iversen » Sun Jan 28, 2024 9:46 pm

Cainntear wrote:the brain learns better when there's certainty -- if you doubt your own answer, you're not telling your brain to remember. (...)
I kind of think your spontaneous language production is more important now. Hell, if all you're doing is making yourself more likely to detect your own errors, that's hardly going to encourage you to speak more!!


I also think that immediate confirmation is much more efficient than delayed or no confirmation. That's one of my reasons for using bilingual texts and also one of my grievances concerning old-fashioned paper textbooks. But that doesn't solve the basic problem, namely that you have to produce something actively yourself to learn to produce utterances on the fly; everything else is just preparation - or at best steps towards learning a language passively. So the thing that is needed in the active situation is immediate confirmation or the opposite - and not corrections one day later as in the old black school. You could of course run your own free productions through Google or some other translation program (or pay a human teacher), but if Google proposes something different you still don't know whether your own formulations were totally off the mark. It could actually just as well be GT that's off.

Maybe there already is some AI thingy out there that can tell you on a scale from 1 to 10 how much off the mark you are and precisely why, but I haven't found such a thing yet. I do use technology, like for instance translation programs and language skips in Wikipedia, but maybe I haven't searched hard enough for simultaneous assessment software like the calculated scores you can see in chess programs. I know that there are grammar check functions at least for English in some office packages, but I have never used them. And the spelling control functions I have met in such programs are more a nuisance than a help.

So sometimes you just have to take the plunge and hope that your preparations are sufficient (and that you have time to correct your worst errors before anybody else discovers them).
3 x

wherahiko
White Belt
Posts: 12
Joined: Wed Jun 01, 2022 11:23 pm
Languages: English (N), Spanish (learning now), French, Italian, German, Latin (previous study, keeping up active listening)
x 16

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby wherahiko » Tue Mar 05, 2024 12:48 am

How are you getting on with L-R, a few months on? I stumbled across the L-R method on these forums a bit over a year ago and it has completely changed my approach to language learning. Previously, I'd experimented with AJATT / Antimoon, having come earlier from the opposite approach (Benny Lewis's 'speak from day one', and before that, school-based learning). L-R has massively reinvigorated my passion for language learning and allowed me to make serious progress in comprehension. I haven't experienced the extremely rapid learning aYa describes, but I also haven't done it at the levels of intensity she did (I've typically done up to 3-4 hours a day, but much more often 1-2).

To me, L-R offers the best chance of accessing what Krashen calls optimal input (comprehensible, compelling, rich, and abundant). The one thing that might give pause (assuming you're already convinced about CI) is its heavy reliance on L1. I'm not so sure this is a problem: properly used, one is not translating from L2 into L1 in L-R; rather, one is using the L1 text to know in advance (even by a few split seconds) what the L2 text means. It only serves to make the L2 text comprehensible. As the audio is far more engaging than the written text, it's still possible to immerse oneself in the story rather than thinking about 'form'.

As for your questions:

Biofacticity wrote:The problem is I don't understand what the purpose of parallel texts is if the steps never mention anything about them. Are they necessary, and how do you use parallel texts if the original instructions rely on not having parallel bilingual pages of long books?

In my case I want to try and do it with Game of Thrones. The book is 650 pages (first part) and the audio is 19 hours long. It might be the wrong book to attempt something like this, but you can make some suggestions (though I'll probably have to do step 1 first, since I haven't read that many novels, let alone reread.)



I don't use parallel texts, simply because I'd rather spend the time doing more L-R rather than making them. Instead, I just read L1 and listen to L2. (You'd think it would be easy for a software programme to display two ePubs in parallel columns, but I haven't found one that can do this - see the sketch below.) The cases where I've thought in the past that parallel texts would be useful are (1) checking the spelling of an unfamiliar word (but this relates more to conscious learning, so I'm not fussed about it now); and (2) more importantly, once I get close to natural listening, I'd rather not have to look at the L1 text all the time. With a parallel text it would be possible to look at L2 and hear L2, but glance at L1 when needed. aYa calls this Stage 2 L-R, but it makes more sense to me to do it after Stage 3. In the absence of a parallel text, I'm experimenting with just listening to L2 at this stage with no text; I guess this is what Krashen calls comprehensible (rather than transparent) input. I also didn't use interlinear texts, but so far I've only done L-R with languages I already had an intermediate level in.
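
For what it's worth, a rough Python sketch of what I mean is below. The package names (ebooklib, beautifulsoup4) and the file names are assumptions/placeholders on my part, and the naive paragraph-by-paragraph pairing will drift out of alignment in any real novel, so treat it as a starting point rather than a working tool.

CODE: SELECT ALL
import ebooklib
from ebooklib import epub
from bs4 import BeautifulSoup

def paragraphs(path):
    """Pull the visible <p> text out of every document item in an ePub."""
    book = epub.read_epub(path)
    out = []
    for item in book.get_items_of_type(ebooklib.ITEM_DOCUMENT):
        soup = BeautifulSoup(item.get_content(), "html.parser")
        out.extend(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    return [p for p in out if p]

l1 = paragraphs("heidi_english.epub")   # placeholder file names
l2 = paragraphs("heidi_german.epub")

# Naive pairing: paragraph n of L1 next to paragraph n of L2.
rows = "\n".join(f"<tr><td>{a}</td><td>{b}</td></tr>" for a, b in zip(l1, l2))
with open("parallel.html", "w", encoding="utf-8") as f:
    f.write("<table border='1' style='width:100%'>" + rows + "</table>")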

Regarding books for German: I started with Heidi and then went on to Der Herr der Ringe (The Lord of the Rings) and then the German translation of Elena Ferrante's L'amica geniale (Meine geniale Freundin). All nice, long books!

I hope L-R is going well for you. I'm keen to hear your experiences!
2 x

rvilcheszamora
Posts: 3
Joined: Thu Mar 14, 2024 6:36 pm
Languages: I'm a Spanish native speaker and I've passed the B2 level; trying to get to C1-C2.

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby rvilcheszamora » Thu Mar 14, 2024 7:15 pm

Hi everyone. Just a quick question: what has the experience with this method been for people at B2 level who are trying to get to C1-C2?

In my case, I'm trying to get to C1 level (through the Cambridge CAE exam), but the listening part is so hard for me that I sometimes think of giving up. It's so frustrating.

I think this method could be something new to try.

Thank you very much.
0 x

User avatar
emk
Black Belt - 1st Dan
Posts: 1708
Joined: Sat Jul 18, 2015 12:07 pm
Location: Vermont, USA
Languages: English (N), French (B2+)
Badly neglected "just for fun" languages: Middle Egyptian, Spanish.
Language Log: viewtopic.php?f=15&t=723
x 6744
Contact:

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby emk » Thu Mar 14, 2024 7:59 pm

Iversen wrote:Maybe there already is some AI thingy out there that can tell you on a scale from 1 to 10 how much off the mark you are and precisely why, but I haven't found such a thing yet.

Not that I'm aware of. The more advanced GPT-like models are a bit too "human-like" in this respect. They can produce valid output themselves, but they can't reliably take your writing and correct it. This was also the case for 80% of the untrained humans offering corrections on Lang-8 back in the day, too—they would say things like "I wouldn't write it like that, but I don't know why." Skilled correctors were precious.

A more practical exercise might be to write a 50-word journal entry in English, ask the AI to translate it into your target language, and then study the AI output intensively.
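
If anyone wants to script that exercise rather than paste into the chat window, a minimal sketch with the OpenAI Python client (openai >= 1.0) might look like this. The model name, prompt wording and sample entry are placeholders, not recommendations:

CODE: SELECT ALL
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

journal_entry = (
    "Today I walked to the market, bought bread and apples, "
    "and chatted briefly with the baker about the weather."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a translator helping a language learner. "
                    "Translate the user's journal entry into natural Polish."},
        {"role": "user", "content": journal_entry},
    ],
)
print(response.choices[0].message.content)  # then study this output intensively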
0 x

Cainntear
Black Belt - 3rd Dan
Posts: 3538
Joined: Thu Jul 30, 2015 11:04 am
Location: Scotland
Languages: English(N)
Advanced: French,Spanish, Scottish Gaelic
Intermediate: Italian, Catalan, Corsican
Basic: Welsh
Dabbling: Polish, Russian etc
x 8813
Contact:

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Cainntear » Fri Mar 15, 2024 7:08 am

emk wrote:
Iversen wrote:Maybe there already is some AI thingy out there that can tell you on a scale from 1 to 10 how much off the mark you are and precisely why, but I haven't found such a thing yet.

Not that I'm aware of. The more advanced GPT-like models are a bit too "human-like" in this respect. They can produce valid output themselves, but they can't reliably take your writing and correct it. This was also the case for 80% of the untrained humans offering corrections on Lang-8 back in the day, too—they would say things like "I wouldn't write it like that, but I don't know why." Skilled correctors were precious.

Yup, and this is why AI is a fool's errand.

AI is specifically a technical term for a computer system that calculates stuff by making a rather naive simulation of biological neural processing -- it was developed to test theories about neurology, and it's been promoted beyond its station because of its very sci-fi-sounding name.

Expert humans go out of their way to think more logically -- i.e. to think like computers. Getting a computer to do an acceptable facsimile of untrained human thinking is one thing, even if it is massively computationally inefficient, but getting a computer to emulate a brain that is trying to emulate a computer is just crazy. Computers are bad at emulating brains; brains are bad at emulating computers... so what sort of emulation are you going to get if you try to use a computer making an imperfect emulation of a human brain making an imperfect emulation of a computer?

In the case of language, the skilled teacher is meshing language thought with intuitive thought and latching onto their natural reactions. When I was correcting student errors I was really analysing what they were thinking: because my brain was like theirs, I could replicate the thinking that made the mistake. However, the computer doesn't have the same brain structure as a human, so it has no way to replicate the thinking behind the mistake. Apart from by being given an immeasurably huge number of fully worked examples -- but these don't exist.

Sorry for going into an off-topic rant....
0 x

Ug_Caveman
Green Belt
Posts: 464
Joined: Fri Nov 16, 2018 2:58 am
Location: England
Languages: English (N), Dutch (A2 - July 2021), working towards B1
x 1093

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby Ug_Caveman » Fri Mar 15, 2024 7:58 am

Cainntear wrote:brains are bad at emulating computers... so what sort of emulation are you going to get if you try to use a computer making an imperfect emulation of a human brain making an imperfect emulation of a computer?

Don't be so sure ;)
0 x
Languages: English (N), Dutch (passed A2 exam in May 2021, failed B1 in May 2023 - never sit an exam when you have food poisoning!)

Seeking: Linguaphone Polish and Linguaphone Afrikaans

User avatar
emk
Black Belt - 1st Dan
Posts: 1708
Joined: Sat Jul 18, 2015 12:07 pm
Location: Vermont, USA
Languages: English (N), French (B2+)
Badly neglected "just for fun" languages: Middle Egyptian, Spanish.
Language Log: viewtopic.php?f=15&t=723
x 6744
Contact:

Re: How to properly do L-R method, any help for my attempt appreciated.

Postby emk » Fri Mar 15, 2024 12:42 pm

Cainntear wrote:Yup, and this is why AI is a fool's errand.

I'm pretty sure we are never going to agree on what AI is good at, what it's bad at, or where it's going.

GPT-3.5 is a fairly skilled improv actor playing a specific character, "the helpful and harmless assistant." Somehow, playing the assistant character actually allows it to do certain kinds of useful work. I can tell it, "You're a subtitle translator helping language learners," and it will produce surprisingly reasonable translations. Not perfect ones, but more than good enough for me to use. And this happens even though GPT-3.5 was never trained to be a translator. It just saw humans translating, and somehow built up a translation system. Just in case, you know, someone happened to ask it to pretend to be a translator.

Now, in practical terms, I can use GPT-3.5-Turbo to translate 22 minutes of easy television in 3 minutes of server time. As far as anyone can figure out, the early, unoptimized versions of GPT-3.5 ran on about US$80,000 worth of hardware, but they've reduced that with the GPT-3.5-Turbo models. Translating an episode costs me about $0.03. The speech-to-text, which is also pretty good, costs me about $0.15/episode using Whisper-1. Again, not flawless, but more than good enough for my purposes.
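
(For the curious, the rough shape of that pipeline is sketched below. The file names and prompt are placeholders, it drops the subtitle timing to keep things short, and my actual code, linked from my log, does considerably more.)

CODE: SELECT ALL
from openai import OpenAI

client = OpenAI()

# Speech-to-text with Whisper-1 (the ~$0.15/episode figure mentioned above).
with open("episode_01.mp3", "rb") as audio:   # placeholder file name
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Translation with GPT-3.5-Turbo, playing the "subtitle translator" character.
translation = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You're a subtitle translator helping language learners. "
                    "Translate this Spanish transcript into natural English."},
        {"role": "user", "content": transcript.text},
    ],
)
with open("episode_01.en.txt", "w", encoding="utf-8") as f:
    f.write(translation.choices[0].message.content)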

I'm pretty sure I can also get GPT to handle requests of the form, "Explain what the words '...' mean in this sentence, and give me two examples of how to use them." Although in this case, I might need to cough up the money for GPT-4-Turbo. (Migaku actually has this working surprisingly well in their flash card creator—about 50% better than I would expect GPT-3.5-Turbo to do without very clear instructions.)

If anyone wants to judge for themselves, there are lots of examples in my Spanish log, including some places where it screwed up. And my code is available if anyone wants to try to replicate my results.

But everything I'm using GPT-3.5-Turbo to do is something that I've carefully chosen to be (mostly) within its abilities as an improv actor. I'm not asking it to make lists or to do math or to give in-depth grammar explanations. And I'm definitely not asking it to give corrections. In any of these cases, the model struggles, and falls back on pure improv acting. It can pretend to be a grammarian, but it's just making stuff up at that point. GPT-4-Turbo does a little better; it will sometimes catch genuine mistakes that I've made on technical subjects. "I'm pretty sure you've got that derivative backwards, given the problem we've been talking about." But I can't count on it. None of the GPT models were ever trained to solve problems; they were trained to do improv acting. And the limited problem solving "fell out" somehow.

If you look at the state-of-the-art algorithms for natural language processing (NLP) as of 2019, the GPT models will outperform a significant fraction of them. Literally all you need to do is ask (carefully), and the improv actor will shrug, and beat the previous state of the art. With no special training in that problem. These models are flawed, but they're also a huge deal.

(Personally, I don't think we're ready to deal with the fallout of what happens if the $0.03/episode translator actually learns how to reliably solve a much wider range of problems. And it isn't as simple as, "Well, just teach all the displaced workers to code" like your average pundit says, because the AIs are better at coding than almost anything else. And manual labor jobs may eventually be vulnerable, too. GPT-4V can look at a photo of your fridge, and propose meals you can cook with the ingredients. And robotics may be coming along eventually, though it has further to go. Still, there's no way we'll make good decisions about these systems unless we understand their strengths and weaknesses accurately.)

And finally, to bring this thread back on topic, GPT models essentially learn via a combination of massive input and cloze cards. Listening/Reading would be a very plausible way to train them, if you added a "predict the next word" component. Part of the reason that GPT models can't give you good grammar advice is that they learned the grammar through osmosis. It's mostly "procedural" knowledge for them. If you ask them to explain some grammar rule, you're calling on their much weaker declarative knowledge. And when they don't know the answer, they fall back on their improv training. But if you ask them for example sentences, that falls squarely within their improv abilities.
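
To make "predict the next word" concrete: the training signal is literally just a cross-entropy loss over the token sequence shifted by one position. A toy sketch (with a stand-in embedding and linear layer instead of a real transformer):

CODE: SELECT ALL
import torch
import torch.nn.functional as F

# Toy "corpus": the only training signal is "given the words so far, guess the next one".
vocab = {"<pad>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
tokens = torch.tensor([[1, 2, 3, 4, 1, 5]])      # "the cat sat on the mat"

inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position (a rolling cloze)

embed = torch.nn.Embedding(len(vocab), 16)       # stand-in for a real transformer
head = torch.nn.Linear(16, len(vocab))

logits = head(embed(inputs))                     # (batch, seq, vocab) scores
loss = F.cross_entropy(logits.reshape(-1, len(vocab)), targets.reshape(-1))
loss.backward()                                  # the whole "lesson" is this gradient
print(float(loss))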

But once you understand these weak points, you can absolutely get a GPT model to help with language learning.
4 x

