a 2 week long anki experiment
- PeterMollenburg
- Black Belt - 3rd Dan
- Posts: 3242
- Joined: Wed Jul 22, 2015 11:54 am
- Location: Australia
- Languages: English (N), French (B2-certified), Dutch (High A2?), Spanish (~A1), German (long-forgotten 99%), Norwegian (false starts in 2020 & 2021)
- Language Log: https://forum.language-learners.org/vie ... 15&t=18080
- x 8068
Re: a 2 week long anki experiment
Good luck Cavesa! Gamify the hell out of it and try to rest as much as possible (as well as eat healthily) to max your chances of firing on more cylinders than you have been lately. You need energy! Hope you get into France!
0 x
- Cavesa
- Black Belt - 4th Dan
- Posts: 4989
- Joined: Mon Jul 20, 2015 9:46 am
- Languages: Czech (N), French (C2), English (C1), Italian (C1), Spanish, German (C1)
- x 17754
Re: a 2 week long anki experiment
rdearman wrote:Perhaps I'm confused (wouldn't be the first time), but you just want to create a crap load of cards from PDFs, or you want to read the PDFs and selectively pull information from the cards? If it is the first then I recommend you:....
A couple thousand sentences shouldn't take more than 30 minutes or an hour. I'm a little grey on what you're trying to achieve here.
It is the second. Random words are worthless to me; I need to learn the important words and master the content well enough to regurgitate it, in some form, in a hyper-important exam. So, no shortcut for me.
Yesterday was a good day:
10.4., 3 pomodoros, 154 cards. Not bad, two and a half topics covered.
Today will be better, I am still hoping to get over 200 cards in a day.
0 x
- rdearman
- Site Admin
- Posts: 7260
- Joined: Thu May 14, 2015 4:18 pm
- Location: United Kingdom
- Languages: English (N)
- Language Log: viewtopic.php?f=15&t=1836
- x 23317
- Contact:
Re: a 2 week long anki experiment
Cavesa wrote:rdearman wrote:Perhaps I'm confused (wouldn't be the first time), but you just want to create a crap load of cards from PDFs, or you want to read the PDFs and selectively pull information from the cards? If it is the first then I recommend you:....
A couple thousand sentences shouldn't take more than 30 minutes or an hour. I'm a little grey on what you're trying to achieve here.
It is the second. Random words are worthless to me; I need to learn the important words and master the content well enough to regurgitate it, in some form, in a hyper-important exam. So, no shortcut for me.
Yesterday was a good day:
10.4., 3 pomodoros, 154 cards. Not bad, two and a half topics covered.
Today will be better, I am still hoping to get over 200 cards in a day.
OK, I would also point out that I did a bit of "Subject Specific" vocabulary using the AntConc software. I was thinking a second exercise you might want to try is to convert all the PDFs to text, then concatenate all the files together and use AntConc to find the low-frequency words, and use that to create dictionary cards.
So for example, if your PDFs are all medical, those words are likely to be the least frequent in the list. So you could take the 1000 lowest-frequency words, look them up, and paste the definitions into Anki cards. Regardless! Good luck.
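The pipeline described above (PDFs converted to text, files concatenated, frequency list built, lowest-frequency words taken as dictionary-card candidates) can be sketched in a few lines of Python. This is a minimal sketch, not rdearman's actual procedure: it assumes the PDFs were already converted to plain text (e.g. with a tool like pdftotext) and read in as strings, and the function name and crude tokenizer are mine, not part of AntConc.

```python
import re
from collections import Counter

def rarest_words(texts, n=1000):
    """Count word frequencies across documents, return the n least frequent.

    `texts` is any iterable of plain-text strings, e.g. the converted PDFs
    read back from disk. Ties are broken alphabetically for stability.
    """
    counts = Counter()
    for text in texts:
        # crude tokenizer; the accented range keeps French medical terms intact
        counts.update(re.findall(r"[a-zà-ÿ'-]+", text.lower()))
    ranked = sorted(counts.items(), key=lambda kv: (kv[1], kv[0]))
    return [word for word, _ in ranked[:n]]
```

Each surviving word would still need a manual dictionary lookup before it becomes a card; the script only produces the candidate list.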
0 x
Read 150 books in 2024
My YouTube Channel
The Autodidactic Podcast
My Author's Newsletter
I post on this forum with mobile devices, so excuse short msgs and typos.
- Cavesa
- Black Belt - 4th Dan
- Posts: 4989
- Joined: Mon Jul 20, 2015 9:46 am
- Languages: Czech (N), French (C2), English (C1), Italian (C1), Spanish, German (C1)
- x 17754
Re: a 2 week long anki experiment
rdearman wrote:OK, I would also point out that I did a bit of "Subject Specific" vocabulary using the AntConc software. I was thinking a second exercise you might want to try is to convert all the PDFs to text, then concatenate all the files together and use AntConc to find the low-frequency words, and use that to create dictionary cards.
So for example, if your PDFs are all medical, those words are likely to be the least frequent in the list. So you could take the 1000 lowest-frequency words, look them up, and paste the definitions into Anki cards. Regardless! Good luck.
The problem is that the most difficult stuff is not the least frequent. It is often the opposite, since you learn a lot of things about it. Also, I cloze-delete the word, and for another card parts of the information about it, usually the related numbers or the pharmacology (for example, the word Aspirine is typically a deleted word, yet it is very frequent; or I delete very frequent numbers like 50 while quizzing myself on the epidemiology of things, target values in the labs, the dosage of something, etc.). Another challenge would be multi-word names.
For my planned German cards (in the autumn), it might work better, but not perfectly. It may be weird, but I don't always find the rarest words the trickiest; I sometimes struggle just as badly with something more common. And my experiment will cover not only vocabulary but also grammar. So prepositions, which are definitely not low-frequency words, will certainly be among the commonly cloze-deleted items too. That is just one example.
A good remark: the tags do not show while ankiing on the phone. So it is important to keep writing the subject on every card. Because on the computer I can see the tag "HTA", so "prise en charge" reads as a clear question, with the relevant few sentences with gaps. On the phone, I just see "prise en charge" and have no clue what the hell I am treating here.
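The subject-on-every-card workaround described above can be sketched in a few lines. This is a minimal, hypothetical helper (not Cavesa's actual workflow) that wraps target terms in Anki's standard cloze markers ({{c1::...}}) and prefixes the subject to the card text itself, so the context survives on mobile where tags are hidden during review:

```python
def make_cloze(sentence, targets, subject):
    """Wrap each target term in Anki cloze markers and prefix the subject.

    Writing the subject into the card text (not only as a tag) keeps the
    context visible on the phone, where tags are not shown while reviewing.
    """
    text = sentence
    for i, term in enumerate(targets, start=1):
        text = text.replace(term, f"{{{{c{i}::{term}}}}}")
    return f"{subject}: {text}"
```

Lines produced this way can be pasted into a cloze-type note, or collected into a tab-separated file for Anki's text import.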
1 x
- coldrainwater
- Blue Belt
- Posts: 689
- Joined: Sun Jan 01, 2017 4:53 am
- Location: Magnolia, TX
- Languages: EN(N), ES(rusty), DE(), FR(studies)
- Language Log: https://forum.language-learners.org/vie ... =15&t=7636
- x 2398
Re: a 2 week long anki experiment
I like your experiment. Without going into too much detail, it meshes with how I want to try anki/srs for pure language learning purposes. Separately, and more in line with this journal topic: in my youth I habitually created mental models of material that were later fodder for a mass of mainly heuristic/intuitive personal learning methods. One thing I enjoy doing with big docs (or even groups of docs) is to reduce the text and material iteratively, imagining the document(s) as concentric rings of unknowns that diminish in breadth/scope and increase in density/intensity toward a high-interest core, until at last there is nothing left to learn. By "learn", in this context, I mean that I expected to answer any relevant test question on the matter confidently and correctly, before the erasure of time kicks in. This has a concrete analog for me in physical notes/cards, in the sense that at most I would lazy-limit myself to writing down the quid of whatever memory chunk I wanted to own, deriving the rest only at critical junctures (like during a test, should the occasion force me to make the effort).

Rather than thinking of each information pass as a simple rereading or repetition (which works better for some fields and test formats than others), I tended to judgmentally and aggressively remove (I prefer the word delete) chunks of material that I deemed too easy to spend time on, focusing instead on the novel and looking for anything to wax heuristic over. I let my imagination take hold at the less helpful (or less stereotypical) stages. Knowns were looked at almost with disdain, recognizing the fine line between passive and active recognition (ever skirting, and usually not crossing into, the realm of immediate active recall), especially since those items tend to yield boredom directly. A healthy dose of arrogance is super helpful, in the sense that I wanted my unconscious mind to do as much work on my behalf as possible, so I put a ton of faith in it.
That leaves me free to focus on what I do not yet understand.
This yields 'cards' that are fun to make and a bundle of interest. They evolve over time, and the original 'pdf' is always there for a confidence boost any time I feel like blowing through the whole document in a day just to see how easy it now looks compared to when I started. When hitting mock exams, I would make sure I could work the entire problem mentally and dissect it (pun intended), at least at the major joints/junctures, without too much brain strain. The method is but one little flawed model of many, but I have a vested interest in seeing it and similar notions work (or at least play nicely with the other kids).

As an aside, I enjoy working with tools like AntConc, but find that you have to pick your use case and entry point for that type of analysis uber-carefully. Personally, I have not found it terribly interesting to give it a corpus glob and tell it to go to work. For me it distills to the programmatically expected, but does not tend to offer useful information chunks. It works better, in my experience, to use it for automation and feed it as input once you recognize a pattern you want to isolate. To avoid losing useful data, I also like using AntConc (or substitute text analysis library xxx) as a stripper: it is the patterns that clutter my view, and that I don't want, that I tend to isolate (much like with any old regex). More a seek-and-destroy than a 'seek to array'.

Of course I use stuff like the Feynman technique, rewriting in your own words, etc., but for the sake of this discussion I am assuming current readers have that as a shared abstraction which they can integrate at will with whatever other methods are in play at the moment. In combination, I think this all can offer good flexibility. Nothing may squash the weight of obligation, but here is to trying. Happy hunting.
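The "stripper" idea above, deleting the patterns you already know so only novel material remains, is essentially iterated regex removal. A minimal sketch under that reading (the function name and sample patterns are hypothetical, not AntConc output):

```python
import re

def strip_known(text, known_patterns):
    """Delete material already mastered, leaving only the novel residue.

    `known_patterns` are regexes for chunks judged too easy to spend time
    on; whatever survives the deletions is the remaining study material.
    """
    for pattern in known_patterns:
        text = re.sub(pattern, "", text)
    # collapse the whitespace left behind by the deletions
    return re.sub(r"\s+", " ", text).strip()
```

Run over a whole corpus, this behaves like the concentric-ring reduction: each pass adds patterns to the known list, and the text shrinks toward the high-interest core.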
3 x