rdearman wrote:
Cainntear wrote:
The program was really straightforward. It uses a simple algorithm to track the difficulty of prompts and to introduce new elements to be learned. In a text window, it throws question after question at me, and I type in the answers and get immediate feedback. The more I used it, the quicker I got.
I would like to explore this in more detail. I assume you created a program which had the L1 prompt and the L2 answer in some type of database, such that you could compare your response?
Using your example:
(L1 Data segment contained in DB) My mother is wearing a hat.
(User Response) Ma mère porte un chapeau
(L2 Data segment held in DB) Ma mère porte un chapeau
Or were you generating an English sentence with no reference check included?
It was generating a bunch of equivalent phrases and sentences in Corsican and English.
It prompted in L1, compared the answer to L2, and adjusted the score of every language item tested, up or down, depending on whether the answer was correct. The scoring was binary and absolute -- it was either 100% correct or it was wrong -- but I'd coded it modularly so that I could replace the scoring algorithm without ripping out the rest of the code.
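The actual code isn't shown in the post, but the loop described -- binary all-or-nothing scoring, with the score of every tested item nudged up or down, and the scorer kept swappable -- might be sketched like this. All the names (`binary_score`, `check`, the item dictionary keys) are hypothetical:

```python
def binary_score(expected: str, answer: str) -> float:
    """All-or-nothing scoring: 1.0 for an exact match, 0.0 otherwise."""
    return 1.0 if answer.strip() == expected.strip() else 0.0


def update_scores(scores: dict, tested: list, correct: bool, step: float = 0.1) -> dict:
    """Nudge every language item exercised by the prompt up or down,
    keeping scores clamped to the range [0.0, 1.0]."""
    delta = step if correct else -step
    for tag in tested:
        scores[tag] = min(1.0, max(0.0, scores.get(tag, 0.5) + delta))
    return scores


def check(item: dict, answer: str, scores: dict, score_fn=binary_score) -> bool:
    """Compare a typed answer against the stored L2 reference and record
    the result.  score_fn is pluggable, so the binary scorer could later
    be replaced without touching the rest of the loop."""
    correct = score_fn(item["l2"], answer) == 1.0
    update_scores(scores, item["tests"], correct)
    return correct
```

A prompt loop would then just print `item["l1"]`, read the user's answer, and call `check`; swapping in a fuzzier scorer only means passing a different `score_fn`.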
Unfortunately, hard-coding all the language stuff in Python was leading to a combinatorial explosion in complexity, so I went back to the drawing board and started defining an abstract rules format and a parser that would convert the rules into Prolog assertions. Eventually, when I realised I had left out something that was absolutely necessary, I found I couldn't easily retrofit it, so I went back to Python, but again starting from the same (or a very slightly modified) rule definition format.
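The post doesn't show the rule definition format, so purely for illustration, here is an invented one-line `predicate: arg, arg` syntax and a minimal converter that turns each rule into a Prolog assertion string:

```python
def rule_to_prolog(line: str) -> str:
    """Convert a rule like 'noun: chapeau, masculine' into the Prolog
    fact 'noun(chapeau, masculine).'  The line syntax is invented here;
    the real format from the project isn't shown in the post."""
    head, _, args = line.partition(":")
    arg_list = ", ".join(a.strip() for a in args.split(","))
    return f"{head.strip()}({arg_list})."
```

The same parsed rule structures could equally be consumed by Python directly, which is roughly the pivot described above: keep the rule format, swap the back end.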