StringerBell wrote: Cainntear wrote: Meanwhile, people who insist that they are "visual learners" and "have to see the word written down in order to learn it" typically have very poor pronunciation, if they even attempt to speak at all.
I am a visual learner. When learning a new language, I have to see how a word is written so that I can anchor it in my mind. I do this with both Italian and Polish, two languages which I speak, and I get constant feedback from native speakers of both that my pronunciation is very good, even though I speak Polish very infrequently. When a new Italian word comes up in conversation, if I don't see it written or ask about its spelling, I will not be able to remember it or use it myself. In fact, about 10 seconds later, that word is completely out of my mind, as if I'd never encountered it. For this reason, I never pick up new vocabulary from extensive listening.
However, if I ask about how that word is spelled during the conversation then as the person spells it I can see the letters in my mind, and then there is an extremely high probability that I will remember that word later on.
The thing is... I'm the same. When I'm early on in learning a language, I learn best from written words... but I don't consider myself a visual learner.
What you are learning when you learn a new word is a series of sounds. As a non-fluent learner, you cannot hear these sounds -- your brain won't let you. When the language has a clear and regular phonemic spelling system (Polish is highly regular; Italian isn't quite as good, but it's far better than a lot of languages), the written form is a pretty clear and unambiguous representation of the sound of the word.
Now this may seem like a strawman, but imagine you were learning a regional Chinese language that was written in the Chinese script and that had no established Latinisation -- seeing the written word would not help you remember the spoken word. You would need to invent your own phonetic Latinisation, because that would be telling you the sounds.
Going back to your brain not letting you hear sounds:
I'm sure you're familiar with the idea of phonemes -- we all say /l/ slightly differently from each other, and we all individually say /l/ differently in different words depending on the context and contact with other phonemes. Our brains, however, treat all of these different things as just /l/. It's the first stage of processing language: the brain filters out all the complexities of the sound wave and categorises an infinite spectrum of sounds into a finite number of phonemes.
The brain throws away data that it considers irrelevant. Unfortunately, while the difference between two types of L sound may be irrelevant in English, it's not in (for example) Scottish Gaelic, where there's a broad/slender (non-palatal/palatal) distinction and (in some people's accents) a weak/strong distinction, meaning that for the one English /l/ there are 2, 3 or potentially even 4 phonemes.
How can your brain learn these phonemes when it's constantly throwing away the data that would tell it they exist?
This notion that there are "auditory learners" who just hear things and learn them is fallacious. Anyone who does learn from sound alone is using non-language parts of the brain to hunt for data that's missing. A person with a trained musical ear is more capable of semi-consciously analysing the frequency content of something they're listening to and picking out information that they can then go and learn.
Even when I watch YouTube videos in my native language, I almost always turn off the audio and use the computer-generated subtitles, if they are available. When people are speaking, often things go in one ear and out the other, and by the time the video is done, I will have forgotten most of what the person said. However, if instead of listening to the video I read the subs, then my attention and memory of the content is noticeably better and the experience is more enjoyable. My brain has a strong preference for visual and textual content over auditory content.
It is possible that you're not neurotypical, and outliers don't prove the general case (which is a bit of an existential question for this forum -- I think in the last couple of years more and more of us have been talking about our own neurodivergent traits). Or maybe it's just a matter of YouTube videos being made by untrained amateurs who are not particularly good at scripting (if they attempt it at all) and who produce material that's formulaic, repetitive, low in content, and just generally bad at holding the viewer's full attention...?
When I'm having a conversation with my husband about a complex scientific topic (which happens frequently), if he starts a long verbal explanation, I get lost because I'm trying to picture in my head what he's saying, and then I get hung up on something or he goes faster than my mental picture can handle. If he jots down a simple diagram or even a couple of key words as he's going, then I can use those visuals to process information much faster and remember it. He's the complete opposite; he can listen to massive amounts of very complex information without visuals, process it quickly, and remember it. At the end of the day, we are equally capable of processing the same level and amount of complex information, but I require visuals to do it while he doesn't.
But here hangs a big question about the nature of good teaching: is your husband in any way disadvantaged when presented with a diagram?
My hypothesis about seemingly fundamental learner differences is that it's a matter of tolerance to untuned input -- our ability to "fill in the gaps" of what is presented to us.
Going back to pronunciation and what a trained musician can do that a non-musician can't: the musician picks out the frequencies to try to find the nature of the phoneme... but doesn't need to do this, and would be able to learn from a description of mouth shape. The musician would then be able to use their learned skills of frequency discrimination to learn to identify the phoneme more quickly than a non-musician, but that's not "learning style" -- it's "existing knowledge" or "existing skills".
So, when I learn a language, I need visuals. I need to see how a word is written. I need to picture that word in my mind. I do the same with my native language: when I speak, I often see the words that I'm saying in my mind, and if I come across a new word in English, I need to see its spelling in order to remember it, and I don't think there is a person on the planet who would say that I have poor pronunciation in my native language. When I spell a word aloud, regardless of the language, I can see it as easily as if I were looking at it written on paper, so I can spell extremely long words aloud with a high degree of accuracy; however, my husband, who is not a visual learner, can't see words in his mind at all, so spelling aloud for him is torture.
That's not being "a visual learner", that's being synaesthetic. It's not
hugely rare (and there's a theory that mild synaesthesia is the origin of language and a precondition of language processing) but it's not common enough to be a component of learning styles.
You can define "visual" or "auditory" learner however you want, but there are clear differences between the way he and I process information, and there are clear differences between how people learn in general.
But there's a difference between "processing information" and "learning". I believe that the goal of teaching (including writing instructional material) is to minimise the amount of processing the learner needs to do to receive the teaching input, because it's only after you have successfully received the input that you actually start to learn.
I firmly believe that reducing the amount of unnecessary processing that the learner is required to do reduces the effect of information-processing differences on the quality of learning. All the best teachers I've learned from made learning easier for everyone.