german2k01 wrote:
Le Baron wrote:
german2k01 wrote:
If you can hear it well then you can register the sound correctly subconsciously. Hence, you can replicate it correctly.
This is a typical South-East Asian thing though. Not that she can't hear the word. It's surely interference from her own internalised sound-system. Another similar one is the confusion of V and W among people in India (I don't know if it is localised or what). So that despite recognising the word 'love' perfectly well the same person will say: 'I lowe you'. And also e.g. 'It's vinter' (winter).
Listening doesn't really seem to solve this.
This is what we call fossilization. When people speak more than they listen, these sorts of issues happen.
That's an unproven assertion. Besides, fossilisation doesn't say anything about a mechanism, but about the result.
The subconscious mind does not know how to pronounce it, as it has not heard its correct pronunciation hundreds of times.
There really is no proof for this. Krashen has never given proof, and his proponents all use it to justify that their failing students are at fault, because they didn't do what was asked of them.
As I see it, it is more likely that their successful students didn't do what was asked of them, and that they didn't tell their failing students what the right things to do actually were. For example (re the Chinese "good" student vs the "bad" one):
She said that she watched A LOT of German TV shows with SUBTITLES on Netflix and Amazon Prime. Natural listening for hours, paying attention to the sounds of the words, will fix it in the end. Matching sounds with their corresponding words has its merits for developing good pronunciation, or at least an awareness of it.
What she was doing was building a phoneme map of the target language. It is a form of intellectual reasoning and it is much to her credit that she did it, but it should not be held against the Chinese guy that he didn't do the same thing. The woman's actions are not a direct result of listening -- that is an oversimplification, and it fails to give her full credit for her own success.
To state it plainly: you have no proof that the act of listening in the absence of conscious analysis would result in better speaking, and you have no proof that teaching conscious awareness of the sound system would not have resulted in better speaking in the absence of listening.
I think this is what Dr. Brown is alluding to: early speaking hampers acquiring the correct sounds of the language. Hence, a long silent period of intensive listening is advisable. This way you are giving your subconscious mind an opportunity to notice the real and correct pronunciation of words.
I'm not familiar with the name, but Google says he did much of his writing in the US when Krashen was king, and would have retired before the majority of us finished high school. (Google also suggests you might have picked up the name from A.J. Hoge... did you?)
Everything since then that has demonstrated the problems with that philosophy... well, it's hardly going to be addressed in his writing.
Neither he nor Krashen has proposed a cognitive mechanism through which a new phoneme map could spontaneously form itself.
We now understand that the brain has processes that extract meaningful stimuli as the first step of sensing them. I could describe a visual example if you want, but skipping straight to language: we don't take raw sound and feed it into a black box that processes language -- we take raw sound and pre-process it by identifying phonemes (or, more precisely, likely phonemes), and then the stream of phonemes goes into our language centre.
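To make that pipeline concrete in a cartoonish way -- this is entirely my own toy sketch, not anyone's published model -- here's the idea that word recognition never sees raw sound, only the output of a phoneme-identification step. I've used Le Baron's V/W example: a map in which the phones [v] and [w] fall into one native category.

```python
# Toy model (my own illustration): raw phones are first collapsed onto
# a listener's native phoneme categories, and only that phoneme stream
# reaches word recognition.

# A phoneme map in which [v] and [w] are one category, labelled "W".
PHONEME_MAP = {"v": "W", "w": "W", "i": "I", "n": "N", "t": "T", "e": "E", "r": "R"}

def identify_phonemes(raw_phones):
    """Pre-processing step: raw phones -> stream of likely phonemes."""
    return tuple(PHONEME_MAP[p] for p in raw_phones)

# The 'language centre' matches words against phoneme streams, not sound.
LEXICON = {("W", "I", "N", "T", "E", "R"): "winter"}

def understand(raw_phones):
    return LEXICON.get(identify_phonemes(raw_phones), "<unrecognised>")

# Both a native "winter" and a merged-map "vinter" are understood,
# which is exactly why comprehension never flags the merger.
print(understand("winter"), understand("vinter"))
```

The point of the sketch is that comprehension succeeds either way, so listening alone never signals that anything is wrong with the map.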
The problem with the input hypothesis is that it was written before this was established as scientific fact, so while there's reason to give Krashen, Brown etc. some leeway for their time, the resurgence of these theories requires some significant additional evidence.
Basically, you cannot understand a language without using a phoneme map, and if you do not have the target phoneme map, you're left to either build one yourself or use the one you already have. In several Asian languages, R and L are allophones of one phoneme (i.e. phones that don't differ in meaning, only in context), and so there is literally no impetus for a Chinese speaker to spontaneously develop the two phonemes for themself.
In fact, the only impetus to recognise the phoneme difference comes from speaking. You can understand a sentence while misidentifying a phoneme, but problems in your phoneme map are only revealed when you attempt to produce the phoneme, as success and failure both result in useful feedback.
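The asymmetry can be sketched in code -- again, purely my own illustration of the argument, not any published model, with phones written as single letters for simplicity:

```python
# Toy sketch of why listening gives a merged phoneme map no error
# signal while speaking does (my own illustration).

# A listener whose native map treats the phones [r] and [l] as
# allophones of one phoneme, here labelled "L".
MERGED_MAP = {"r": "L", "l": "L", "i": "I", "s": "S"}

def perceive(phones):
    """Comprehension path: raw phones collapse onto native categories."""
    return tuple(MERGED_MAP[p] for p in phones)

# Listening: "rice" [r,i,s] and "lice" [l,i,s] yield the same percept,
# context disambiguates, the sentence is understood -- so there is no
# mismatch, no feedback, and no pressure to split the category.
assert perceive("ris") == perceive("lis")

def produce(phonemes):
    """Production path: each category has a single learned realisation."""
    realisation = {"L": "l", "I": "i", "S": "s"}
    return "".join(realisation[ph] for ph in phonemes)

# Speaking: the intended "rice" goes through the merged map and comes
# out as "lice" -- an error a native listener will notice and react to,
# and that reaction is the feedback that reveals the faulty map.
print(produce(perceive("ris")))  # prints "lis", not "ris"
```

In other words, comprehension quietly tolerates the merger, while production makes it audible -- which is the asymmetry I'm describing above.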
As such, given the current state of neuropsychology in general and neurolinguistics specifically, the onus is on proponents of teaching based on a "listening hypothesis" to say how this could possibly work.
A major part of that would be in-depth interviewing of successful learners, attempting to establish that they aren't doing anything that they weren't told to do.