s_allard wrote:Before this debate wanders off, I think it is important to understand the starting point.
This debate has already wandered off -- it wandered so far off that the mods split the conversation in two.
In a post Marais wrote that in a supermarket he hears "bon journée au'voir". I corrected this, saying that it was more likely "bonne journée au r'voir".
And to be frank, I think that was unbelievably rude of you -- your correction was tangential to the discussion at that point and had nothing to do with what Marais was trying to say, and Marais readily admits to having only a basic knowledge of French. It was utterly unnecessary and crassly impolite.
While we all agree that the supermarket employee said "bonne journée", the difference in opinion seems to be what Marais' ears actually perceived. Was it [bɔn] or [bõ]? According to tavros, it was the same sound. And what sound is this? We don't know.
Tavros already gave you half the answer to that, and I filled in what was missing. From observation of my beginning French pupils, a French nasalised vowel is clearly perceived as a vowel followed by an N.
In other words, people can't "hear" or perceive the differences in sounds until they can pronounce them "comfortably and reliably". So Marais cannot distinguish between bon and bonne until he can pronounce both words properly.
I think this is totally wrong. The cause of the error is the idea that if people cannot reproduce a sound, they cannot perceive it. For example, a French-speaker hearing an English-speaker say "hold that thought" is actually hearing "hold zat sought".
I believe that the French-speaker hears the -th sound perfectly well and can certainly distinguish between sought and thought or bath and bass long before being able to pronounce the th perfectly.
While you might not be completely wrong, you're definitely overstating it. Beginners clearly can't perceive the differences. If my pupils' problem were a lack of ability to articulate, then the most likely error in production would simply be to replace the nasalised vowel with a non-nasalised one -- a simple phonemic substitution.
But my pupils take what should be a single phoneme and in response give me two phonemes, and I cannot see any way in which that's logically compatible with your hypothesis.
The problem here is that the brain automatically processes incoming language as phonemes, so information is unconsciously stripped from incoming language before we do anything with it. How can we learn from information the brain has filtered out as irrelevant? And the brain has good reason to regard features of pronunciation as irrelevant: it makes us mutually intelligible despite our differences in accent. Consider for example the glottal stop in English. It's a markedly different means of articulating a T from the standard one, but most people aren't consciously aware of whether the person they're speaking to uses it or not, and whether it's [t] or [ʔ], the listener simply perceives it as the phoneme /t/.
This is great for dealing with accents and dialectal variation in first languages, but it is a significant hurdle to overcome when trying to learn a new language -- for instance, if you want to learn Hawaiian, suddenly your brain's assumption that [t] and [ʔ] are allophones is completely wrong.
Now it may well be possible to learn to perceive the phonemes before you learn to articulate them, but you still have to learn to do it. A phoneme is a “meaningful unit of sound”, and here is where the difficulty comes in:
In fact we don't have to hear all the words. We often understand what we hear because we can fill in the blanks based on our interpretation of the context.
… which means that we do not need to perceive all the phonemic distinctions in order to understand the utterance, and there is no impetus for the brain to learn the phonemes.
For example, you don't have to learn to hear the difference between /w/ and /u/ to be able to understand the word “wire” – even if your internal concept of it is /uajr/, that will be enough to comprehend the word successfully.
Language is full of redundancy, and beginners are not often required to discriminate between similar phonemes. So how will the input ever show the learner that the sounds are distinct and meaningful?
It can't, and whether we train perception or production first, we as teachers (or self-teachers) need to provide an environment that forces recognition of the distinction. I personally believe that production is the easiest way to do so.
Marais wrote:tastyonions wrote:If I had to wager, I would say that most beginner anglophones probably could notice a difference between "bon" and "bonne" lined up right next to each other in some totally artificial, abstract minimal pair test, but whether they will actually "hear" it in the wild (be conscious of it and be able to report correctly what has been said) is a very different question.
I think 'bon' and 'bonne' are very different sounding, and think it's very easy to pick up on if you pay attention. Not hard at all.
If you pay attention, and when you pay attention – and therein lies the problem.
When we direct conscious attention to the input, we are more capable of listening to the sound of speech “in the raw”. We can then trick ourselves into believing that our brain can perceive the sound; but as soon as we stop paying conscious attention, the brain falls back onto the tried and trusted strategy of filtering the sound for phonemes.
This is why the studies I've seen support the idea that conscious minimal pair discrimination practice is valueless for improving speaking performance – people trained in minimal pair discrimination get much better at the conscious task of listening for those differences, but this doesn't train the unconscious part of the brain to attend to them.