SPEECH PERCEPTION: AUTONOMOUS VS INTERACTIVE
Author: John Field
Source: Psycholinguistics
Part and page: P278
As with reading, there has been controversy as to whether speech perception operates on a bottom-up principle, with features built into phonemes, phonemes into syllables, syllables into words, or whether an interactive account is the more correct one. The clash is between those who argue that an autonomous process (one level of processing at a time) streamlines decision making and those who argue that an interactive process enables multiple sources of evidence to be considered at once. See modularity (2).
Two considerations are critical:
Top-down effects do not derive simply from contextual cues but can also take the form of, say, lexical knowledge. For example, a listener’s knowledge of the existence of the word CIGARETTE might mean that they do not perceive the error when a speaker says SHIGARETTE. This is a top-down effect: from higher unit (word) to lower unit (phoneme).
A distinction must be made between the general effects of context upon understanding, which are not disputed, and the question of whether (for example) we actually, at the time of listening, believe that we have heard a particular sound because of our knowledge of higher-level features.
One can distinguish at least four views:
a. The autonomous view that neither lexical knowledge nor sentence context affects how we perceive sounds; they form later parts of the listening process.
b. The bottom-up priority view that some perceptual evidence is necessary before we bring in lexical or semantic information.
c. The lexical effects view that knowledge of words affects how we perceive the sounds of speech but that sentence context does not.
d. A full interactive view that all sources of information influence the way in which we perceive speech.
Several findings favour an interactive account:
The Ganong effect. Ganong (1980) presented listeners with synthesised variants of plosive sounds: for example a range which extended from a core value of [g] to a core value of [k]. He spliced them on to endings which created either a word or a non-word (example: GISS vs KISS). He established the point (see categorical perception) at which the listener might normally be expected to switch from reporting one sound to reporting the other, and found that it shifted when a potential word was involved. Thus, on the GISS/KISS continuum, the perception of [g] would change earlier than normal to a perception of [k]. This suggested that knowledge of a word affects processing at the lower level of phoneme perception. Ganong’s findings have since been questioned. The original experiment was repeated, using a more natural speech sample, and the effect was not found. It may be that the Ganong effect only obtains when degraded stimuli are involved.
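The boundary shift Ganong reported can be pictured as a change in the crossover point of a psychometric function. The short Python sketch below is purely illustrative: the seven-step continuum, the logistic slope and the size of the lexical shift are invented values, not figures from Ganong (1980). It simply shows how a bias towards the real word KISS would make ambiguous tokens more likely to be reported as /k/.

```python
# Illustrative sketch only: boundary, slope and shift are invented values,
# not data from Ganong (1980).
import math

def p_report_k(step, boundary, slope=1.5):
    """Logistic psychometric function: probability of reporting /k/
    at a given step of a [g]-[k] continuum."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

NEUTRAL_BOUNDARY = 4.0   # hypothetical boundary with no lexical bias
LEXICAL_SHIFT = 0.8      # hypothetical pull towards the real word KISS

for step in range(1, 8):  # a seven-step GISS-KISS continuum
    neutral = p_report_k(step, NEUTRAL_BOUNDARY)
    biased = p_report_k(step, NEUTRAL_BOUNDARY - LEXICAL_SHIFT)
    print(f"step {step}: neutral {neutral:.2f}  word-biased {biased:.2f}")
```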
Sentence context. Garnes and Bond (1976) constructed 16 synthetic stimuli which varied by degrees from BAIT to DATE to GATE. They inserted them into constraining sentence contexts such as ‘Check the time and the...’, and asked subjects to report what they heard. Where there was a good exemplar of /b/, /d/ or /g/, subjects identified the target words accurately, even when they made no sense (‘Check the time and the gate’). But where the exemplars were not good ones, subjects were influenced by the sentence context.
The phoneme restoration effect. Warren (1970) replaced a phonetic segment in certain words (e.g. the [s] in the word legislature) with a coughing sound. When the words were presented in sentences, subjects could not accurately indicate where the cough occurred. They heard the whole word legislature (with the [s] restored) and the cough as a background noise. This appeared to demonstrate top-down effects of word knowledge upon processing at the phonetic level. Another experiment showed phoneme restoration effects that were apparently due to sentence context. Presented with sentences such as a and b below, subjects restored the phoneme that was appropriate to the context.
a. It was found that the *eel was on the orange.
b. It was found that the *eel was on the shoe.
[* indicates location of cough]
The problem with many of the experimental tasks used to investigate this issue is that they are post-perceptual. They do not tell us what subjects thought at the time of processing but only what they reported afterwards. An ‘autonomous’ advocate might argue that the listener hears that the phoneme is missing, but restores it at a later (but separate) stage of processing.
This was tested in a further series of phoneme restoration experiments which checked whether subjects could distinguish between sentences where noise replaced a phoneme and those where noise accompanied it. If they could not, then it would suggest that their perception was indeed affected by top-down influences. The researcher (Samuel, 1990) concluded that lexical knowledge does affect phoneme recognition. But he also found (contrary to the eel findings) that sentence context does not.
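Discrimination performance in tasks of this kind is conventionally scored with signal-detection measures, where a discriminability index (d′) near zero means listeners cannot tell a noise-replaced phoneme from a noise-added one, i.e. restoration has genuinely affected perception. The Python sketch below uses invented hit and false-alarm rates purely to illustrate the calculation; it is not Samuel’s data or analysis.

```python
# Illustrative d' calculation with invented response rates
# (not data from Samuel, 1990).
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Discriminability index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

hits = 0.58          # saying 'noise added' when the phoneme really was present
false_alarms = 0.52  # saying 'noise added' when the phoneme was replaced

print(f"d' = {d_prime(hits, false_alarms):.2f}")  # near zero: poor discrimination
```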
See also: Interactive activation, Modularity (1), Top-down processing
Further reading: Harley (2001: 224–8)