#3501. Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking
Publication date: October 2026
Proposal available till: 26-05-2025
Total number of authors per manuscript: 4 | 0 $
The title of the journal is available only to authors who have already paid.
Journal’s subject areas:
Linguistics and Language;
Language and Linguistics;
Speech and Hearing;
Experimental and Cognitive Psychology;
Cognitive Neuroscience
Places in the authors’ list:
1st place - free (for sale)
2nd place - free (for sale)
3rd place - free (for sale)
4th place - free (for sale)
Abstract:
Listeners generally categorize speech sounds in a gradient manner. However, recent work using a visual analogue scaling (VAS) task suggests that some listeners respond more categorically, leading to less flexible cue integration and poorer recovery from misperceptions. We investigated three potential sources of these individual differences: inhibitory control, lexical inhibition, and early cue encoding. The N1 tracked voice onset time (VOT) linearly, reflecting fundamentally gradient speech perception; for less gradient listeners, however, this linearity was disrupted near the category boundary. Thus, while all listeners perceive speech gradiently, they may encode specific cues idiosyncratically, affecting downstream processing.
Keywords:
Categorical perception; Categorization; EEG; Gradiency; Individual differences; N100; P300; Speech perception; Visual analogue scale; Visual World Paradigm
Contacts: