#3722. End-to-end acoustic modelling for phone recognition of young readers
Publication date: October 2026
Proposal available till: 08-06-2025
Total number of authors per manuscript: 4 | 0 $
The journal's title is available only to authors who have already paid.
Journal’s subject area:
Language and Linguistics;
Linguistics and Language;
Communication;
Modeling and Simulation;
Computer Science Applications;
Computer Vision and Pattern Recognition;
Software.
Places in the authors’ list:
1st place - free (for sale)
2nd place - free (for sale)
3rd place - free (for sale)
4th place - free (for sale)
More details about the manuscript: indexed in Science Citation Index Expanded and/or Social Sciences Citation Index
Abstract:
Automatic speech recognition systems for child speech lag behind those for adult speech in performance. This gap stems from the high acoustic and linguistic variability of child speech, caused by children's ongoing physical development, as well as from the scarcity of available child speech data. Young readers' speech additionally displays peculiarities, such as a slow reading rate and the presence of reading mistakes, that make the task harder. This work tackles the main challenges of phone-level acoustic modelling for young children's speech with limited data and improves understanding of the strengths and weaknesses of a wide selection of model architectures in this domain. We find that transfer learning techniques are highly effective with end-to-end architectures for adult-to-child adaptation using a small amount of child speech data. Through transfer learning, a Transformer model complemented with a Connectionist Temporal Classification (CTC) objective function reaches a phone error rate of 28.1%, outperforming a state-of-the-art DNN–HMM model by 6.6% relative, as well as other end-to-end architectures by more than 8.5% relative. An analysis of the models' performance on two specific reading tasks (isolated words and sentences) shows the influence of utterance length on attention-based and CTC-based models. The Transformer+CTC model is better at detecting reading mistakes made by children, which can be attributed to the CTC objective function effectively constraining the attention mechanism to be monotonic.
Keywords:
Child speech; Connectionist temporal classification; Low-resource; Phone recognition; Transfer learning; Transformer
Contacts: