Transformers in speech processing: Overcoming challenges and paving the future
Latif, Siddique ; Zaidi, Syed Aun Muhammad ; Cuayahuitl, Heriberto ; Shamshad, Fahad ; Shoukat, Moazzam ; Usama, Muhammad ; Qadir, Junaid
Department
Computer Vision
Type
Journal article
Date
2025
Language
English
Abstract
The remarkable success of transformers in the field of natural language processing has sparked interest in their potential for modelling long-range dependencies within speech sequences. Transformers have gained prominence across various speech-related domains, including automatic speech recognition, speech synthesis, speech translation, speech para-linguistics, speech enhancement, spoken dialogue systems, and numerous multimodal applications. However, the integration of transformers in speech processing comes with significant challenges such as managing the high computational costs, handling the complexity of speech variability, and addressing the data scarcity for certain speech tasks. In this paper, we present a comprehensive survey that aims to bridge research studies from diverse subfields within speech technology. By consolidating findings from across the speech technology landscape, we provide a valuable resource for researchers interested in harnessing the power of transformers to advance the field. We identify the challenges encountered by transformers in speech processing while also offering insights into potential solutions to address these issues.
Citation
S. Latif et al., “Transformers in speech processing: Overcoming challenges and paving the future,” Comput Sci Rev, vol. 58, p. 100768, Nov. 2025, doi: 10.1016/j.cosrev.2025.100768
Source
Computer Science Review
Keywords
Transformer, Speech processing, Automatic speech recognition, Deep learning
Publisher
Elsevier
