Analyzing Learned Representations of a Deep ASR Performance Prediction Model

Abstract: This paper addresses a relatively new task: prediction of ASR performance on unseen broadcast programs. In a previous paper, we presented an ASR performance prediction system using CNNs that encode both text (ASR transcript) and speech in order to predict word error rate. This work is dedicated to the analysis of the speech signal embeddings and text embeddings learnt by the CNN while training our prediction model. We try to better understand which information is captured by the deep model and its relation to different conditioning factors. It is shown that hidden layers convey a clear signal about speech style, accent and broadcast type. We then try to leverage these three types of information at training time through multi-task learning. Our experiments show that this allows us to train slightly more efficient ASR performance prediction systems that, in addition, simultaneously tag the analyzed utterances according to their speech style, accent and broadcast program origin.
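The abstract describes a shared CNN encoder trained jointly on the main WER-regression objective and auxiliary classification tasks (speech style, accent, broadcast program). The sketch below is not the authors' code; it is a minimal illustration of such a multi-task setup in PyTorch, with all layer sizes, class counts and loss weights chosen arbitrarily for the example.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# a shared CNN encoder feeds a WER regression head plus auxiliary
# classifiers for speech style, accent and broadcast program.
import torch
import torch.nn as nn

class MultiTaskWERPredictor(nn.Module):
    def __init__(self, feat_dim=40, n_styles=2, n_accents=2, n_programs=4):
        super().__init__()
        # Shared convolutional encoder over utterance features
        # (e.g., filterbank frames or embedded transcript tokens).
        self.encoder = nn.Sequential(
            nn.Conv1d(feat_dim, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # utterance-level embedding
        )
        self.wer_head = nn.Linear(128, 1)         # main task: WER regression
        self.style_head = nn.Linear(128, n_styles)    # auxiliary tasks
        self.accent_head = nn.Linear(128, n_accents)
        self.program_head = nn.Linear(128, n_programs)

    def forward(self, x):                         # x: (batch, feat_dim, time)
        h = self.encoder(x).squeeze(-1)           # (batch, 128)
        return (self.wer_head(h).squeeze(-1),
                self.style_head(h),
                self.accent_head(h),
                self.program_head(h))

def multitask_loss(outputs, wer, style, accent, program, aux_weight=0.3):
    # Main regression loss plus weighted auxiliary classification losses;
    # the weighting is an arbitrary placeholder.
    wer_pred, style_logits, accent_logits, program_logits = outputs
    loss = nn.functional.mse_loss(wer_pred, wer)
    loss += aux_weight * nn.functional.cross_entropy(style_logits, style)
    loss += aux_weight * nn.functional.cross_entropy(accent_logits, accent)
    loss += aux_weight * nn.functional.cross_entropy(program_logits, program)
    return loss
```

At inference time the same forward pass yields both a WER estimate and predicted style/accent/program labels for each utterance, which matches the joint prediction-and-tagging behaviour the abstract describes.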
Document type: Conference paper


https://hal.archives-ouvertes.fr/hal-01863293
Contributor: Benjamin Lecouteux
Submitted on: Tuesday, August 28, 2018 - 11:58:58 AM
Last modification on: Thursday, April 4, 2019 - 10:18:05 AM
Long-term archiving on: Thursday, November 29, 2018 - 3:22:49 PM

File: emnlp2018.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01863293, version 1

Citation

Zied Elloumi, Laurent Besacier, Olivier Galibert, Benjamin Lecouteux. Analyzing Learned Representations of a Deep ASR Performance Prediction Model. BlackboxNLP Workshop at EMNLP 2018, Nov 2018, Brussels, Belgium. ⟨hal-01863293⟩
