ETSI ES 202 211 V1.1.1 (2003-11)

ETSI Standard

Speech Processing, Transmission and Quality Aspects (STQ);
Distributed speech recognition;
Extended front-end feature extraction algorithm;
Compression algorithms;
Back-end speech reconstruction algorithm

Reference
DES/STQ-00030

Keywords
performance, speech, transmission

ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00   Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la Sous-Préfecture de Grasse (06) N° 7803/88

Important notice

Individual copies of the present document can be downloaded from: http://www.etsi.org

The present document may be made available in more than one electronic version or in print. In any case of existing or perceived difference in contents between such versions, the reference version is
the Portable Document Format (PDF). In case of dispute, the reference shall be the printing on ETSI printers of the PDF version kept on a specific network drive within ETSI Secretariat.

Users of the present document should be aware that the document may be subject to revision or change of status. Information on the current status of this and other ETSI documents is available at http://portal.etsi.org/tb/status/status.asp

If you find errors in the present document, send your comment to: editor@etsi.org

Copyright Notification

No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media.

© European Telecommunications Standards Institute 2003. All rights reserved.

DECT™, PLUGTESTS™ and UMTS™ are Trade Marks of ETSI registered for the benefit of its Members. TIPHON™ and the TIPHON logo are Trade Marks currently being registered by ETSI for the benefit of its Members. 3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.

Contents

Intellectual Property Rights
Foreword
Introduction
1 Scope
2 References
3 Definitions, symbols and abbreviations
3.1 Definitions
3.2 Symbols
3.3 Abbreviations
4 Front-end feature extraction algorithm
4.1 Introduction
4.2 Front-end algorithm description
4.2.1 Front-end block diagram
4.2.2 Analog-to-digital conversion
4.2.3 Offset compensation
4.2.4 Framing
4.2.5 Energy measure
4.2.6 Pre-Emphasis (PE)
4.2.7 Windowing (W)
4.2.8 Fast Fourier Transform (FFT)
4.2.9 Mel-Filtering (MF)
4.2.10 Non-linear transformation
4.2.11 Cepstral coefficients
4.2.12 Voice Activity Detection (VAD)
4.2.13 Low-Band Noise Detection (LBND)
4.2.14 Pre-Processing for pitch and class estimation
4.2.15 Pitch estimation
4.2.15.1 Dirichlet interpolation
4.2.15.2 Non-speech and low-energy frames
4.2.15.3 Search ranges specification and processing
4.2.15.4 Spectral peaks determination
4.2.15.5 F0 Candidates generation
4.2.15.6 Computing correlation scores
4.2.15.7 Pitch estimate selection
4.2.15.8 History information update
4.2.15.9 Output pitch value
4.2.16 Classification
4.2.17 Front-end output
5 Feature compression algorithm
5.1 Introduction
5.2 Compression algorithm description
5.2.1 Input
5.2.2 Vector quantization
5.2.3 Pitch and class quantization
5.2.3.1 Class quantization
5.2.3.2 Pitch quantization
6 Framing, bit-stream formatting, and error protection
6.1 Introduction
6.2 Algorithm description
6.2.1 Multiframe format
6.2.2 Synchronization sequence
6.2.3 Header field
6.2.4 Frame Packet Stream
7 Bit-stream decoding and error mitigation
7.1 Introduction
7.2 Algorithm description
7.2.1 Synchronization sequence detection
7.2.2 Header decoding
7.2.3 Feature decompression
7.2.4 Error mitigation
7.2.4.1 Detection of frames received with errors
7.2.4.2 Substitution of parameter values for frames received with errors
7.2.4.3 Modification of parameter values for frames received with errors
8 Server side speech reconstruction
8.1 Introduction
8.2 Algorithm description
8.2.1 Speech reconstruction block diagram
8.2.2 Pitch tracking and smoothing
8.2.2.1 First stage - gross pitch error correction
8.2.2.2 Second stage - voiced/unvoiced decision and other corrections
8.2.2.3 Third stage - smoothing
8.2.2.4 Voicing class correction
8.2.3 Harmonic structure initialization
8.2.4 Unvoiced Phase (UPH) synthesis
8.2.5 Harmonic magnitudes reconstruction
8.2.5.1 High order cepstra recovery
8.2.5.2 Solving front-end equation
8.2.5.3 Cepstra to magnitudes transformation
8.2.5.4 Combined magnitudes estimate calculation
8.2.5.4.1 Combined magnitude estimate for unvoiced harmonics
8.2.5.4.2 Combined magnitude estimate for voiced harmonics
8.2.6 All-pole spectral envelope modelling
8.2.7 Postfiltering
8.2.8 Voiced phase synthesis
8.2.9 Line spectrum to time-domain transformation
8.2.9.1 Mixed-voiced frames processing
8.2.9.2 Filtering very high-frequency harmonics
8.2.9.3 Energy normalization
8.2.9.4 STFT spectrum synthesis
8.2.9.5 Inverse FFT
8.2.10 Overlap-Add
Annex A (informative): Bibliography
History

Intellectual Property Rights

IPRs essential or potentially essential to the present document may have been declared to ETSI. The information pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web server (http://webapp.etsi.org/IPR/home.asp).

Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become, essential to the present document.

Foreword

This ETSI Standard (ES) has been produced by ETSI Technical Committee Speech Processing, Transmission and Quality Aspects (STQ).

Introduction

The performance of speech recognition systems receiving speech that has been transmitted over mobile channels can be significantly degraded when compared to using an unmodified signal. The degradations are a result of both the low bit rate speech coding and channel transmission errors. A Distributed Speech Recognition (DSR) system overcomes these problems by eliminating the speech channel and instead using an error protected data channel to send a parameterized representation of the speech, which is suitable
for recognition. The processing is distributed between the terminal and the network. The terminal performs the feature parameter extraction, or the front-end of the speech recognition system. These features are transmitted over a data channel to a remote "back-end" recognizer. The end result is that the transmission channel does not affect the recognition system performance and channel invariability is achieved.

ES 201 108 [1] specifies the mel-cepstrum Front-End (FE) to ensure compatibility between the terminal and the remote recognizer. For some applications, it may be necessary to reconstruct the speech waveform at the back-end. Examples include:

- Interactive Voice Response (IVR) services based on the DSR of "sensitive" information, such as banking and brokerage transactions. DSR features may be stored for future human verification purposes or to satisfy procedural requirements.

- Human verification of utterances in a speech database collected from a deployed DSR system. This database can then be used to retrain and tune models in order to improve system performance.

- Applications where machine and human recognition are mixed (e.g. human assisted dictation).

In order to enable the reconstruction of the speech waveform at the back-end, additional parameters such as fundamental frequency (F0) and voicing class need to be extracted at the front-end, compressed, and transmitted. The availability of tonal parameters (F0 and voicing class) is also useful in enhancing the recognition accuracy of tonal languages, e.g. Mandarin, Cantonese, and Thai.

The present document specifies a proposed standard for an Extended Front-End (XFE) that extends the Mel-Cepstrum front-end with additional parameters, viz., fundamental frequency F0 and voicing class. It also specifies the back-end speech
reconstruction algorithm using the transmitted parameters.

1 Scope

The present document specifies algorithms for extended front-end feature extraction, their transmission, back-end pitch tracking and smoothing, and back-end speech reconstruction, which form part of a system for distributed speech recognition. The specification covers the following components:

a) the algorithm for front-end feature extraction to create Mel-Cepstrum parameters;

b) the algorithm for extraction of additional parameters, viz., fundamental frequency F0 and voicing class;

c) the algorithm to compress these features to provide a lower data transmission rate;

d) the formatting of these features with error protection into a bitstream for transmission;

e) the decoding of the bitstream to generate the front-end features at a receiver together with the associated algorithms for channel error mitigation;

f) the algorithm for pitch tracking and smoothing at the back-end to minimize pitch errors;

g) the algorithm for speech reconstruction at the back-end to synthesize intelligible speech.

NOTE: Components (a), (c), (d), and (e) are already covered by ES 201 108 [1]. Besides these four components, the present document covers components (b), (f), and (g) to provide back-end speech reconstruction and enhanced tonal language recognition capabilities. If these capabilities are not of interest, the reader is better served by the (un-extended) ES 201 108 [1].

The present document does not cover the "back-end" speech recognition algorithms that make use of the received DSR front-end features.

The algorithms are defined in a mathematical form, as pseudo-code, or as flow diagrams. Software implementing these algorithms, written in the C programming language, is contained in the ZIP file es_202211v010101p0.zip which accompanies the present document. Conformance tests are not specified as part of the standard. The recognition performance of proprietary implementations of the standard can be compared with that obtained using the reference C code on appropriate speech databases.

It is anticipated that the DSR bitstream will be used as a payload in other higher level protocols when deployed in specific systems supporting DSR applications.

The Extended Front-End (XFE) standard incorporates tonal information, viz., fundamental frequency F0 and voicing class, as additional parameters. This information can be used to enhance the recognition accuracy of tonal languages, e.g. Mandarin, Cantonese, and Thai.

The XFE standard also incorporates Voice Activity information as part of the voicing class information. This can be used for segmentation (or
end-point detection) of the speech data for improved recognition performance.

2 References

The following documents contain provisions which, through reference in this text, constitute provisions of the present document. References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For a specific reference, subsequent revisions do not apply. For a non-specific reference, the latest version applies. Referenced documents which are not found to be publicly available in the expected location might be found at http://docbox.etsi.org/Reference.

[1] ETSI ES 201 108: "Speech Processing, Transmission and Quality Aspects (STQ); Distributed speech recognition; Front-end feature extraction algorithm; Compression algorithms".

[2] ETSI EN 300 903: "Digital cellular telecommunications system (Phase 2+); Transmission planning aspects of the speech service in the GSM Public Land Mobile Network (PLMN) system (GSM 03.50)".

3 Definitions, symbols and abbreviations

3.1 Definitions

For the purposes of the present document, the following terms and definitions apply:

analog-to-digital conversion: electronic process in which a continuously variable (analog) signal is changed, without altering its essential content, into a multi-level (digital) signal

DC offset: Direct Current (DC) component of the waveform signal

discrete cosine transform: process of transforming the log filterbank amplitudes into cepstral
coefficients

fast Fourier transform: fast algorithm for performing the discrete Fourier transform to compute the spectrum representation of a time-domain signal

feature compression: process of reducing the amount of data to represent the speech features calculated in feature extraction

feature extraction: process of calculating a compact parametric representation of speech signal features which are relevant for speech recognition

NOTE: The feature extraction process is carried out by the front-end algorithm.

feature vector: set of feature parameters (coefficients) calculated by the front-end algorithm over a segment of speech waveform

framing: process of splitting the continuous stream of signal samples into segments of constant length to facilitate blockwise processing of the signal

frame pair packet: combined data from two quantized feature vectors together with 4 bits of CRC

front-end: part of a speech recognition system which performs the process of feature extraction

magnitude spectrum: absolute-valued Fourier transform representation of the input signal

multiframe: grouping of multiple frame vectors into a larger data structure

mel-frequency warping: process of non-linearly modifying the scale of the Fourier transform representation of the spectrum

mel-frequency cepstral coefficients: cepstral coefficients calculated from the mel-frequency warped Fourier transform representation of the log magnitude spectrum

notch filtering: filtering process in which the otherwise flat frequency response of the filter has a sharp notch at a pre-defined frequency

NOTE: In the present document, the notch is placed at the zero frequency, to remove the DC component of the signal.

offset compensation: process of removing DC offset from a signal

pre-emphasis: filtering process in which the frequency response of the filter has emphasis at a given frequency range

NOTE: In the present document, the high-frequency range of the signal spectrum is pre-emphasized.

sampling rate: number of samples of an analog signal that are taken per second to represent it digitally
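The offset compensation, notch filtering, and pre-emphasis definitions above describe simple first-order filters. The sketch below is illustrative only: the coefficients 0.999 and 0.97 follow the companion ES 201 108-style front-end and are assumptions of this sketch, not values defined in the definitions clause.

```python
def offset_compensation(s_in, a=0.999):
    """Notch filter at zero frequency (DC removal):
    s_of[n] = s_in[n] - s_in[n-1] + a * s_of[n-1]."""
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in s_in:
        y = x - prev_in + a * prev_out
        out.append(y)
        prev_in, prev_out = x, y
    return out

def pre_emphasis(s, mu=0.97):
    """Emphasize the high-frequency range: s_pe[n] = s[n] - mu * s[n-1]."""
    return [x - mu * p for x, p in zip(s, [0.0] + list(s[:-1]))]

# A constant (DC) input decays towards zero after offset compensation.
dc = [1.0] * 1000
assert abs(offset_compensation(dc)[-1]) < 0.5
```

Both filters are causal and run in a single pass, which is why front-ends of this kind can apply them sample by sample before framing.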
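The mel-frequency warping and discrete cosine transform definitions can likewise be sketched. The mel formula used here, Mel(f) = 2595 * log10(1 + f/700), is the widely used warping curve and is an assumption of this sketch; the definitions clause itself does not fix the constants or the exact DCT basis.

```python
import math

def hz_to_mel(f_hz):
    """Commonly used mel-scale warping: Mel(f) = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def cepstral_coefficients(log_filterbank, num_ceps):
    """DCT of the log filterbank amplitudes to cepstral coefficients:
    C[i] = sum_j logE[j] * cos(pi * i * (j + 0.5) / N)."""
    n = len(log_filterbank)
    return [
        sum(e * math.cos(math.pi * i * (j + 0.5) / n)
            for j, e in enumerate(log_filterbank))
        for i in range(num_ceps)
    ]

# The warping is monotonic and compresses high frequencies: a 1 kHz step
# near 3 kHz spans fewer mels than a 1 kHz step starting at 0 Hz.
assert hz_to_mel(4000.0) - hz_to_mel(3000.0) < hz_to_mel(1000.0) - hz_to_mel(0.0)
```

The DCT decorrelates the log filterbank energies, which is what makes the resulting mel-frequency cepstral coefficients compact features for recognition.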