Author of the publication

An Energy-Efficient Speech-Extraction Processor for Robust User Speech Recognition in Mobile Head-Mounted Display Systems.

, , , and . IEEE Trans. Circuits Syst. II Express Briefs, 64-II (4): 457-461 (2017)


Other publications of authors with the same name

An energy-efficient deep learning processor with heterogeneous multi-core architecture for convolutional neural networks and recurrent neural networks., , , , and . COOL Chips, page 1-2. IEEE Computer Society, (2017)

A 141.4 mW Low-Power Online Deep Neural Network Training Processor for Real-time Object Tracking in Mobile Devices., , , , and . ISCAS, page 1-5. IEEE, (2018)

An Energy-Efficient Speech-Extraction Processor for Robust User Speech Recognition in Mobile Head-Mounted Display Systems., , , and . IEEE Trans. Circuits Syst. II Express Briefs, 64-II (4): 457-461 (2017)

Talaria: Interactively Optimizing Machine Learning Models for Efficient Inference., , , , , , , , , and . CoRR, (2024)

A 0.53mW ultra-low-power 3D face frontalization processor for face recognition with human-level accuracy in wearable devices., , , , and . ISCAS, page 1-4. IEEE, (2017)

14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks., , , and . ISSCC, page 240-241. IEEE, (2017)

A 21mW low-power recurrent neural network accelerator with quantization tables for embedded deep learning applications., , and . A-SSCC, page 237-240. IEEE, (2017)

A 3.13nJ/sample energy-efficient speech extraction processor for robust speech recognition in mobile head-mounted display systems., , , and . ISCAS, page 1790-1793. IEEE, (2015)

14.1 A 126.1mW real-time natural UI/UX processor with embedded deep-learning core for low-power smart glasses., , , , , and . ISSCC, page 254-255. IEEE, (2016)

LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16., , , , , and . ISSCC, page 142-144. IEEE, (2019)