Other publications of authors with the same name

- Invited: Algorithm-Software-Hardware Co-Design for Deep Learning Acceleration. DAC, pages 1-4. IEEE, 2023.
- TAAS: A Timing-Aware Analytical Strategy for AQFP-Capable Placement Automation. DAC, pages 1321-1326. ACM, 2022.
- FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization: Late Breaking Results. DAC, pages 1394-1395. ACM, 2022.
- Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers. CoRR, 2024.
- Mixed-Cell-Height Placement With Complex Minimum-Implant-Area Constraints. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 41(11): 4639-4652, 2022.
- HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers. HPCA, pages 442-455. IEEE, 2023.
- Timing-Driven Placement for FPGAs with Heterogeneous Architectures and Clock Constraints. DATE, pages 1564-1569. IEEE, 2021.
- HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression. CoRR, 2024.
- Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization. FPL, pages 109-116. IEEE, 2022.
- Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training. AAAI, pages 8360-8368. AAAI Press, 2023.