Author of the publication

CuWide: Towards Efficient Flow-based Training for Sparse Wide Models on GPUs (Extended Abstract).

ICDE, pages 2330-2331. IEEE, 2021.


Other publications of authors with the same name

SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization. CVPR, pages 11216-11225. Computer Vision Foundation / IEEE, 2019.

Optimizing Dynamic Neural Networks with Brainstorm. OSDI, pages 797-815. USENIX Association, 2023.

Welder: Scheduling Deep Learning Memory Access via Tile-graph. OSDI, pages 701-718. USENIX Association, 2023.

SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute. OSDI, pages 213-232. USENIX Association, 2022.

NeuGraph: Parallel Deep Neural Network Computation on Large Graphs. USENIX ATC, pages 443-458. USENIX Association, 2019.

CuWide: Towards Efficient Flow-based Training for Sparse Wide Models on GPUs (Extended Abstract). ICDE, pages 2330-2331. IEEE, 2021.

ROLLER: Fast and Efficient Tensor Compilation for Deep Learning. OSDI, pages 233-248. USENIX Association, 2022.

The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits. CoRR, 2024.

Ladder: Enabling Efficient Low-Precision Deep Learning Computing through Hardware-aware Tensor Transformation. OSDI, pages 307-323. USENIX Association, 2024.

Cocktailer: Analyzing and Optimizing Dynamic Control Flow in Deep Learning. OSDI, pages 681-699. USENIX Association, 2023.