
Last level cache (LLC) performance of data mining workloads on a CMP - a case study of parallel bioinformatics workloads.

HPCA, pp. 88-98. IEEE Computer Society, (2006)


Other publications by persons with the same name

Compressing RNNs to Kilobyte Budget for IoT Devices Using Kronecker Products. ACM J. Emerg. Technol. Comput. Syst., 17(4): 46:1-46:18 (2021)
S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration. HPCA, pp. 573-586. IEEE, (2022)
Debiasing Model Updates for Improving Personalized Federated Training. ICML, volume 139 of Proceedings of Machine Learning Research, pp. 21-31. PMLR, (2021)
Searching for Winograd-aware Quantized Networks. MLSys, mlsys.org, (2020)
TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids. INTERSPEECH, pp. 4054-4058. ISCA, (2020)
Learning Low-precision Neural Networks without Straight-Through Estimator (STE). IJCAI, pp. 3066-3072. ijcai.org, (2019)
Tarantula: A Vector Extension to the Alpha Architecture. ISCA, pp. 281-292. IEEE Computer Society, (2002)
Measuring scheduling efficiency of RNNs for NLP applications. CoRR, (2019)
Mobile Machine Learning Hardware at ARM: A Systems-on-Chip (SoC) Perspective. CoRR, (2018)
On the effects of quantisation on model uncertainty in Bayesian neural networks. UAI, volume 161 of Proceedings of Machine Learning Research, pp. 929-938. AUAI Press, (2021)