
HISQ inverter on Intel Xeon Phi and NVIDIA GPUs

(2014). arXiv:1409.1510. Comment: 7 pages, proceedings, presented at the 32nd International Symposium on Lattice Field Theory (Lattice 2014), June 23 to June 28, 2014, New York, USA.

Abstract

The runtime of a Lattice QCD simulation is dominated by a small kernel, which multiplies a vector by a sparse matrix known as the "Dslash" operator. This kernel is therefore frequently optimized for various HPC architectures. In this contribution we compare the performance of the Intel Xeon Phi to current Kepler-based NVIDIA Tesla GPUs running a conjugate gradient solver. By exposing more parallelism to the accelerator through inverting multiple vectors at the same time, we obtain a performance of 250 GFlop/s on both architectures. This more than doubles the performance of the inversions. We give a short overview of both architectures, discuss some details of the implementation, and describe the effort required to obtain the achieved performance.
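
The abstract's key optimization is running the conjugate gradient solver for several right-hand sides in lock-step, so each pass over the sparse matrix is reused for all vectors. The following is a minimal, illustrative sketch of that idea in plain C++; it is not the authors' code, and all names (CsrMatrix, spmvMulti, multiRhsCg) and the CSR storage format are assumptions made here for the example.

```cpp
// Minimal sketch (not the paper's implementation): conjugate gradient run
// simultaneously on several right-hand sides, amortizing the sparse
// matrix-vector product over all of them.
#include <cstdio>
#include <vector>
#include <cmath>

struct CsrMatrix {                        // symmetric positive-definite matrix in CSR form
    int n;                                // dimension
    std::vector<int> rowPtr, col;
    std::vector<double> val;
};

// y[r] = A * x[r] for every right-hand side r; the matrix is streamed once.
static void spmvMulti(const CsrMatrix& A,
                      const std::vector<std::vector<double>>& x,
                      std::vector<std::vector<double>>& y) {
    const int nrhs = (int)x.size();
    for (int i = 0; i < A.n; ++i) {
        std::vector<double> acc(nrhs, 0.0);
        for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k)
            for (int r = 0; r < nrhs; ++r)
                acc[r] += A.val[k] * x[r][A.col[k]];   // one matrix load, nrhs FMAs
        for (int r = 0; r < nrhs; ++r) y[r][i] = acc[r];
    }
}

// Plain CG iterated in lock-step over all right-hand sides (initial guess x = 0).
static void multiRhsCg(const CsrMatrix& A,
                       const std::vector<std::vector<double>>& b,
                       std::vector<std::vector<double>>& x,
                       int maxIter, double tol) {
    const int nrhs = (int)b.size(), n = A.n;
    std::vector<std::vector<double>> r = b, p = b, Ap(nrhs, std::vector<double>(n));
    std::vector<double> rr(nrhs, 0.0);
    for (int s = 0; s < nrhs; ++s)
        for (int i = 0; i < n; ++i) rr[s] += r[s][i] * r[s][i];
    for (int it = 0; it < maxIter; ++it) {
        spmvMulti(A, p, Ap);                           // shared "Dslash-like" kernel call
        bool done = true;
        for (int s = 0; s < nrhs; ++s) {
            double pAp = 0.0;
            for (int i = 0; i < n; ++i) pAp += p[s][i] * Ap[s][i];
            const double alpha = rr[s] / pAp;
            double rrNew = 0.0;
            for (int i = 0; i < n; ++i) {
                x[s][i] += alpha * p[s][i];
                r[s][i] -= alpha * Ap[s][i];
                rrNew   += r[s][i] * r[s][i];
            }
            const double beta = rrNew / rr[s];
            for (int i = 0; i < n; ++i) p[s][i] = r[s][i] + beta * p[s][i];
            rr[s] = rrNew;
            if (std::sqrt(rrNew) > tol) done = false;
        }
        if (done) break;                               // all systems converged
    }
}

int main() {
    // 3x3 SPD test matrix [[4,1,0],[1,3,0],[0,0,2]] with two right-hand sides.
    CsrMatrix A{3, {0, 2, 4, 5}, {0, 1, 0, 1, 2}, {4, 1, 1, 3, 2}};
    std::vector<std::vector<double>> b = {{1, 2, 3}, {4, 5, 6}},
                                     x(2, std::vector<double>(3, 0.0));
    multiRhsCg(A, b, x, 100, 1e-12);
    for (const auto& xi : x) std::printf("x = %.4f %.4f %.4f\n", xi[0], xi[1], xi[2]);
    return 0;
}
```

The point of the lock-step structure is visible in spmvMulti: the matrix element is loaded once and applied to all right-hand sides, which raises the arithmetic intensity of the memory-bound kernel, the effect the abstract credits for more than doubling the inversion performance.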
