We will now look at how to perform a general matrix-matrix multiplication (GEMM) with cuBLAS. This example will be a little more utilitarian than the previous cuBLAS examples we saw: we will use GEMM as a performance metric for our GPU, measuring the number of Floating Point Operations Per Second (FLOPS) it can perform. This will yield two separate values, one for single precision and one for double precision. Using GEMM is a standard technique for evaluating computing hardware in FLOPS, as it gives a much better picture of sheer computational power than a raw clock speed in MHz or GHz.
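Before timing anything on the GPU, it helps to pin down exactly how the FLOPS figure is derived. For C = alpha * A * B + beta * C with A of shape m x k and B of shape k x n, each of the m * n output elements requires roughly k multiplies and k adds, so the conventional operation count is 2 * m * n * k. The sketch below (the helper names `gemm_flops` and `gflops` are our own, not part of any library) shows this arithmetic in plain Python:

```python
# Conventional FLOP count for a GEMM, C = alpha*A@B + beta*C,
# where A is (m x k) and B is (k x n): each of the m*n output
# elements takes about k multiplies and k adds, hence 2*m*n*k.

def gemm_flops(m, n, k):
    """Total floating-point operations in an m x n x k GEMM."""
    return 2 * m * n * k

def gflops(num_flops, seconds):
    """Convert an operation count and elapsed wall time to GFLOPS."""
    return num_flops / (seconds * 10**9)

# Example: a 4096 x 4096 x 4096 GEMM that completes in 0.05 s
# corresponds to roughly 2.7 TFLOPS.
print(gflops(gemm_flops(4096, 4096, 4096), 0.05))
```

In the benchmark itself, the elapsed time would come from timing the cuBLAS GEMM call on the GPU, and this conversion gives the final single- or double-precision FLOPS figure.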
Level-3 GEMM in cuBLAS for measuring GPU performance
If you need a brief review, recall that we covered ...