Python GPU programming

There are many different ways to work with GPUs using Python. This page provides a discussion of the foundations behind working with GPUs, from the fundamental choice between CUDA and OpenCL to what it means to compile a kernel. It then covers the dominant approaches for using CUDA with Python.

==Foundations==

===CUDA vs. OpenCL===

At a fundamental level, using a GPU for computing means using [https://en.wikipedia.org/wiki/CUDA CUDA], [https://en.wikipedia.org/wiki/OpenCL OpenCL], or some other interface (OpenGL compute, Microsoft's DirectCompute, etc.). The big trade-off between CUDA and OpenCL is proprietary performance vs. open-source generality. Usually, I favour the latter. However, at this point, the nVIDIA chipsets dominate the market and CUDA (which only runs on nVIDIA) seems to be the obvious choice. There have also been some attempts to make CUDA run on CL.

===CUDA for C++ or Fortran===

If you are coding in C/C++ then you can compile CUDA code into PTX (a low-level virtual machine language that runs on GPUs) with the nVIDIA CUDA Compiler (nvcc). nvcc separates out the CUDA code that will run on the GPU, compiles it to PTX, and leaves the rest to be compiled using your regular compiler (likely GCC or the Microsoft Visual C compiler). Likewise, nVIDIA provides a dedicated Fortran CUDA compiler (nVIDIA bought the Portland Group, Inc. -- PGI -- to this end).

===Approaches for other languages===

However, if you want to write GPU compute code in Python, Perl, Java, Matlab, or a host of other languages, you'll need to think carefully about which of the offered approaches is right for you. There are broadly four classes of approaches:

#Getting a language-specific compiler that isn't made by nVIDIA!
#Wrapping CUDA C++ (or Fortran) code directly into your code
#Using the low-level CUDA video driver API
#Using the higher-level CUDA Runtime API, which sits on top of the low-level CUDA video library (i.e., using both APIs)

The distinction between the last two is that only the Runtime API gives you access to the full set of libraries including:

*cuBLAS – CUDA Basic Linear Algebra Subroutines library
*cuSOLVER – CUDA-based collection of dense and sparse direct solvers
*cuSPARSE – CUDA Sparse Matrix library

If you're an economist, then these libraries are very likely what you're going to want! (If you're a physicist, or doing signal processing, then you'll probably want cuFFT and other libraries that are also in the Runtime API).

For the second option, you'll need to use SWIG (Simplified Wrapper and Interface Generator) or something that gives you equivalent functionality for your language, such as Cython for Python. (NPCUDA is a simple project to demo both of these methods in Python.) The major advantage of this option is that you aren't hitching your horse to the continued support of a whole series of intermediate APIs.
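To see the shape of the wrapping approach without pulling in SWIG or Cython, here is a minimal sketch that calls a CUDA-backed shared library from Python via the standard library's ctypes module. The library name (libsaxpy.so), the exported function (saxpy), and the nvcc build line in the comments are hypothetical, just to illustrate the pattern:

<pre>
# Sketch only: assumes a hypothetical libsaxpy.so built with nvcc, e.g.
#   nvcc -Xcompiler -fPIC -shared saxpy.cu -o libsaxpy.so
# exporting: extern "C" void saxpy(int n, float a, float *x, float *y);
import ctypes
import numpy as np

lib = ctypes.CDLL("./libsaxpy.so")  # hypothetical library name
lib.saxpy.argtypes = [ctypes.c_int, ctypes.c_float,
                      ctypes.POINTER(ctypes.c_float),
                      ctypes.POINTER(ctypes.c_float)]
lib.saxpy.restype = None

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.arange(n, dtype=np.float32)

# The C/CUDA side is responsible for cudaMalloc/cudaMemcpy; here we just
# hand it host pointers and let the wrapper do the device round trip.
lib.saxpy(n, 2.0,
          x.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
          y.ctypes.data_as(ctypes.POINTER(ctypes.c_float)))
print(y[:4])  # expect [2. 3. 4. 5.] if the kernel ran
</pre>

SWIG or Cython generate this glue for you (and handle type conversions more safely), but the division of labour is the same: compiled CUDA code on one side, a thin Python-facing wrapper on the other.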

'''Notice for Perl Programmers:''' A quick look at the GPU support for Perl suggests that SWIG is the way to go!

===Compiling a Kernel===

In the language of GPU computing, we need to compile a kernel to run on the GPU. Some packages (discussed later) abstract away how GPUs handle memory and processing, but you should be aware of the fundamentals as they are often very important to maximizing the code's performance: if you understand the hardware implementation, you can tune for it!

Chi Wei Cliburn Chan, an associate professor of Biostatistics and Bioinformatics at Duke, teaches lots of great classes and provides a guide to Massively parallel programming with GPUs as a part of his Computational Statistics in Python class (note that the 2018 version of his STA 663: Computational Statistics and Statistical Computing class, under the same course number, has sections on Spark, Tensorflow, Cython, and more!). This guide has a pretty good walk-through of how a CUDA kernel runs, though it is missing some images.

The key things that you need to know are:

* One '''kernel''' is executed at a time on a device.
* Many '''threads''' execute each kernel - each thread runs the same code but on different data (based on its threadID).
* Threads are grouped into '''blocks''' and a kernel runs on a '''grid''' of blocks.
* Blocks can't synchronize. They can run concurrently or sequentially.
* Threads have local memory ('''registers''', ~1 clock cycle), blocks have '''shared memory''' (~10 clock cycles), and kernels have '''per-device global memory''' (~100s to 1000 clock cycles).
* Per-device memory can transfer data to/from the CPU, and includes '''global''', '''local''' (for consecutive access by a thread), '''constant''' (much faster than other per-device memory), and some specialized memories for graphics ('''texture''' and surface).
* Transfers from global memory to local registers are in 4, 8, or 16 byte units (other access patterns incur a penalty, which slows things down). Threads can talk to constant and texture memory.
* Blocks should have dimension >=32 (see warps below).
* A GPU device is a set of '''[https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads SIMT] multiprocessors'''.
* The number of threads in a '''warp''' is the "warp size". It's usually 32. You can find yours by running the deviceQuery utility provided in the samples folder (see [[DIGITS DevBox#Test the installation]]). Warps are then grouped into blocks.
* At each clock cycle, a multiprocessor executes the same instruction on a warp. Threads within a warp are executed physically in parallel; warps and blocks are executed logically in parallel.
* Kernel launches are asynchronous - the CPU hands off the kernel and moves on. The kernel only executes once all previous CUDA calls have completed.
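To make these terms concrete, here is a minimal sketch of a vector-add kernel written with Numba's CUDA target (Numba is discussed below). It assumes the numba package is installed and a CUDA-capable GPU with a working driver is available:

<pre>
# Minimal sketch: one kernel, many threads, grouped into blocks on a grid.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(x, y, out):
    i = cuda.grid(1)          # global index = blockIdx.x * blockDim.x + threadIdx.x
    if i < x.size:            # guard: the grid may be larger than the data
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = np.arange(n, dtype=np.float32)

d_x = cuda.to_device(x)               # copy host -> device global memory
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads_per_block = 128               # a multiple of the warp size (32)
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block

vector_add[blocks_per_grid, threads_per_block](d_x, d_y, d_out)  # asynchronous launch
out = d_out.copy_to_host()            # copying back synchronizes with the kernel
print(out[:4])                        # [1. 2. 3. 4.]
</pre>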

==CUDA and Python==

There are lots of different ways to use GPUs with Python. Here's a partial list:

===Tensorflow===

Tensors are multi-dimensional arrays with a uniform type. Read all about them and TensorFlow here: https://www.tensorflow.org/guide/. The highlights are:

*Use it as a library in Python, importing methods.
*It plays well with NumPy!
*It includes Keras (tf.keras) as a high-level API, allowing custom ANN development.
*TensorFlow has both low-level methods (e.g., tf.Variable, tf.math, tf.GradientTape.jacobian, etc.) and pre-built estimators (e.g., tf.estimator.LinearClassifier); see the sketch after this list.
*Crucially, TensorFlow lets users build graphs and tf.functions that exist and persist beyond the Python interpreter (analogous to kernels).
*You can define custom models and layers for machine learning.
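As a quick illustration of the low-level pieces (variables, gradients, and a tf.function traced into a graph), here is a minimal sketch assuming TensorFlow is installed with GPU support:

<pre>
import tensorflow as tf

# Check whether TensorFlow can see a GPU at all.
print(tf.config.list_physical_devices('GPU'))

# Low-level pieces: variables, math, and automatic differentiation.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x))   # tf.Tensor(6.0, ...)

# A tf.function is traced into a graph that persists beyond eager Python calls.
@tf.function
def matmul(a, b):
    return tf.linalg.matmul(a, b)

a = tf.random.normal((1024, 1024))
print(matmul(a, a).shape)    # (1024, 1024); runs on the GPU if one is visible
</pre>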

===CuPy===

CuPy is a NumPy-compatible array library (it's really a 'drop-in' replacement) accelerated by CUDA: https://cupy.dev/.

*It leverages CUDA-related libraries including cuBLAS, cuDNN, cuRAND, cuSOLVER, cuSPARSE, cuFFT and NCCL.
*Lets users build elementwise or reduction kernels, or raw kernels that are defined using raw CUDA source (both are sketched below).
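Here is a minimal sketch of both styles, assuming the cupy package is installed against your CUDA toolkit; the kernel names and bodies are just illustrative:

<pre>
import cupy as cp
import numpy as np

# Drop-in NumPy style: arrays live in GPU global memory.
a = cp.arange(10, dtype=cp.float32)
b = cp.ones(10, dtype=cp.float32)
print(cp.asnumpy(a + b))              # copy the result back to the host

# A user-defined elementwise kernel compiled from a CUDA snippet.
squared_diff = cp.ElementwiseKernel(
    'float32 x, float32 y',           # inputs
    'float32 z',                      # output
    'z = (x - y) * (x - y)',          # per-element CUDA C body
    'squared_diff')                   # kernel name (illustrative)
print(squared_diff(a, b))

# A raw kernel written directly in CUDA source.
add_one = cp.RawKernel(r'''
extern "C" __global__ void add_one(float* x, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}''', 'add_one')
add_one((1,), (32,), (a, np.int32(a.size)))   # grid, block, arguments
print(a)
</pre>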

===Numba===

Numba translates Python functions to optimized machine code at runtime: https://numba.pydata.org/

*Define threads, grids, and blocks and manage them with 'facilities like those exposed by CUDA C' (see the vector-add sketch in the Compiling a Kernel section above).
*NumPy functions can't be called inside CUDA kernels because of memory management issues, though NumPy arrays are fine as kernel and ufunc arguments (see the sketch below).
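Besides hand-written kernels, Numba can also compile NumPy-style ufuncs for the GPU. A minimal sketch, assuming numba and a CUDA-capable GPU:

<pre>
import numpy as np
from numba import vectorize

# A ufunc compiled for the CUDA target: Numba handles the transfers,
# kernel launch, and copy back for plain NumPy array inputs.
@vectorize(['float32(float32, float32)'], target='cuda')
def rel_diff(x, y):
    return (x - y) / (x + y)

x = np.random.rand(1_000_000).astype(np.float32)
y = np.random.rand(1_000_000).astype(np.float32)
print(rel_diff(x, y)[:4])
</pre>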

===PyCUDA===

PyCUDA is a wrapper of CUDA's API for Python: https://wiki.tiker.net/PyCuda/

*Essentially gives you access to CUDA's methods, and handles memory allocation and cleanup (see the sketch below).
*Doesn't work with cuBLAS. PyCUDA is based on the CUDA driver API, which is mutually exclusive with the CUDA runtime API (used by cuBLAS).
*Does reimplement a part of cuBLAS as GPUArray.
*Works with NumPy.
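A minimal sketch in the style of PyCUDA's introductory examples, assuming pycuda is installed and can find your CUDA toolkit:

<pre>
import numpy as np
import pycuda.autoinit                 # creates a context on the default device
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# Compile a raw CUDA kernel at runtime with nvcc.
mod = SourceModule("""
__global__ void double_it(float *x)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    x[i] *= 2.0f;
}
""")
double_it = mod.get_function("double_it")

x = np.arange(256, dtype=np.float32)
double_it(drv.InOut(x), block=(256, 1, 1), grid=(1, 1))  # PyCUDA handles the copies
print(x[:4])                           # [0. 2. 4. 6.]

# GPUArray: the NumPy-like layer that reimplements part of cuBLAS-style functionality.
a = gpuarray.to_gpu(np.random.rand(1024).astype(np.float32))
print((a * 2).get()[:4])
</pre>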

===Other options===

There's also [https://pypi.org/project/pyopencl/ PyOpenCL] (another wrapper, but of OpenCL instead), [https://pytorch.org/ PyTorch] (a tensor library that provides a replacement for NumPy for working with GPUs), [https://github.com/Theano/Theano Theano], [https://pypi.org/project/Hebel/ Hebel] (a deep-learning library on top of PyCUDA and NumPy), [https://caffe2.ai/docs/getting-started.html Caffe2] (which uses cuDNN C++ libraries to provide a deep learning framework), and likely some other things that I have forgotten about.
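As one example, the PyTorch route looks roughly like this (a minimal sketch, assuming torch is installed with CUDA support):

<pre>
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

a = torch.randn(1024, 1024, device=device)   # allocated directly on the device
b = torch.randn(1024, 1024, device=device)
c = a @ b                                     # matrix multiply runs on the GPU
print(c.device, c.shape)
print(c.cpu().numpy()[:2, :2])                # copy back for NumPy interop
</pre>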

==Other things==

The NVIDIA CUDA Toolkit 7.5 is available for free on Amazon Linux: https://aws.amazon.com/marketplace/pp/B01LZMLK1K

When developing CUDA code you'll want (both are installed as a part of the CUDA Software Development Kit):

*Nvidia Nsight Eclipse Edition for C/C++
*Nvidia Visual Profiler

There's a (probably pretty bad) book which has chapters on working with CUDA in Python: https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781789341072/6/ch06lvl1sec55/configuring-pycuda-on-your-python-ide

===IDEs===

You'll likely want to write your Python in an IDE that supports your CUDA development. It looks like PyCharm CE will work with an environment: https://medium.com/@ashkan.abbasi/quick-guide-for-installing-python-tensorflow-and-pycharm-on-windows-ed99ddd9598. But if we use a Docker container, then we'll need PyCharm PE or Visual Studio (which should work fine with Python): https://www.analyticsvidhya.com/blog/2020/08/docker-based-python-development-with-cuda-support-on-pycharm-and-or-visual-studio-code/. Visual Studio looks like it allows remote development, using either Docker containers or ssh, which might be very nice: https://devblogs.microsoft.com/python/remote-python-development-in-visual-studio-code/. I'm not sure whether PyCharm CE has this feature, though PE does: https://www.jetbrains.com/help/idea/configuring-remote-python-sdks.html.

===Matlab===

If you want to use Matlab then Garland's GPU workshop is a good place to start: http://www.quantosanalytics.org/calpoly/gpu_workshop/index.html