==CUDA and Python==
 
There are many different ways to use GPUs with Python. Here's a partial list:
 
===Tensorflow===
 
Tensors are multi-dimensional arrays with a uniform type. Read all about them and TensorFlow here: https://www.tensorflow.org/guide/. The highlights are:
*Use it as a library in python, importing methods.
*It plays well with NumPy!
*It sits on top of Keras, allowing custom ANN development.
*Tensorflow has both low-level methods (e.g., tf.Variable, tf.math, tf.GradientTape.jacobian, etc.) and pre-built estimators (e.g., tf.estimator.LinearClassifier).
*Crucially, tensorflow lets users build graphs and tf.functions that exist and persist beyond the Python interpreter (analogous to kernels)
*You can define custom models and layers for machine learning
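A minimal sketch of the tf.function and GradientTape workflow described above (assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# tf.function traces the Python function into a persistent graph,
# which is what lets it live beyond the interpreter.
@tf.function
def square(x):
    return x * x

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = square(x)          # y = x^2 = 9.0
grad = tape.gradient(y, x)  # dy/dx = 2x = 6.0
```

The same tape mechanism scales up to training custom models and layers: compute a loss inside the tape, then take gradients with respect to the model's variables.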
 
===CuPy===
 
CuPy is a NumPy-compatible array library (it's really a 'drop-in' replacement) accelerated by CUDA: https://cupy.dev/.
*It leverages CUDA-related libraries including cuBLAS, cuDNN, cuRAND, cuSOLVER, cuSPARSE, cuFFT and NCCL
*Lets users build elementwise or reduction kernels, or raw kernels defined from raw CUDA source
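Because CuPy mirrors the NumPy API, code can be written once and run on either backend. A hedged sketch (falling back to NumPy when CuPy or a GPU is unavailable):

```python
# CuPy and NumPy share the same array API, so `xp` can stand in for either.
try:
    import cupy as xp   # executes on the GPU via CUDA
except Exception:       # no CuPy / no usable GPU: fall back to the CPU
    import numpy as xp

a = xp.arange(6, dtype=xp.float32).reshape(2, 3)
total = float(a.sum())           # 15.0
norm = float(xp.linalg.norm(a))  # sqrt(0+1+4+9+16+25) = sqrt(55)
```

This `xp` idiom is common in CuPy codebases; CuPy also provides cupy.get_array_module() to pick the right module per-array at runtime.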
 
===Numba===
 
Numba translates Python functions to optimized machine code at runtime: https://numba.pydata.org/
*Define threads, grids, and blocks and manage them with 'facilities like those exposed by CUDA C'.
*Works with NumPy arrays on the host, but only a restricted subset of NumPy is supported inside CUDA kernels, and device memory must be managed or transferred explicitly.
 
===PyCUDA===
 
PyCUDA is a wrapper of CUDA's API for Python: https://wiki.tiker.net/PyCuda/
*Essentially gives you access to CUDA's methods, and handles memory allocation and cleanup.
*Doesn't work with cuBLAS. PyCUDA is based on the CUDA driver API, which is mutually exclusive with the CUDA runtime API (used by cuBLAS).
*Does reimplement a part of cuBLAS as GPUArray.
*Works with NumPy.
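A hedged sketch of the GPUArray interface mentioned above (guarded so it degrades gracefully where PyCUDA or a GPU is absent):

```python
import numpy as np

try:
    import pycuda.autoinit            # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray

    # GPUArray: a NumPy-like array resident in GPU memory
    x_gpu = gpuarray.to_gpu(np.arange(4, dtype=np.float32))
    doubled = (2 * x_gpu).get()       # compute on the GPU, copy back to host
    have_pycuda = True
except Exception:                     # no PyCUDA, or no usable GPU
    doubled = None
    have_pycuda = False
```

For anything beyond elementwise arithmetic, PyCUDA's SourceModule lets you compile raw CUDA C source and call the resulting kernels from Python, handling allocation and cleanup for you.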
 
===Other options===
 
There's also PyOpenCL (another wrapper but of OpenCL instead), and likely some other things that I have forgotten about.
 
==Other things==
 
The NVIDIA CUDA Toolkit 7.5 is available for free on Amazon Linux: https://aws.amazon.com/marketplace/pp/B01LZMLK1K
 
When developing CUDA code you'll want (both are installed as part of the CUDA Software Development Kit):
*NVIDIA Nsight Eclipse Edition for C/C++
*NVIDIA Visual Profiler
 
There's a (probably pretty bad) book which has chapters on working with CUDA in python: https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781789341072/6/ch06lvl1sec55/configuring-pycuda-on-your-python-ide
 
===IDEs===
 
You'll likely want to write your Python in an IDE that supports your CUDA development. It looks like PyCharm CE will work with a local environment:
https://medium.com/@ashkan.abbasi/quick-guide-for-installing-python-tensorflow-and-pycharm-on-windows-ed99ddd9598. But if we use a Docker container, then we'll need PyCharm PE or Visual Studio Code (which should work fine with Python):
https://www.analyticsvidhya.com/blog/2020/08/docker-based-python-development-with-cuda-support-on-pycharm-and-or-visual-studio-code/. Visual Studio Code also allows remote development, using either Docker containers or SSH, which might be very nice!
https://devblogs.microsoft.com/python/remote-python-development-in-visual-studio-code/ I'm not sure whether PyCharm CE has this feature, though PE does: https://www.jetbrains.com/help/idea/configuring-remote-python-sdks.html.
 
===Matlab===
 
If you want to use Matlab then Garland's GPU workshop is a good place to start: http://www.quantosanalytics.org/calpoly/gpu_workshop/index.html
