Deep learning frameworks, such as PyTorch, have dramatically changed the way we build and train machine learning models. One of the advantages of such frameworks is their ability to harness the power of Graphics Processing Units (GPUs) to accelerate computations. NVIDIA’s CUDA, a parallel computing platform and API, plays a critical role in enabling this GPU acceleration.
In this article, we’ll walk through the process of installing PyTorch with CUDA support. This will enable you to train your models faster, leveraging the power of NVIDIA GPUs.
Prerequisites
- NVIDIA GPU: Ensure that you have an NVIDIA GPU installed in your system. Not all NVIDIA GPUs support CUDA, so it’s wise to check NVIDIA’s CUDA GPUs list to confirm compatibility (a quick way to identify your GPU from the command line is shown after this list).
- Operating System: PyTorch runs on Windows, Linux, and macOS, but CUDA-enabled builds are only available for Windows and Linux, and the setup is most streamlined on Linux. This guide focuses on Ubuntu Linux; similar steps apply on other distributions.
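If you’re not sure which GPU you have, you can query it from a terminal before going any further. A minimal check, assuming a Linux system with lspci available:

```bash
# List NVIDIA devices on the PCI bus; the model name is what you look up
# in NVIDIA's CUDA GPUs list.
lspci | grep -i nvidia
```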
Step-by-Step Installation
1. Install NVIDIA Drivers
First, you’ll need the appropriate NVIDIA driver for your GPU:
```bash
sudo apt update
sudo apt install nvidia-driver-XXX
```
Replace `XXX` with the version suitable for your GPU and system. You might need to reboot your system after installing the driver.
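If you’re unsure which driver version to pick, Ubuntu can recommend one for you, and nvidia-smi confirms that the driver is loaded once you’ve rebooted. A short sketch, assuming the ubuntu-drivers tool is available on your system:

```bash
# List the drivers Ubuntu recommends for the detected GPU.
ubuntu-drivers devices

# After installing the driver and rebooting, this should print a table
# showing the driver version and the GPU name.
nvidia-smi
```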
2. Install CUDA Toolkit
Once the drivers are installed, it’s time to install the CUDA toolkit. The CUDA version you choose should match one of the CUDA builds offered for the PyTorch version you plan to install. For this guide, let’s assume we are installing CUDA 11.1:
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install cuda-11-1
```
After installation, add CUDA to your PATH and library path:
```bash
echo 'export PATH=/usr/local/cuda-11.1/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```
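At this point the CUDA compiler should be on your PATH; printing its version is a quick way to confirm the toolkit installed correctly:

```bash
# Should report the CUDA release you installed, e.g. "release 11.1".
nvcc --version
```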
3. Install cuDNN Library
cuDNN is a GPU-accelerated library from NVIDIA that enhances the performance of deep neural network computations.
Visit NVIDIA’s cuDNN Archive and download the version compatible with your CUDA version. Once downloaded, install it:
```bash
tar -xzvf cudnn-X.X-linux-x64-vX.X.X.X.tgz
sudo cp -P cuda/include/cudnn*.h /usr/local/cuda-11.1/include/
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-11.1/lib64/
```
Replace `X.X` with the appropriate cuDNN version numbers.
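To confirm the headers landed where the toolkit expects them, you can print the cuDNN version macros. In cuDNN 8 and later these live in cudnn_version.h, while older releases keep them in cudnn.h, so the glob below covers both:

```bash
# Print the cuDNN major/minor/patch version from the installed headers.
grep -A 2 "CUDNN_MAJOR" /usr/local/cuda-11.1/include/cudnn*.h | head -n 12
```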
4. Install PyTorch with CUDA Support
With the necessary NVIDIA software installed, you can now install PyTorch. Use the official PyTorch website’s installation guide to find the appropriate pip or conda command for your system and CUDA version.
For example, for CUDA 11.1 using pip:
```bash
pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/cu111/torch_stable.html
```
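Once the install finishes, it’s worth checking that pip actually resolved a CUDA build rather than the CPU-only wheel. CUDA wheels carry a +cuXXX suffix in their version string (the exact version number depends on what pip resolved for your system):

```bash
# The Version line should end in a +cu111-style suffix, e.g. 1.9.1+cu111,
# if a CUDA-enabled wheel was installed.
pip show torch | grep -i "^version"
```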
Verifying the Installation
To verify your PyTorch installation with CUDA:
```python
import torch
print(torch.cuda.is_available())
```
If this script prints `True`, then PyTorch has been successfully installed with CUDA support!
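For a slightly richer check, you can also print the CUDA version PyTorch was built against and the name of the detected GPU. A small sketch, assuming at least one CUDA device is visible:

```bash
# Prints the PyTorch version, the CUDA version it was compiled against,
# and the name of GPU 0. Fails if no CUDA device is visible.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))"
```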
Conclusion
You’re all set! With PyTorch and CUDA installed, you’re ready to harness the power of GPUs for your deep learning projects. As both PyTorch and CUDA continue to evolve, it’s essential to regularly check for updates and ensure compatibility between the versions you’re using. Happy coding!