...
NVIDIA tools are available in /usr/local/cuda-11.4/bin/. You can add them to your PATH as follows:
Code Block
$ export PATH=$PATH:/usr/local/cuda-11.4/bin/
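To make the change persistent across sessions, the same export can be appended to ~/.bashrc; adding the CUDA library directory to LD_LIBRARY_PATH is a common companion step (the lib64 path below assumes a default CUDA 11.4 installation):

Code Block
$ echo 'export PATH=$PATH:/usr/local/cuda-11.4/bin/' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.4/lib64' >> ~/.bashrc
$ source ~/.bashrc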
...
Code Block
# change shell to bash for installations
$ bash

# update default packages
$ sudo apt-get update
$ sudo apt-get upgrade

# it's possible to get some update key and dirmngr errors while updating;
# the commands below provide a workaround. After running the workaround,
# run update & upgrade again.
$ sudo apt install dirmngr
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <YOUR-KEY-LIKE-AA16FCBCA621E701>

# install miniforge (or any anaconda manager)
$ wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
$ chmod +x Miniforge3-Linux-x86_64.sh
$ ./Miniforge3-Linux-x86_64.sh
# when the installer asks about conda init, answer yes:
#   Do you wish the installer to initialize Miniforge3
#   by running conda init? [yes|no]
#   [no] >>> yes
$ exit
$ bash
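After the final bash above reloads the shell, the Miniforge installation can be sanity-checked with standard conda commands (the base path shown assumes Miniforge's default install location under your home directory):

Code Block
$ conda --version
conda 23.x.x                     # exact version depends on the installer release
$ conda info --base
/home/<your-user>/miniforge3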
Library installations
Code Block
# create conda environment
$ conda create -n ML python=3.8
# activate the environment
$ conda activate ML
# install packages; note that installing tensorflow-gpu and keras also installs the CUDA toolkit, cuDNN (CUDA Deep Neural Network library), NumPy, SciPy, and Pillow
$ conda install tensorflow-gpu keras
# (OPTIONAL) cudatoolkit is installed automatically along with keras and tensorflow-gpu, but if you need a specific (or the latest) version, run the command below.
$ conda install -c anaconda cudatoolkit
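For example, to see which toolkit versions a channel offers and then pin one of them (the 11.8 shown here is only an illustration; pick a version compatible with your driver):

Code Block
# list available cudatoolkit builds on the channel
$ conda search -c anaconda cudatoolkit
# pin a specific version
$ conda install -c anaconda cudatoolkit=11.8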
Confirmation of installations
Code Block
$ nvidia-smi
Mon Jan 8 10:24:59 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03 Driver Version: 470.161.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTXA6000... On | 00000000:00:05.0 Off | 0 |
| N/A N/A P8 N/A / N/A | 3712MiB / 48895MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
$ python3 --version
Python 3.8.18
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0
$ whereis cuda
cuda: /usr/local/cuda
$ cat /home/yigit/miniforge3/envs/ML/include/cudnn.h
.
.
.
/* cudnn : Neural Networks Library
*/
#if !defined(CUDNN_H_)
#define CUDNN_H_
#include <cuda_runtime.h>
#include <stdint.h>
#include "cudnn_version.h"
#include "cudnn_ops_infer.h"
#include "cudnn_ops_train.h"
#include "cudnn_adv_infer.h"
#include "cudnn_adv_train.h"
#include "cudnn_cnn_infer.h"
#include "cudnn_cnn_train.h"
#include "cudnn_backend.h"
#if defined(__cplusplus)
extern "C" {
#endif
#if defined(__cplusplus)
}
#endif
#endif /* CUDNN_H_ */
$ conda list | grep tensorflow
tensorflow 2.13.1 cuda118py38h409af0c_1 conda-forge
tensorflow-base 2.13.1 cuda118py38h52ca5c6_1 conda-forge
tensorflow-estimator 2.13.1 cuda118py38ha2f8a09_1 conda-forge
tensorflow-gpu 2.13.1 cuda118py38h0240f8b_1 conda-forge
$ conda list | grep keras
keras 2.13.1 pyhd8ed1ab_0 conda-forge
$ python
>>> import tensorflow as tf
>>> tf.test.is_built_with_cuda()
True
>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
>>> print(tf.__version__)
2.13.1
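Beyond the checks above, a quick way to confirm that TensorFlow actually runs work on the GPU is to execute a small operation with device-placement logging enabled. The snippet below is a minimal sketch (the matrix sizes are arbitrary):

Code Block
import tensorflow as tf

# print the device each op is placed on
tf.debugging.set_log_device_placement(True)

# a small matrix multiplication; with a visible GPU the placement log
# should show it running on /device:GPU:0
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
c = tf.matmul(a, b)
print(c.device)   # e.g. /job:localhost/replica:0/task:0/device:GPU:0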
Using Docker
If you want to use GPUs in Docker, you need to take a few extra steps after creating the VM. Start by installing Docker and adding your user to the docker group:

Code Block
sudo apt install -y docker.io
sudo usermod -aG docker $USER
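Note that the usermod group change only takes effect for new login sessions; to pick it up in the current shell and verify that Docker runs without sudo, something like the following can be used:

Code Block
$ newgrp docker
$ docker run hello-world    # should complete without sudo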
...