...
```
$ nvidia-smi
Mon Feb  5 13:01:43 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTXA6000-6C  On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |    512MiB /  5976MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
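The same driver and memory information can also be queried from Python instead of parsing `nvidia-smi` output. The following is a minimal sketch, assuming the `nvidia-ml-py` (pynvml) package has been installed into the environment (for example with `pip install nvidia-ml-py`); it is optional and not part of the required setup.

```python
# Minimal NVML sketch: report driver version, GPU name, and memory usage.
# Assumes the optional nvidia-ml-py (pynvml) package is installed; it mirrors
# the fields shown by nvidia-smi above.
import pynvml

def as_str(value):
    """pynvml returns bytes in older releases and str in newer ones."""
    return value.decode() if isinstance(value, bytes) else value

pynvml.nvmlInit()
try:
    print("Driver:", as_str(pynvml.nvmlSystemGetDriverVersion()))
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {as_str(pynvml.nvmlDeviceGetName(handle))} "
              f"{mem.used // 1024**2}MiB / {mem.total // 1024**2}MiB used/total")
finally:
    pynvml.nvmlShutdown()
```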
...
```
$ nvidia-smi
Mon Feb  5 13:14:45 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTXA6000-6C  On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |    512MiB /  5976MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

$ python3 --version
Python 3.8.18

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

$ whereis cuda
cuda: /usr/local/cuda

$ cat /home/<USERNAME>/miniforge3/envs/ML/include/cudnn.h
.
.
.
/*   cudnn : Neural Networks Library   */

#if !defined(CUDNN_H_)
#define CUDNN_H_

#include <cuda_runtime.h>
#include <stdint.h>

#include "cudnn_version.h"
#include "cudnn_ops_infer.h"
#include "cudnn_ops_train.h"
#include "cudnn_adv_infer.h"
#include "cudnn_adv_train.h"
#include "cudnn_cnn_infer.h"
#include "cudnn_cnn_train.h"
#include "cudnn_backend.h"

#if defined(__cplusplus)
extern "C" {
#endif

#if defined(__cplusplus)
}
#endif

#endif /* CUDNN_H_ */

$ conda list | grep tensorflow
tensorflow                2.13.1   cuda118py38h409af0c_1   conda-forge
tensorflow-base           2.13.1   cuda118py38h52ca5c6_1   conda-forge
tensorflow-estimator      2.13.1   cuda118py38ha2f8a09_1   conda-forge
tensorflow-gpu            2.13.1   cuda118py38h0240f8b_1   conda-forge

$ conda list | grep keras
keras                     2.13.1   pyhd8ed1ab_0             conda-forge

$ python
>>> import tensorflow as tf
>>> tf.test.is_built_with_cuda()
True
>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
>>> print(tf.__version__)
2.13.1

# (OPTIONAL) Check PyTorch
$ python
>>> import torch
>>> print(torch.__version__)           # Print PyTorch version
2.2.0
>>> print(torch.cuda.is_available())   # Check if CUDA is available
True
>>> print(torch.version.cuda)          # Print the CUDA version PyTorch is using
11.8
>>> if torch.cuda.is_available():
...     # Create a tensor and move it to the GPU
...     x = torch.tensor([1.0, 2.0]).cuda()
...     print(x)  # Print the tensor to verify it is on the GPU
... else:
...     print("CUDA is not available. Check your PyTorch installation.")
...
tensor([1., 2.], device='cuda:0')
```
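The interactive checks above can also be collected into a single script that runs a small computation on the GPU, which catches cases where the device is visible but a kernel launch still fails. This is a minimal sketch, assuming the same environment shown above (TensorFlow 2.13 and PyTorch 2.2 built against CUDA 11.8); the file name `gpu_check.py` is only an example.

```python
# gpu_check.py -- quick end-to-end GPU sanity check for the environment above.
# Runs a small matrix multiplication with both TensorFlow and PyTorch so that a
# broken CUDA/cuDNN setup fails loudly instead of silently falling back to CPU.
import tensorflow as tf
import torch

# TensorFlow: confirm a GPU is visible and run a tiny op on it.
gpus = tf.config.list_physical_devices('GPU')
print("TensorFlow", tf.__version__, "GPUs:", gpus)
if gpus:
    with tf.device('/GPU:0'):
        a = tf.random.normal((256, 256))
        print("TF matmul OK, checksum:", float(tf.reduce_sum(tf.matmul(a, a))))

# PyTorch: confirm CUDA is available and run the same kind of op.
print("PyTorch", torch.__version__, "CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    b = torch.randn(256, 256, device="cuda")
    print("Torch matmul OK, checksum:", (b @ b).sum().item())
```

Run it with `python gpu_check.py` inside the activated environment; both checksums should print without any CUDA errors.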
...