You need Python 3.7 - 3.10 installed; Tensorflow 2.8 does not support other versions.
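If you are unsure which version you have, a quick check from the terminal:

# Print the active Python version; it should fall in the supported range
python3 --version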
It is highly recommended to install Tensorflow inside a virtual environment. Here is a simple example using virtualenv:
python3 -m pip install virtualenv
virtualenv gpu
source gpu/bin/activate
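As a quick sanity check (assuming the environment is named gpu, as in the example above), you can confirm that python3 and pip now resolve inside the environment before installing anything into it:

# Both commands should report paths inside the gpu/ directory while the environment is active
which python3
python3 -m pip --version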
CentOS 7
Install the prerequisites and Tensorflow 2.8:
sudo yum -y install epel-release
sudo yum update -y
sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel bzip2-devel libffi-devel xz-devel
python3 -m pip install tensorflow==2.8.0

# Add the NVIDIA CUDA repository, then install the CUDA toolkit and cuDNN 8
OS=rhel7 && \
sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.repo && \
sudo yum clean all && \
sudo yum install -y cuda
sudo yum install libcudnn8.x86_64 libcudnn8-devel.x86_64
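Before moving on, it can help to verify that the NVIDIA driver and the cuDNN libraries are visible to the system; a minimal check, assuming the driver is already installed and loaded:

# List the driver version and the GPUs the system can see
nvidia-smi
# Confirm the cuDNN shared libraries are known to the dynamic linker
ldconfig -p | grep libcudnn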
Then check if Tensorflow sees the GPUs by running:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
You should see an output similar to this one:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Ubuntu 20.04
Begin by installing the nvidia-cuda-toolkit:
sudo apt install nvidia-cuda-toolkit
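You can confirm which CUDA version the Ubuntu package provides by querying the compiler it ships with:

# Print the version of the CUDA toolkit installed by apt
nvcc --version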
After installing the nvidia-cuda-toolkit, you can install cuDNN 7.6.5 by downloading it from this link. You'll be asked to log in or create an NVIDIA account. After logging in and accepting the terms of the cuDNN Software License Agreement, you will see a list of available cuDNN downloads.
Once downloaded, untar the file, copy its contents into your CUDA libraries, and update the permissions:
tar -xvzf cudnn-XX-linux-x64-vXX.tgz
sudo cp cuda/include/cudnn.h /usr/lib/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/lib/cuda/lib64/
sudo chmod a+r /usr/lib/cuda/include/cudnn.h /usr/lib/cuda/lib64/libcudnn*
echo 'export LD_LIBRARY_PATH=/usr/lib/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/lib/cuda/include:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
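Before installing Tensorflow, it is worth double-checking that the header and libraries ended up where the commands above put them and that the library path was updated (adjust the paths if your CUDA install lives elsewhere):

# The copied header and libraries should be listed here
ls /usr/lib/cuda/include/cudnn.h
ls /usr/lib/cuda/lib64/ | grep libcudnn
# The two paths appended to ~/.bashrc should appear in the library path
echo $LD_LIBRARY_PATH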
Now install Tensorflow with pip:
python3 -m pip install tensorflow==2.8.0
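A quick way to confirm the install succeeded is to print the package version from the command line:

# Should print 2.8.0, the version pinned above
python3 -c "import tensorflow as tf; print(tf.__version__)"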
Then check if Tensorflow sees the GPUs by running:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
You should see an output similar to this one:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
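For an end-to-end check that computation actually runs on the GPU, you can execute a small operation with device placement logging turned on (a minimal sketch; it assumes at least one GPU showed up in the list above):

# Runs a small reduction; with a working setup, the log shows the ops placed on GPU:0
python3 -c "import tensorflow as tf; tf.debugging.set_log_device_placement(True); print(tf.reduce_sum(tf.random.normal([1000, 1000])))"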