The full documentation and frequently asked questions are available on the repository wiki.
An introduction to the NVIDIA Container Runtime is also covered in our blog post.
Make sure you have installed the NVIDIA driver and a supported version of Docker for your distribution (see prerequisites).
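A quick sanity check before continuing: both commands below should succeed on the host.

```sh
nvidia-smi          # the NVIDIA driver is installed and sees the GPU(s)
docker --version    # a supported Docker version is installed
```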
If you have a custom `/etc/docker/daemon.json`, the `nvidia-docker2` package might override it.
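If you want to keep your custom configuration, one simple precaution (a sketch, not part of the packaged install steps) is to back the file up before installing and merge the package's changes back in afterwards; the file shipped by the package registers the `nvidia` runtime with the Docker daemon.

```sh
# Back up a custom daemon.json before installing nvidia-docker2
if [ -f /etc/docker/daemon.json ]; then
  sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
fi
```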
Ubuntu installs `docker.io` by default, which is not the latest version of Docker Engine. This means you will need to pin the version of `nvidia-docker`. See more information here.
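One way to pin (a sketch; `apt-cache madison` lists the available versions and `apt-mark hold` freezes a package at its installed version):

```sh
# List the available versions, then install one compatible with your Docker Engine
apt-cache madison nvidia-docker2
sudo apt-get install -y nvidia-docker2=<chosen version>
# Keep apt from upgrading it afterwards
sudo apt-mark hold nvidia-docker2
```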
```sh
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```
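After the daemon reload, you can also check that the `nvidia` runtime was registered:

```sh
# "nvidia" should appear next to the default runc runtime
docker info | grep -i runtime
```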
If you are not using the official `docker-ce` package on CentOS/RHEL, use the next section.
```sh
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
  sudo tee /etc/yum.repos.d/nvidia-docker.repo

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```
If `yum` reports a conflict on `/etc/docker/daemon.json` with the `docker` package, you need to use the next section instead.
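If you are unsure which Docker package is installed, a quick check:

```sh
# "docker" is the distribution package; "docker-ce" comes from Docker's repositories
rpm -q docker-ce 2>/dev/null || rpm -q docker
```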
For `docker-ce` on `ppc64le`, look at the FAQ.
```sh
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.repo | \
  sudo tee /etc/yum.repos.d/nvidia-container-runtime.repo

# Install the nvidia runtime hook
sudo yum install -y nvidia-container-runtime-hook

# Test nvidia-smi with the latest official CUDA image
# You can't use `--runtime=nvidia` with this setup.
docker run --rm nvidia/cuda:9.0-base nvidia-smi
```
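With this hook-based setup there is no `nvidia` runtime to select; GPU visibility is instead controlled by the `NVIDIA_VISIBLE_DEVICES` environment variable, which the hook reads. For example, to expose only the first GPU:

```sh
# Make only GPU 0 visible inside the container
docker run --rm -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda:9.0-base nvidia-smi
```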
Look at the Installation section of the wiki.
New `nvidia-docker` packages have been released for Docker 18.09.2 and 18.06.2, addressing CVE-2019-5736.
Update `nvidia-docker` to address this critical `runc` vulnerability, which allows specially-crafted containers to gain administrative privileges on the host.
```sh
# On Ubuntu/Debian
sudo apt upgrade

# On CentOS/RHEL/Amazon Linux
sudo yum upgrade
```
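To confirm you are running a patched engine afterwards:

```sh
# Should print 18.09.2, 18.06.2, or later
docker version --format '{{.Server.Version}}'
```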
A signed copy of the Contributor License Agreement needs to be provided to digits@nvidia.com before any change can be accepted.
- Please let us know by filing a new issue
- You can contribute by opening a pull request