Plex docker container with Nvidia transcoding

As part of my drive to make the homelab environment use less electricity (with the current price increases this got even more appealing) I decided to try running Plex on the old HP microserver again. However, I knew that it didn't have the CPU horsepower to transcode media files on the fly, so I started looking at using a GPU to handle the transcoding. I settled on a second-hand Nvidia P400 as it was cheap and fitted in the low-profile slot available on the microserver.

The Dell T620 with 8 drives consumed about 180 watts before adding any load to it, while the microserver with 4 hard drives (and an SSD boot drive) idles at about 50-60 watts. In order to provide the required storage I had to either replace the drives with larger-capacity disks or add an external disk storage unit with smaller drives. I took the latter route as I already had some old drives available, so I only had to obtain another couple of cheap drives as well as the enclosure. With all the drives and the GPU installed, total power at idle is around 75-80 watts, which is a significant saving.

I then copied all the data from the Dell server to the new storage allocated on the microserver using rsync, and then set about creating my docker containers using Portainer stacks. Apart from changing paths, the configs were the same as they had been on the Dell server and micro PC, until it came to the graphics card.
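
The copy itself was nothing fancy; something along these lines does the job and can be re-run later to pick up only new or changed files (the paths here are placeholders, not my actual mount points):

# -a preserves permissions/ownership/timestamps, -P shows progress and allows resuming
# re-running the same command later only transfers files that are new or changed
rsync -aP /mnt/old-storage/media/ user@microserver:/mnt/storage/media/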

Nvidia Driver Installation

This is easy in Ubuntu as the packages are available. I just installed the latest version, as the runtime that is installed later requires at least version 418 and I figured the latest one should, in theory, be the best one :-)

sudo apt install nvidia-driver-515-server

Then follow the prompts, wait for the download and install, and reboot the server to complete the installation.
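
Once the server is back up, running nvidia-smi on the host is a quick way to confirm the driver loaded and can see the card:

# should list the driver version and the P400 if the install worked
nvidia-smi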

Nvidia Runtime Installation

In order for the Plex container to be able to use the Nvidia GPU, the Nvidia runtime needs to be installed. The first step is to add the Nvidia repositories.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Then it's a case of installing the runtime

sudo apt update
sudo apt install -y nvidia-docker2

Restart the docker daemon

sudo systemctl restart docker
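
At this point docker info should list nvidia as an available runtime alongside the default runc, which is a quick way to confirm the package registered itself correctly:

# look for "nvidia" in the Runtimes line
sudo docker info | grep -i runtime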

Now test by running an Nvidia-enabled container

sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

This should produce output similar to the example below

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
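
Since the Plex stack below uses runtime: nvidia rather than the --gpus flag, the same test can also be run through the runtime to make sure that path works too:

sudo docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi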

Docker Config for Nvidia

Now that we have the runtime enabled, it's a case of adding the Nvidia capabilities to the container config. This means adding a couple of extra lines to the Portainer stack (or docker compose config): a runtime entry is added, and the Nvidia capabilities are set in the environment variables. To keep it simple I've set both to all, but I believe a more limited set of capabilities (just the ones needed for GPU transcoding) would also work.

    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    runtime: nvidia
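
For context, here's a rough sketch of how those lines fit into a complete service definition, assuming the linuxserver/plex image (which is what the PUID/PGID/VERSION variables suggest) and example paths for the config and media volumes, so adjust to suit your own setup:

version: "3"
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    network_mode: host
    runtime: nvidia
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
      - NVIDIA_DRIVER_CAPABILITIES=all
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - /path/to/plex/config:/config    # example paths, change to suit
      - /path/to/media:/media
    restart: unless-stopped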

Once I'd tested that playback worked and could see that the GPU was being used for transcoding, I did a final rsync to copy over any new media files and shut down the old server. I then stopped Plex on the micro PC and copied the Plex config over to the correct location on the microserver so that it had all the history etc. for all users.
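
A simple way to confirm the card is actually doing the work is to watch nvidia-smi on the host while playing something that forces a transcode; the Plex transcoder process should appear in the process list along with its GPU memory usage:

# refresh every second while a transcode is running
watch -n 1 nvidia-smi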

The final step was to change the reverse proxy configs to point to the microserver instead of the Dell/micro PC. The total downtime for the services was about 30 minutes, most of which was spent zipping up the Plex database and unzipping it again on the microserver.

Another (newer) micro PC is now the Proxmox server, which runs a couple of services that can be CPU intensive. It probably has nearly as much CPU power as the old Dell server, idles at around 15 watts, and has plenty of headroom for creating virtual machines for testing etc. It doesn't have nearly the same drive capacity as the old Dell server did, though, so VMs with larger storage requirements now have storage presented via NFS from the HP server (slow, but it works), whilst boot drives are on the solid-state storage in the second micro PC.