Running Jellyfin in an unprivileged Proxmox LXC container
This is a guide I took and modified from somebody's Reddit post. They had used Ubuntu for the LXC container, but I will be using Alpine Linux instead.
That post also referenced this page for more insight on ID mapping: https://bookstack.swigg.net/books/linux/page/lxc-gpu-access
Install drivers on Proxmox host
apt install vainfo
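This only installs the vainfo diagnostic tool; the i915 kernel driver for Intel iGPUs already ships with the Proxmox kernel. A quick sanity check on the host before touching the container (device names here assume an Intel iGPU):

ls -l /dev/dri
# expect card0 and renderD128 character devices
vainfo
# should print the supported VA-API profiles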
Create an unprivileged LXC container

Mount media folder
We mount the folder via NFS on the Proxmox host, then bind-mount it into the LXC container.
Why? Because mounting NFS/CIFS inside an unprivileged container is both a pain in the ass and insecure.
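For reference, a minimal sketch of the host-side mount. The NAS address and export path are hypothetical stand-ins; adding the share as NFS storage in the Proxmox GUI works too and creates the /mnt/pve/<name> mount point automatically:

# /etc/fstab on the Proxmox host
192.168.1.10:/volume1/video  /mnt/pve/nas-video  nfs  defaults  0  0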
Edit the LXC conf file /etc/pve/lxc/xxx.conf:

...
+ mp0: /mnt/pve/nas-video,mp=/mnt/video
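If you prefer the CLI, the same mount point can be added with pct (xxx being the container ID):

pct set xxx -mp0 /mnt/pve/nas-video,mp=/mnt/video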
You should add the following lines to /etc/subgid; they allow root to map those host GIDs (44 for video, 103 for render) into the container.

vi /etc/subgid
+ root:44:1
+ root:103:1
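Each /etc/subgid entry reads user:first-GID:count, so root:44:1 lets root map exactly one GID, 44. The values 44 and 103 are this guide's host GIDs for video and render; verify them on your own host, since they can differ:

getent group video render
# e.g. video:x:44: and render:x:103: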
Then you'll need to create the ID mappings. Since you're only remapping groups, the UID mapping can be done in a single line, shown as the first added line below. It can be read as "remap 65,536 of the LXC guest namespace UIDs, 0 through 65,535, to a range on the host starting at 100,000." You can tell this line relates to UIDs because of the u denoting users. It isn't necessary to edit /etc/subuid because that file already gives root permission to perform this mapping.
You have to do the same thing for groups, which is the same concept but slightly more verbose. In this example, /etc/group in the LXC guest shows that video and render have GIDs 44 and 107 (on the host they are 44 and 103). You'll use g to denote GIDs, but everything else is the same, except that the custom mappings must cover the whole range of GIDs, so more lines are required. The only tricky part is the second-to-last line, which maps the LXC guest namespace GID for render (107) to the host GID for render (103), because the two namespaces use different GIDs for that group.
Edit the LXC conf file /etc/pve/lxc/xxx.conf:

...
mp0: /mnt/pve/nas-video,mp=/mnt/video
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
+ lxc.idmap: u 0 100000 65536
+ lxc.idmap: g 0 100000 44
+ lxc.idmap: g 44 44 1
+ lxc.idmap: g 45 100045 62
+ lxc.idmap: g 107 103 1
+ lxc.idmap: g 108 100108 65428
...
With some comments for understanding (don't put the comments in the actual LXC conf file):

+ lxc.idmap: u 0 100000 65536
  // map UIDs 0-65535 (LXC namespace) to 100000-165535 (host namespace)
+ lxc.idmap: g 0 100000 44
  // map GIDs 0-43 (LXC namespace) to 100000-100043 (host namespace)
+ lxc.idmap: g 44 44 1
  // map GID 44 (video) to be the same in both namespaces
+ lxc.idmap: g 45 100045 62
  // map GIDs 45-106 (LXC namespace) to 100045-100106 (host namespace)
  // 106 is the group right before the render group (107) in the LXC container
  // 62 = 107 (render group in LXC) - 45 (start group for this mapping)
+ lxc.idmap: g 107 103 1
  // map GID 107 (render in LXC) to 103 (render on the host)
+ lxc.idmap: g 108 100108 65428
  // map GIDs 108-65535 (LXC namespace) to 100108-165535 (host namespace)
  // 108 is the group right after the render group (107) in the LXC container
  // 65428 = 65536 (total number of GIDs) - 108 (start group for this mapping)
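After restarting the container, you can verify the mapping took effect: the device nodes inside the guest should carry the guest's video and render groups (the owner shows as nobody, which is expected since host UID 0 isn't mapped):

# inside the container
ls -l /dev/dri
# crw-rw---- 1 nobody video  226,   0 ... card0
# crw-rw---- 1 nobody render 226, 128 ... renderD128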
Add root to Groups
Because root's UID and GID in the LXC guest's namespace aren't mapped to root on the host, you'll have to add any users in the LXC guest that need the devices to the video and render groups. For example, to give root in the LXC guest access to the devices:

usermod -aG render,video root
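A quick sanity check; the exact IDs depend on your distro, but video and render should both appear:

id root
# uid=0(root) gid=0(root) groups=0(root),44(video),107(render)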
Prepare Jellyfin env

Install drivers
curl -s https://repositories.intel.com/graphics/intel-graphics.key | apt-key add -
echo 'deb [arch=amd64] https://repositories.intel.com/graphics/ubuntu focal main' > /etc/apt/sources.list.d/intel-graphics.list
apt update

INTEL_LIBVA_VER="2.13.0+i643~u20.04"
INTEL_GMM_VER="21.3.3+i643~u20.04"
INTEL_iHD_VER="21.4.1+i643~u20.04"

apt-get update && apt-get install -y --no-install-recommends \
  libva2="${INTEL_LIBVA_VER}" \
  libigdgmm11="${INTEL_GMM_VER}" \
  intel-media-va-driver-non-free="${INTEL_iHD_VER}" \
  mesa-va-drivers

apt install vainfo
Running vainfo should now work:
error: can't connect to X server!
libva info: VA-API version 1.13.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_13
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.13 (libva 2.13.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 21.4.1 (be92568)
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointFEI
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
Create the user that will run Jellyfin
useradd -m gauth
usermod -aG render,video gauth
# optionally:
usermod -aG sudo gauth
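You can check that VA-API works for the unprivileged user before going further:

su - gauth -c vainfo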
At this point, vainfo should run properly as the new user.

Install Jellyfin
Then you can install Jellyfin natively or through Docker. I personally use the linuxserver.io Docker image; a minimal run command is sketched below.
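A sketch, not a drop-in command: the host config dir /opt/jellyfin/config is a hypothetical stand-in, /mnt/video is the media mount point from earlier, and PUID/PGID must match the gauth user (check with id gauth):

docker run -d --name jellyfin \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=Etc/UTC \
  -p 8096:8096 \
  -v /opt/jellyfin/config:/config \
  -v /mnt/video:/data/video \
  --device /dev/dri:/dev/dri \
  lscr.io/linuxserver/jellyfin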
Note for the Linuxserver Docker image
In this setup, the image's init script won't detect the character device files correctly, so the proper groups are never set and, ultimately, transcoding doesn't work (https://github.com/linuxserver/docker-jellyfin/issues/150).
To work around this, create a custom init script for the image, e.g. /.../jellyfin/config/custom-cont-init.d/90-add-group:
#!/usr/bin/with-contenv bash

# Character devices (-type c) that may be used for hardware acceleration
FILES=$(find /dev/dri /dev/dvb /dev/vchiq /dev/vc-mem /dev/video1? -type c -print 2>/dev/null)

for i in $FILES
do
    if [ -c "$i" ]; then
        VIDEO_GID=$(stat -c '%g' "$i")
        # Skip if abc (the image's service user) is already in the device's group
        if ! id -G abc | grep -qw "$VIDEO_GID"; then
            VIDEO_NAME=$(getent group "${VIDEO_GID}" | awk -F: '{print $1}')
            # If no group exists for this GID, create one with a random name
            if [ -z "${VIDEO_NAME}" ]; then
                VIDEO_NAME="video$(head /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c8)"
                echo "Creating group $VIDEO_NAME with id $VIDEO_GID"
                groupadd "$VIDEO_NAME"
                groupmod -g "$VIDEO_GID" "$VIDEO_NAME"
            fi
            echo "Adding group $VIDEO_NAME to abc"
            usermod -a -G "$VIDEO_NAME" abc
            # Warn if the device node isn't group read/write
            if [ "$(stat -c '%A' "${i}" | cut -b 5,6)" != "rw" ]; then
                echo -e "**** The device ${i} does not have group read/write permissions, which might prevent hardware transcode from functioning correctly. To fix it, you can run the following on your docker host: ****\nsudo chmod g+rw ${i}\n"
            fi
        fi
    fi
done
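Make the script executable and restart the container so it runs at init; you should then see the "Adding group ..." lines in the container logs (this assumes the container is named jellyfin):

chmod +x /.../jellyfin/config/custom-cont-init.d/90-add-group
docker restart jellyfin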