21 comments
dividuum · 19 days ago
I've worked a lot this year on writing DRM/KMS code while porting my digital signage player (https://info-beamer.com) to support the Raspberry Pi 5. Since they moved away from their proprietary Broadcom-provided graphics APIs (OMX/dispmanx), the Pi now fully supports DRM and the implementation is really solid by now.

There is a ton more to learn: KMS (kernel mode setting) allows fine control over the video mode in case you cannot or do not want to rely on auto-detection.
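A minimal sketch of what that looks like with libdrm (assuming /dev/dri/card0 and the classic, non-atomic enumeration calls; error handling omitted):

    /* List every connected output and the modes its display accepts, instead
     * of relying on whatever auto-detection would pick. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        drmModeRes *res = drmModeGetResources(fd);

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn || conn->connection != DRM_MODE_CONNECTED) {
                drmModeFreeConnector(conn);
                continue;
            }
            /* The mode flagged DRM_MODE_TYPE_PREFERRED is the auto-detected one;
             * any other entry can be set explicitly instead. */
            for (int m = 0; m < conn->count_modes; m++) {
                drmModeModeInfo *mode = &conn->modes[m];
                printf("connector %u: %s %ux%u@%u%s\n",
                       conn->connector_id, mode->name,
                       (unsigned)mode->hdisplay, (unsigned)mode->vdisplay,
                       mode->vrefresh,
                       (mode->type & DRM_MODE_TYPE_PREFERRED) ? " (preferred)" : "");
            }
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return 0;
    }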

Then there's the atomic API: Unlike in the blog post, all changes applied to the output (from the video mode to plane positions or assigned framebuffers...) are gathered and then applied atomically in a single commit. If necessary, you can check whether your atomic commit is valid and will work by doing a test-only commit before the real one. Making changes atomically avoids all kinds of race conditions that result in, for example, screen tearing.
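Roughly like this (a sketch; fd, plane_id, crtc_id, fb_id and the property IDs are assumed to have been looked up beforehand via drmModeObjectGetProperties, and DRM_CLIENT_CAP_ATOMIC to have been enabled at startup):

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Apply a new framebuffer to a plane atomically, with a dry run first. */
    static int flip(int fd, uint32_t plane_id, uint32_t crtc_id, uint32_t fb_id,
                    uint32_t fb_id_prop, uint32_t crtc_id_prop)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        drmModeAtomicAddProperty(req, plane_id, fb_id_prop,   fb_id);   /* new framebuffer */
        drmModeAtomicAddProperty(req, plane_id, crtc_id_prop, crtc_id); /* target CRTC */
        /* ...plus CRTC_X/Y/W/H, SRC_X/Y/W/H, MODE_ID, ACTIVE, etc. as needed... */

        /* Dry run: the kernel validates the whole request without touching the hardware. */
        int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        if (ret == 0)
            /* Real commit: every gathered change is applied in one go. */
            ret = drmModeAtomicCommit(fd, req, DRM_MODE_PAGE_FLIP_EVENT, NULL);

        drmModeAtomicFree(req);
        return ret;
    }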

Then there's the interaction with video decoding: Using FFmpeg on the Pi gives you access to the hardware decoder, which produces a DRM framebuffer for each video frame. You can then directly assign those to planes and position them on the screen. The resulting playback is zero-copy and as fast as it gets on the Pi.
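Sketch of that hand-off, assuming the decoder is set up to output AV_PIX_FMT_DRM_PRIME frames (error handling omitted):

    /* Wrap a decoded DRM PRIME frame as a DRM framebuffer usable on a plane. */
    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static uint32_t fb_from_frame(int drm_fd, const AVFrame *frame)
    {
        const AVDRMFrameDescriptor *desc = (const AVDRMFrameDescriptor *)frame->data[0];
        const AVDRMLayerDescriptor *layer = &desc->layers[0];
        uint32_t handles[4] = {0}, pitches[4] = {0}, offsets[4] = {0}, fb_id = 0;
        uint64_t modifiers[4] = {0};

        for (int i = 0; i < layer->nb_planes; i++) {
            const AVDRMObjectDescriptor *obj = &desc->objects[layer->planes[i].object_index];
            /* Import the dmabuf fd exported by the decoder as a GEM handle. */
            drmPrimeFDToHandle(drm_fd, obj->fd, &handles[i]);
            pitches[i]   = layer->planes[i].pitch;
            offsets[i]   = layer->planes[i].offset;
            modifiers[i] = obj->format_modifier;
        }

        /* Wrap the imported buffers in a framebuffer; no copy or blit happens. */
        drmModeAddFB2WithModifiers(drm_fd, frame->width, frame->height,
                                   layer->format, handles, pitches, offsets,
                                   modifiers, &fb_id, DRM_MODE_FB_MODIFIERS);
        return fb_id; /* assign via the plane's FB_ID property in an atomic commit */
    }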

Another fun feature is the writeback connector, which, unlike a connector that ends up as an HDMI signal, writes your composed output into a DRM framebuffer instead. This can, for example, be used to take screenshots of your output or even feed the buffer back into a video encoder.
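Capturing a frame that way is just a few extra properties in an atomic commit (sketch; the writeback connector and its property IDs are assumed to have been looked up beforehand, and DRM_CLIENT_CAP_WRITEBACK_CONNECTORS enabled):

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Capture one composed frame into dst_fb via a writeback connector. */
    static int capture(int fd, uint32_t wb_conn_id, uint32_t crtc_id, uint32_t dst_fb,
                       uint32_t crtc_id_prop, uint32_t wb_fb_prop, uint32_t wb_fence_prop)
    {
        int out_fence = -1;
        drmModeAtomicReq *req = drmModeAtomicAlloc();

        drmModeAtomicAddProperty(req, wb_conn_id, crtc_id_prop, crtc_id);
        drmModeAtomicAddProperty(req, wb_conn_id, wb_fb_prop, dst_fb);
        /* The kernel writes a sync-file fd into out_fence; wait on it before
         * reading or encoding dst_fb. */
        drmModeAtomicAddProperty(req, wb_conn_id, wb_fence_prop,
                                 (uint64_t)(uintptr_t)&out_fence);

        int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        drmModeAtomicFree(req);
        return ret == 0 ? out_fence : ret;
    }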

One very frustrating aspect is that there is basically no real documentation, especially about semantics. I guess it makes sense if you consider that there is probably only a limited number of API consumers (like desktop compositors or specialized video players).


mdp2021 · 19 days ago
In case anyone misinterprets it:

DRM here means Direct Rendering Manager (not, e.g., schemes designed to limit access to content).


maplant · 19 days ago
DRM framebuffers are also the preferred way to interface with Vulkan renderers in GTK. For example, if you wanted to make a game scene editor with GNOME, you could render the scene to a DRM framebuffer and use a GtkGraphicsOffload widget to indicate that it will continue to be updated outside of the event loop.

In practice I've never been able to get this to work. Static images are totally fine, but graphics offloading fails, and manually refreshing the image causes some sort of memory leak on the GPU.
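For reference, the wiring I mean looks roughly like this (a sketch assuming GTK >= 4.14's GdkDmabufTextureBuilder and a Vulkan renderer that exports its color buffer as a dmabuf; make_offloaded_view and its parameters are placeholders):

    #include <gtk/gtk.h>
    #include <drm_fourcc.h>

    /* Wrap an exported dmabuf in a GdkTexture and hand it to GtkGraphicsOffload. */
    static GtkWidget *make_offloaded_view(GdkDisplay *display, int dmabuf_fd,
                                          guint width, guint height,
                                          guint stride, guint64 modifier)
    {
        GdkDmabufTextureBuilder *b = gdk_dmabuf_texture_builder_new();
        gdk_dmabuf_texture_builder_set_display(b, display);
        gdk_dmabuf_texture_builder_set_width(b, width);
        gdk_dmabuf_texture_builder_set_height(b, height);
        gdk_dmabuf_texture_builder_set_fourcc(b, DRM_FORMAT_XRGB8888);
        gdk_dmabuf_texture_builder_set_modifier(b, modifier);
        gdk_dmabuf_texture_builder_set_n_planes(b, 1);
        gdk_dmabuf_texture_builder_set_fd(b, 0, dmabuf_fd);
        gdk_dmabuf_texture_builder_set_stride(b, 0, stride);
        gdk_dmabuf_texture_builder_set_offset(b, 0, 0);

        GdkTexture *texture = gdk_dmabuf_texture_builder_build(b, NULL, NULL, NULL);
        g_object_unref(b);

        GtkWidget *picture = gtk_picture_new_for_paintable(GDK_PAINTABLE(texture));
        /* Hint to GTK that this subtree can be offloaded to its own plane/subsurface. */
        return gtk_graphics_offload_new(picture);
    }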

sylware · 18 days ago
I have not looked deeply into the code, but I know it is important to work with DRM format modifiers in order to use a native (efficient) framebuffer format for the GPU (usually custom hardware tiling).
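As far as I can tell, libdrm itself does no conversion, but each plane does advertise the (format, modifier) pairs it can scan out through its IN_FORMATS property blob, so the native layouts can at least be discovered rather than hard-coded (sketch, error handling omitted):

    #include <stdio.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Print every fourcc + modifier combination a plane accepts. */
    static void dump_in_formats(int fd, uint32_t plane_id)
    {
        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);

        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
            if (strcmp(prop->name, "IN_FORMATS") == 0) {
                drmModePropertyBlobRes *blob =
                    drmModeGetPropertyBlob(fd, props->prop_values[i]);
                struct drm_format_modifier_blob *hdr = blob->data;
                uint32_t *fmts = (uint32_t *)((char *)hdr + hdr->formats_offset);
                struct drm_format_modifier *mods =
                    (void *)((char *)hdr + hdr->modifiers_offset);

                /* Each modifier entry carries a bitmask over a window of the
                 * format list, starting at its 'offset' index. */
                for (uint32_t m = 0; m < hdr->count_modifiers; m++)
                    for (uint32_t b = 0; b < 64; b++)
                        if ((mods[m].formats & (1ull << b)) &&
                            mods[m].offset + b < hdr->count_formats)
                            printf("plane %u: %.4s + modifier 0x%llx\n", plane_id,
                                   (char *)&fmts[mods[m].offset + b],
                                   (unsigned long long)mods[m].modifier);
                drmModeFreePropertyBlob(blob);
            }
            drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
    }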

The hard part is to "blit" from a well-known framebuffer format to that native framebuffer format.

If I recall properly, on AMD GPUs you would use a "DMA engine" which performs the conversion (that may be obsolete by now, and you may have to use the full GPU pipeline with texture image formats instead).

I dunno how much hardware abstraction there is in libdrm (and this is my own damn fault, as I should have dug deeper into the libdrm interface a long time ago): do we have to "know" how to deal with the native format, or is there some (expensive) hardware abstraction to deal with this conversion?


rjsw · 19 days ago