Release Notes - 31 Video decoding and playback

From MythTV Official Wiki


There have been significant changes in the decoding and playback of video for the 0.31 release.

Please read these notes carefully to ensure you get the best performance and quality from your system.

OpenGL is now a requirement

As of version 0.31, all video playback requires a working OpenGL installation.

This maximises performance, quality and consistency of display across a wide range of devices and platforms. It also allows MythTV to efficiently display video from the variety of hardware accelerated video decoders that are supported.

There are no strict requirements on the OpenGL version: any version from OpenGL 2.0 or OpenGL ES 2.0 upwards should work. In the future, more modern versions of OpenGL may be required for some advanced features, but the intention is that these will remain optional.

If OpenGL is not available when starting Mythfrontend, MythTV will fall back to Qt painting to display the GUI and a notification will be displayed that OpenGL (and hence video playback) is not available.

Video display profiles

With the move to OpenGL, decoder changes and significant changes to deinterlacing preferences, the Video Display Profile settings pages have changed significantly.

All new and existing users are strongly advised to review their video display profiles. This includes checking the detailed settings within each profile.

As part of the update to version 0.31, all video display profiles stored in the database have been modified/upgraded to ensure they still function with the new setup. The update attempts to make sensible decisions about the appropriate new settings and in the vast majority of cases, this should be seamless.

  • There will however be cases where an updated profile needs editing. This is most likely to impact users who previously used VDPAU and/or VAAPI for rendering of software decoded video; this use case is no longer supported.

Video decoders

Full accelerated hardware decoding is now available with the following interfaces:-

  • VideoToolBox (replaces VDA)
  • Video4Linux2 Codecs (V4L2-codecs)
  • MMAL
  • MediaCodec

The following hardware decoders have been removed:-

  • OpenMax
  • CrystalHD

There is also an experimental DRM-PRIME decoder for certain Rockchip based devices that is largely untested.

Please note that there is no hardware decoding on Windows at present (and indeed no functioning Windows build). This is expected to be fixed for 0.32.

The following table summarises the capabilities of each decoder; it should not be considered complete and actual decoder capabilities will vary depending on the drivers and hardware available. The default FFmpeg software decoding support is shown as a reference.

Note: The full list of supported hardware codecs available for a system (including any known resolution constraints) is now shown in the logs and on the 'System status' page ('Video decoders' tab) in the Mythfrontend GUI. Furthermore, the video display profiles will no longer offer any decoder selections that are not available at run time (e.g. VDPAU support is widely available in Linux builds, but VDPAU decoders will only be displayed if the functionality tests are passed).

The table compared the following decoders: FFmpeg (software decode), VAAPI (1), VDPAU (2), NVDEC (3), VideoToolbox (4), V4L2-Codecs (5), MMAL (6) and MediaCodec (7). The codecs and profiles covered were:

  • MPEG2: Simple
  • MPEG4: Simple, Advanced Simple
  • H264: Baseline, Extended, Extended High, Constrained High, High 422, High 444 (8)
  • VC1: Simple
  • HEVC (9): Main
  • VP9: Level 0, Level 2
  • AV1 (10)

[The per-cell support indicators of the original table have not survived conversion; a '?' marked combinations whose support was uncertain.]

(1) VAAPI support varies considerably depending on the chipset in use. The reference system uses a Coffee Lake chipset.

  • Note: VAAPI emulated via VDPAU is not supported - please use VDPAU directly.
  • Note: The 'VAAPI2' decoder available in version 0.30 is equivalent to 'VAAPI Decode Only' in version 0.31.

(2) Best case VDPAU support is shown based on up to date drivers and a modern GPU. Older chipsets and drivers will have a restricted range of support. Some of the oldest supported GPUs have additional, somewhat random size constraints when decoding H264 material.

  • Note: VDPAU decoding with AMD GPUs has had limited testing and there are reports that it is not working (but VAAPI does).

(3) NVDEC support is largely confined to more modern NVidia GPUs and actual support levels vary considerably.

(4) VideoToolbox support has not been widely tested and again actual device capabilities will vary.

  • Note: There is a known limitation whereby hardware decoding will fail when the stream changes and MythTV will fall back to software decoding until another stream change; this is a generic issue with OSX hardware decoding with no workable solution at present.

(5) On the Raspberry Pi 4, only H264 decoding is supported. Raspberry Pi 2 and 3 have additional decoding for MPEG2 and VC1 with a licence. Other devices will generally support H264 and possibly other codecs (e.g. MPEG2).

(6) Codec support for MMAL on the Raspberry Pi is the same as V4L2-Codecs.

(7) Every Android device is different and the best case support shown is based on support in FFmpeg.

  • Note: 'Direct rendering' of MediaCodec decoded video has a number of issues and the 'decode only' version is probably a more stable option for the time being.

(8) VDPAU reports support for H264 444 but it is unlikely to work.

(9) All decoders (including FFmpeg) struggle with decoding interlaced HEVC material and it should not be considered supported.

(10) MythTV must be compiled with either libaom or libdav1d support to decode AV1. libdav1d appears to offer faster and more capable decoding but is not generally available with most distributions.

For each decoder, there are two possible decoder selections. The default, and recommended option, displays the decoded frames directly using OpenGL. In the Video display profile settings, there will also be another option marked as 'Decode only' (e.g. 'VAAPI acceleration (decode only)'). These options are largely intended for future improvements and, with the exception of MediaCodec, are not recommended for general use unless there is a specific need for them. For example, automatic letterbox detection does not work with normal hardware decoders but will work with the 'decode only' options. Using 'decode only' will reduce playback performance significantly.


Deinterlacing

Deinterlacing support has been substantially changed for version 0.31 - both in terms of user setup and the underlying handling of deinterlacing.

Previously, within each Video Display Profile, a specific deinterlacer was selected for single rate and double rate deinterlacing.

This has been replaced with preferences for deinterlacing quality (None, Low, Medium and High) and where deinterlacing takes place - either in software, within the OpenGL shaders or using hardware specific deinterlacers (aka 'Driver' deinterlacers - as available with hardware decoders such as VAAPI and VDPAU).

'Driver' deinterlacers are always preferred if selected and available, and OpenGL deinterlacers are always preferred over software deinterlacers if selected. There is no explicit selection for software deinterlacers as they are assumed to be available and are the default/fallback option.

This change in approach allows the underlying code to be more flexible in its choice of deinterlacer (when a fallback is needed) and better accommodates the range of deinterlacing and decoder choices available.
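The preference resolution described above can be sketched as follows. This is an illustrative model only, not MythTV's actual code; the function name and flags are assumptions.

```python
# Hypothetical sketch of the deinterlacer preference order: 'Driver'
# deinterlacers win when preferred and available, OpenGL beats software,
# and software is the assumed-always-present fallback.

def choose_deinterlacer(quality, prefer_driver, prefer_opengl,
                        driver_available, opengl_available):
    """Return which class of deinterlacer would be used."""
    if quality == "None":
        return None                  # deinterlacing disabled
    if prefer_driver and driver_available:
        return "driver"              # e.g. VAAPI VPP or VDPAU deinterlacers
    if prefer_opengl and opengl_available:
        return "opengl"              # GLSL shader deinterlacers
    if driver_available and not opengl_available:
        return "driver"              # only viable option (e.g. VDPAU frames)
    return "software"                # default/fallback option
```

With VDPAU decoded frames, for example, only the driver class is viable, so any non-'None' quality resolves to the driver deinterlacers regardless of preference.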

This is best explained with a few examples:-

- when VAAPI is used for decoding, frames can normally be deinterlaced either in OpenGL or by VAAPI. If any deinterlacing quality is selected, VAAPI will be preferred if 'Prefer driver deinterlacers' is selected or if no preference is made (as software deinterlacing cannot be used). OpenGL will be used if preferred and driver deinterlacers are not. Older Intel hardware does not support VAAPI Post Processing (VPP) and in this case OpenGL will always be used.
- for software decoding, either software or OpenGL deinterlacing is available. Select 'Prefer OpenGL deinterlacers' to use OpenGL (and reduce CPU load).
- when using VDPAU to decode video, VDPAU video frames can only be deinterlaced using the VDPAU hardware deinterlacers. Selecting any deinterlacer quality other than 'None' tells MythTV that deinterlacing should be enabled. The VDPAU deinterlacers will then be used (in this case regardless of whether 'Prefer driver deinterlacers' was selected, as there is no other viable deinterlacing option). If VDPAU decoding is not available for a given video, MythTV will still note that deinterlacing is requested and fall back to an appropriate software deinterlacer, or an OpenGL deinterlacer if preferred. Note: it is useful to select appropriate preferences even if at first glance they appear irrelevant - they may still be useful hints if a fallback option is needed.
- when using VAAPI for decoding only (not generally recommended), frames can be deinterlaced using VAAPI in the decoder, in software or in OpenGL. Driver preferences trump OpenGL preferences, which in turn trump software deinterlacing. If VAAPI VPP deinterlacing is not available, the next best option will be selected (typically OpenGL if preferred). Due to restrictions in FFmpeg's yadif software deinterlacer, these frames cannot be deinterlaced using the high quality software deinterlacer and the code will fall back to using OpenGL (high quality).

As in previous versions of MythTV, the 'double rate' deinterlacer selections are preferred when the display can support the additional frames. The single rate selections are used when the display cannot support the required frame rate or when Time Stretch is in use. If no 'double rate' deinterlacer is selected, the single rate selection is used - and if both are disabled, no deinterlacing will take place.
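The rate selection just described can be sketched as a small decision function. This is an illustration of the rules above, not MythTV's implementation; names are assumed.

```python
# Hypothetical sketch: double rate deinterlacing outputs two frames per
# interlaced frame, so the display must sustain twice the video rate,
# and it is skipped while Time Stretch is active.

def select_rate(single_enabled, double_enabled, video_fps,
                display_max_fps, time_stretch):
    """Return 'double', 'single' or 'off' for the current playback."""
    if double_enabled and not time_stretch and display_max_fps >= 2 * video_fps:
        return "double"
    if single_enabled or double_enabled:
        return "single"                 # fall back to the single rate selection
    return "off"                        # both disabled: no deinterlacing
```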

All of the old, software based video filters have been removed for 0.31 and replaced with either FFmpeg filters (which offer better format and cross platform support) or custom, optimised deinterlacers.

Available deinterlacers for each decoder, listed as Low / Medium / High quality within the Software, OpenGL and Driver classes ('-' = not available; classes not listed offer no deinterlacers for that decoder):

  • FFmpeg: Software Onefield / Linearblend (5) / Yadif (4); OpenGL Onefield / Linearblend / Kernel
  • VDPAU: Driver Onefield / Temporal / Temporal Spatial
  • VAAPI (EGL): OpenGL Onefield / Linearblend / Kernel; Driver Onefield / Adaptive / Compensated
  • VAAPI (EGL, no VPP): OpenGL Onefield / Linearblend / Kernel
  • VAAPI (GLX): Driver Onefield / - / -
  • VideoToolbox: OpenGL Onefield / Linearblend / Kernel
  • MMAL: OpenGL Onefield / Linearblend / Kernel
  • V4L2 Codecs (1): OpenGL EGL Onefield / - / -
  • DRM-PRIME (1): OpenGL EGL Onefield / - / -
  • MediaCodec (2): Driver Potluck!
  • NVDEC (3): Driver Onefield / Adaptive / Adaptive
  • VDPAU (decode only): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel
  • VAAPI (decode only, with VPP): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel; Driver Onefield / Adaptive / Compensated
  • VAAPI (decode only, no VPP): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel
  • VideoToolbox (decode only): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel
  • MMAL (decode only): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel
  • V4L2 Codecs (decode only): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel
  • MediaCodec (decode only) (2): Driver Potluck!
  • NVDEC (decode only) (3): Software Onefield / Linearblend / -; OpenGL Onefield / Linearblend / Kernel; Driver Onefield / Adaptive / Adaptive

(1) A custom, fast onefield deinterlacer is used for deinterlacing when using EGL DMA-BUF support. On OpenGL ES3.0 this could use regular OpenGL deinterlacers but the existing approach is used for performance reasons.

(2) MediaCodec may, or may not, deinterlace whatever it considers to be interlaced material. This is entirely dependent on the Android version, the device and the number of stars in the multiverse.

(3) NVDEC driver deinterlacing is requested once when playback starts - and cannot then be turned off.

(4) The yadif deinterlacer supports multi-threading and will use the 'Max CPUs' setting from the current video display profile to select the number of threads to use.

(5) On most setups, the software linearblend deinterlacer is actually faster than the onefield version and offers better quality.

Note: To confirm which deinterlacer is in use, either enable '-v playback' logging and check the logs for Mythfrontend or bring up the 'Debug OSD' screen during playback (Menu->Playback->Playback data or bind a key to the 'DEBUGOSD' action). Note however that not all themes have been updated to display the current deinterlacer during playback.

Note: Previous versions of MythTV assumed most video material was interlaced and switched off deinterlacing when progressive material was detected. With improvements in FFmpeg, this has now been reversed. All material is assumed to be progressive and deinterlacing is enabled once interlaced frames are detected. With some broadcast material, this can lead to confusion as deinterlacing may not be switched on for a few seconds (or occasionally longer) despite the expectation that the video is interlaced.

  • There is now improved support for mixed progressive/interlaced material (H264 only) and deinterlacing will be toggled when a change is seen.
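The new detection behaviour amounts to a simple latch: assume progressive, and switch deinterlacing on once interlaced frames are seen. The sketch below is illustrative only and is not MythTV's player code.

```python
# Hypothetical model of the 0.31 behaviour: material starts as assumed
# progressive and deinterlacing is enabled when interlacing is detected.

def scan_frames(frames, deinterlace_enabled=False):
    """frames: iterable of booleans (True = frame flagged interlaced).
    Returns the deinterlacing state in effect for each frame."""
    states = []
    for interlaced in frames:
        if interlaced:
            deinterlace_enabled = True   # latch on once interlacing is seen
        states.append(deinterlace_enabled)
    return states
```

This also explains the few seconds of confusion with some broadcast material: the first frames play without deinterlacing until detection kicks in.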

Video frame formats

Previous versions of MythTV only supported displaying frames in the YUV420P frame format - which has been the de facto standard for many years for broadcast and media based (i.e. DVD, BluRay) video.

All other formats were converted to YUV420P prior to display. This led to a potential loss of quality and added an additional processing stage which reduced performance.

The OpenGL renderer can now display all of the common YUV frame formats - YUV420P, YUV422P, YUV444P and NV12 - in all bit depths (e.g. YUV420P10, P010 etc).

This improves performance, allows for lossless display of higher bit depth material (see below) and integrates well with hardware decoders that typically return NV12/P010 frame formats when decoding.

Note: OpenGL ES2 only supports display of YUV420P, YUV422P and YUV444P formats. All other formats will be converted in software to one of these types.

Note: When decoding for preview generation, commercial flagging etc, the code will still always convert to YUV420P.
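The storage cost of these formats follows directly from their chroma subsampling and bit depth. The sketch below assumes the common layouts (planar YUV, and NV12/P010 with an interleaved chroma plane) and is purely illustrative.

```python
# Frame buffer size for common YUV formats (an illustrative sketch,
# not MythTV's allocator). Chroma cost relative to luma:
#   4:2:0 (YUV420P, NV12, P010): two quarter-size chroma planes -> +50%
#   4:2:2 (YUV422P):             two half-size chroma planes    -> +100%
#   4:4:4 (YUV444P):             two full-size chroma planes    -> +200%

def frame_size(fmt, width, height):
    bytes_per_sample = 2 if fmt in ("YUV420P10", "P010") else 1
    luma = width * height * bytes_per_sample
    if fmt in ("YUV420P", "YUV420P10", "NV12", "P010"):
        chroma = luma // 2
    elif fmt == "YUV422P":
        chroma = luma
    elif fmt == "YUV444P":
        chroma = luma * 2
    else:
        raise ValueError(fmt)
    return luma + chroma
```

Displaying these formats natively avoids the conversion pass that earlier versions needed to reach YUV420P.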

Video renderers

When software decoding, two OpenGL Video renderer options are available; 'OpenGL' and 'OpenGL YV12'.

'OpenGL' will convert YUV420P frame formats to an intermediate format before uploading to an OpenGL texture for display. This adds a little software overhead, but its main advantage is that it speeds up OpenGL deinterlacing. Hence it may be useful when the majority of content is interlaced and the OpenGL hardware is old - and slow. If the frame format is not YUV420P, the renderer will always fall back to 'OpenGL YV12'. Note: this approach is likely to be removed for MythTV version 0.32.

'OpenGL YV12' should be considered the default OpenGL renderer. It takes video frames and uploads them to OpenGL textures for display - unaltered.

When hardware decoding is selected, the only available Video renderer is 'OpenGL Hardware'. In the event that hardware decoding is not available or fails, this will always fall back to 'OpenGL YV12'.

Video colourspaces

Colourspace conversion has been re-written for version 0.31.

A single conversion matrix is used in the OpenGL shaders to handle the conversion to RGB (for display) as well as processing all picture adjustments (brightness, contrast, colour and hue). This also applies to most hardware decoders although some hardware decoders may have limited support.
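The single-transform idea can be illustrated with the standard BT.709 studio-range conversion, with the picture controls applied alongside the matrix. The coefficients are the standard BT.709 ones; how the picture controls are folded in here is an illustrative assumption, not MythTV's shader code.

```python
# YCbCr (BT.709, limited/studio range, 8 bit) -> RGB conversion with
# brightness and contrast applied in the same pass, as the shaders do.

def ycbcr_to_rgb(y, cb, cr, brightness=0.0, contrast=1.0):
    # Expand studio range (Y: 16..235, C: 16..240) and centre chroma
    yn = (y - 16) / 219.0
    pb = (cb - 128) / 224.0
    pr = (cr - 128) / 224.0
    # Illustrative picture controls: applied to luma before the matrix
    yn = yn * contrast + brightness
    # Standard BT.709 conversion coefficients
    r = yn + 1.5748 * pr
    g = yn - 0.1873 * pb - 0.4681 * pr
    b = yn + 1.8556 * pb
    clamp = lambda v: min(max(v, 0.0), 1.0)
    return tuple(clamp(c) for c in (r, g, b))
```

Doing all of this in one shader pass avoids repeated quantisation between separate adjustment stages.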

An additional conversion is also available when the video colourspace differs from the display colourspace.

The behaviour of this additional stage is controlled by a new setting 'Primary colourspace conversion' (Setup->Video->Playback->Advanced Playback Settings).

This defaults to 'Auto' and most users will have no need to change from this default.

When 'Auto' is selected, the primary colourspace conversion will be enabled when the display colourspace is significantly different from the video. This typically means it will be enabled for High Dynamic Range (HDR) material. For most 'traditional' colourspaces, the differences between the video and display colourspaces are small enough that most people cannot tell the difference - and the extra processing stage is ignored. (This generally means displaying any Standard Range video (SDR) on traditional SDR displays - most of which use the Rec. 709 colourspace).

When 'Exact' is selected, the conversion is enabled whenever there is a difference between the video and the display, regardless of how small that difference may be.

Use 'Disable' to disable entirely.
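How the three modes might resolve can be sketched as below. The threshold for 'significantly different' and the colourspace labels are assumptions for illustration; only the three setting names come from the notes.

```python
# Illustrative resolution of the 'Primary colourspace conversion'
# setting. 'Auto' only converts for large differences (e.g. HDR BT.2020
# material on a Rec. 709 display); 'Exact' converts for any difference.

def primaries_conversion_enabled(setting, video_primaries, display_primaries):
    if setting == "Disable":
        return False
    if video_primaries == display_primaries:
        return False                     # nothing to convert
    if setting == "Exact":
        return True                      # any difference triggers conversion
    # 'Auto': assumed rule - only clearly visible differences matter
    significant = {("BT2020", "BT709"), ("BT709", "BT2020")}
    return (video_primaries, display_primaries) in significant
```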

Note: The previous setting for 'Studio levels' that was available from the menu during playback has been replaced with a global setting 'Use full range RGB output' (Setup->Appearance->Theme/Screen settings). This ensures consistent range adjustment for the UI as well as video. The associated keybinding has also been removed.

10/12bit Video

Decoding of higher depth video is supported by FFmpeg (software decoding) and some hardware decoders.

With the updates to the OpenGL video renderer, there should be no loss in precision when processing and displaying these video streams.

MythTV will always try and retain precision, including the use of 16bit OpenGL framebuffers and textures while rendering. The final rendering stage may (will) however lose precision when using OpenGL ES 2.0 and when using OpenGL ES3 on Qt versions before Qt5.12 (support was not complete before this version of Qt).

Lossless display of 10bit video does however require a 10bit display framebuffer (and display!).

This is not currently widely supported on Linux, though improvements are being made. Setting up a 10bit display is beyond the scope of these notes (and may well break a number of non-MythTV desktop applications).

Retaining 10bit precision throughout the MythTV code will improve display quality regardless.
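The effect of intermediate precision is easy to demonstrate: squeezing a 10bit sample through an 8bit intermediate discards the two low bits, while a 16bit intermediate (as used for the OpenGL framebuffers) is lossless. A generic illustration:

```python
# Round-trip a 10 bit sample through an intermediate of 'bits' bits.
# A narrower intermediate truncates the low-order bits for good.

def round_trip(sample10, bits):
    if bits >= 10:
        return sample10                  # wide enough: lossless
    shift = 10 - bits
    return (sample10 >> shift) << shift  # low bits are lost
```

With an 8bit intermediate the error can reach 3 out of 1023 steps per stage, and repeated stages compound it, which is why precision is retained throughout.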

High Dynamic Range (HDR) Video

While HDR video will be decoded and displayed, there is no support in version 0.31 for signalling to the display that the current content is High Dynamic Range. This will hopefully be at least partially included in version 0.32.

Support for signalling the correct data is only just becoming available in the latest Linux kernels and on other devices and platforms requires a significant amount of new code.

The visual impact of displaying HDR material without this support depends on the type of HDR material:-

  1. Hybrid Log Gamma (HLG)
    • HLG is designed to display on both HDR and standard range (SDR) displays. As such, it should display with good quality even without explicit HDR support.
  2. HDR10
    • HDR10 material tends to look 'washed out' when displayed without HDR support. There is currently no workaround for this problem (see Tonemapping below).

Tonemapping of HDR material

Tonemapping is the process of adjusting HDR material so that it displays 'correctly' on displays with a more limited range.

This will be available in version 0.32.
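As a generic illustration of the idea (not MythTV's planned implementation), the classic Reinhard operator compresses unbounded HDR luminance into the [0, 1) range of an SDR display:

```python
# Reinhard global tone mapping: a standard, minimal tonemapping curve.
# Dark values pass through almost unchanged while highlights are
# compressed asymptotically towards 1.0.

def reinhard(luminance):
    """Map HDR luminance (>= 0) into [0, 1)."""
    return luminance / (1.0 + luminance)
```

Real tonemappers add colour handling and use the stream's mastering metadata, but the compression principle is the same.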

Picture In Picture (PiP)

Sorry - this isn't stable enough and has been disabled.



Other changes

  1. Video rotation
    • Rotation of videos, where properly flagged via FFmpeg, is now handled seamlessly via OpenGL transformations.
  2. Automatic letterbox detection
    • The letterbox detection code has been extended to support higher bit depths and more frame formats. It should now work with most broadcast material when using software decoding or one of the 'decode only' accelerated decoders.