2019 Render branch

Latest revision as of 16:28, 29 November 2019

This page outlines the development of the 2019-render branch of MythTV - which is not yet merged into the master branch. The intention is to merge the branch into master in time for the v31 release.

==Summary - libmythui==

The OpenGL portions of libmythui, the library that displays MythTV's user interface, have been substantially re-written to utilise the newer OpenGL functionality provided by Qt5.

This reduces the complexity of the MythTV code base through the removal of code duplicated between MythTV and Qt, and greatly improves portability across platforms. The new code should operate without issue on any device that has a compliant OpenGL 2.0 or OpenGL ES 2.0 driver. More modern OpenGL/ES versions are also fully supported but not yet required. OpenGL ES is automatically used on platforms that implement it (e.g. Android, Raspberry Pi and other embedded devices).

By using the Qt QPA platform handling, it is now possible to run MythTV user-facing applications (e.g. mythfrontend) without running an X server on Linux.

The user should notice no difference.

==Summary - libmythtv==
 
libmythtv, the library that decodes and displays media, now exclusively uses OpenGL to display video and associated imagery (On Screen Display, captions etc.). This ensures video rendering is consistent across all supported devices and enables the use of highly performant rendering when using hardware based video decoders.

libmythtv now supports fully featured decoding and direct rendering using VDPAU and NVDEC (NVidia), VAAPI (Intel and AMD), VideoToolBox (OSX), MMAL (Raspberry Pi), V4L2 Codecs (Linux - Raspberry Pi and other embedded devices) and MediaCodec (Android). Windows support is not yet implemented. 10, 12 and 16(!) bit decoding and rendering is available where supported by the relevant drivers. All hardware decoders also have an equivalent 'copy back' decoder that will pass decoded video back into main memory for processing and display. These may also be used in the future for hardware accelerated transcoding, commercial flagging etc., and may be useful for users who rely on auto-detection of letterboxing (patch outstanding).

For software based decoding, MythTV now supports an extensive range of frame formats, allowing lossless processing and display of 10, 12 and 16bit formats.

Colour space conversion and processing has been extended considerably to ensure accurate rendering in most cases. Colour mapping of High Dynamic Range (HDR) content is a work in progress, though Hybrid Log Gamma material should display relatively well on a non-HDR display.

'''Note''' Displaying high bit depth material on a 10bit (or higher) display has not been tested (donations welcome) but should, in theory, work without issue.

'''Note''' Displaying HDR material on an HDR display will likely not work as expected (for Linux at least). This is because most video drivers do not signal the correct information to the display to tell it that the material is HDR. There is, at the time of writing, no standard API in the kernel to handle such information.

Deinterlacing support has been re-written. MythTV now uses deinterlacers from FFmpeg/libavfilter for software deinterlacing, the OpenGL shader based deinterlacers have been optimised and improved, and driver deinterlacers (e.g. VDPAU, VAAPI) are available where possible. Deinterlacer settings now use preferences for methods and quality rather than explicit deinterlacers. In most cases this will enable the same deinterlacers as the previous approach but allows MythTV to make informed decisions about how and when to deinterlace when the preferred option is not available.

Rotated video is now handled seamlessly where correctly flagged in the stream and detected by FFmpeg.
  
 
==Details - libmythui==

'''Limited/full range output (previously known as 'Studio levels')'''

The old setting and associated action to switch to 'Studio levels' (full range output) during video playback have been removed and replaced with a global UI setting (Settings->Appearance->Theme/Screen settings->Use full range RGB output). GUI images now have their output ranges adjusted as well - ensuring consistent presentation for UI and video. A more complete solution would autodetect the range from the display data (EDID) and also add colourspace conversion for displays using a different colourspace (the current code assumes BT Rec 709/sRGB).
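For illustration, limited ('studio level') range 8-bit video places black at 16 and white at 235, so expanding to full range RGB is a simple linear scale. A minimal sketch of the arithmetic - the function name is hypothetical and this is not MythTV's actual implementation:

```cpp
#include <algorithm>
#include <cmath>

// Expand a limited/studio range 8-bit value (16-235) to full range (0-255).
// Illustrative helper only - MythTV folds this into its conversion matrix.
static int LimitedToFull(int value)
{
    double scaled = (value - 16) * 255.0 / (235 - 16);
    return std::clamp(static_cast<int>(std::lround(scaled)), 0, 255);
}
```

Values below 16 or above 235 are clamped, which matches the behaviour expected of a range expansion rather than a pass-through.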
'''Linux without X'''

On most Linux based systems, setting the environment variable QT_QPA_PLATFORM="eglfs" will 'encourage' Qt to create a fullscreen, windowless display. If Qt cannot find an X display, it will default to using a KMS/GBM based implementation that is fully featured with most up to date distributions. This does not appear to work with NVidia closed source drivers.

VAAPI (Intel) and V4L2 Codecs (Rpi, s905) hardware accelerations are fully supported without X. Other hardware is currently untested (VDPAU and NVDEC do not work due to the NVidia limitation above).

'''Display resolution/frame rate switching'''

There is a new setting to pause playback for a given number of seconds while the new mode is set. This ensures content isn't missed while the display adjusts. Note: for a variety of reasons, this only applies when switching from the GUI to video playback - and not from playback to GUI or when exiting to the desktop.

NVidia NVCtrl support stopped working with recent versions of the NVidia driver. This has been fixed.

'''*New* MythDisplay class'''

Much of the display handling functionality has been merged into MythDisplay (which was previously just a group of static functions) and split out into platform specific subclasses. This includes the old DisplayRes classes (for mode switching) and will be extended to add EDID parsing etc. A new DRM based subclass is needed for Linux when X is not available (though the generic Qt fallback version handles the basics fairly well). MythDisplay listens for screen changes and notifies when the current screen has changed or screens are added/removed. Support for switching displays needs improvement, however.
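The listen-and-notify behaviour described above is a classic observer pattern. A simplified, Qt-free sketch of the idea - the class and method names here are illustrative only and do not reflect the real MythDisplay API:

```cpp
#include <functional>
#include <string>
#include <vector>

// Simplified sketch of screen-change notification: listeners register a
// callback and are informed whenever the current screen changes.
class DisplaySketch
{
  public:
    using Callback = std::function<void(const std::string&)>;

    void AddListener(Callback cb) { m_listeners.push_back(std::move(cb)); }

    // Called when the windowing system reports a new current screen.
    void ScreenChanged(const std::string& screen)
    {
        m_current = screen;
        for (const auto& cb : m_listeners)
            cb(screen);
    }

    std::string CurrentScreen() const { return m_current; }

  private:
    std::string m_current;
    std::vector<Callback> m_listeners;
};
```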
'''Debugging OpenGL'''

 OpenGL ES
  - OpenGL ES will generally only be used when required. To try and force the use of OpenGL ES where regular OpenGL would normally be selected, set the environment variable MYTHTV_OPENGL_ES=true.
 OpenGL Core profiles
  - MythTV does not currently require any functionality offered by more modern OpenGL versions. To try and force a more modern profile, set the environment variable MYTHTV_OPENGL_CORE=true. This will attempt to create an OpenGL context that includes compute shader functionality (which will be used for future development).
 Debugging OpenGL issues
  - To enable GPU driver logging, use gpu logging verbosity (i.e. mythfrontend -v gpu).
  - GPU logging may produce a lot of log messages for certain drivers. To filter out unwanted, verbose messages, use the environment variable MYTHTV_OPENGL_LOGFILTER with a combination of other, error, deprecated, undefined, performance, portability, grouppush and grouppop.
  - For advanced debugging of OpenGL errors, set the environment variable MYTHTV_OPENGL_SYNCHRONOUS=true (in combination with GPU logging). Break points can then be set in your debugger of choice that will give a backtrace pointing to exactly which lines of code triggered the error.
  - To debug Qt QPA (Qt Platform Abstraction) issues, set the environment variable QT_LOGGING_RULES=qt.qpa.gl=true.
  - As all drawing now uses OpenGL (user interface and video), grabbing a screenshot (bind the appropriate action to a key in settings) will give an accurate version of what is being displayed in all cases. This may be useful when reporting rendering issues.
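As a sketch of how a boolean override such as MYTHTV_OPENGL_ES=true can be read at startup (this is illustrative plain C++, not the actual MythTV code, which uses Qt's environment helpers):

```cpp
#include <cstdlib>
#include <cstring>

// Returns true when the named environment variable is set to exactly "true".
// Illustrative of overrides such as MYTHTV_OPENGL_ES / MYTHTV_OPENGL_CORE.
static bool EnvBool(const char* name)
{
    const char* value = std::getenv(name);
    return value != nullptr && std::strcmp(value, "true") == 0;
}
```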
==Details - libmythtv==
Software decoding
 - Previously the video rendering code only supported displaying YUV420P video frames (which was the universal standard).
 - Frames in any other format were converted into YUV420P in the decoder.
 - The OpenGL video code can now support rendering of all the major formats (YUV420, YUV422, YUV444, NV12 and UYVY) in all bit depths (essentially 8-16 bits). The decoder 'negotiates' a frame format with the video renderer and only falls back to YUV420P if the original frame format cannot be displayed.
 - OpenGL ES 2.0 cannot display anything other than 8bit formats.
 - Commercial flagging, preview generation etc. all assume YUV420P and they will still receive frames in that format.
 - Where higher bit depth material is being used, the OpenGL rendering code will attempt to maintain precision for as long as possible. This typically means no detail is lost until it is rendered to the display framebuffer - which is usually just 8bit:(
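The negotiation described above can be sketched roughly as follows - a simplified illustration with a hypothetical list of renderer-supported formats, not the actual MythTV logic:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Pick the decoder's preferred frame format if the renderer supports it,
// otherwise fall back to YUV420P (which everything can display).
// Illustrative only - format names and the function are hypothetical.
static std::string NegotiateFormat(const std::string& decoded,
                                   const std::vector<std::string>& supported)
{
    if (std::find(supported.begin(), supported.end(), decoded) != supported.end())
        return decoded;
    return "YUV420P";
}
```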
'Decode only' decoders (aka 'Copy back' decoders)
 - Most of the hardware decoders can use direct rendering or 'decode only'.
 - Direct rendering ensures the frames stay in the GPU/VPU and are passed directly to the OpenGL context for display. In the best cases there is zero copying of these frames for maximum performance.
 - Using 'decode only', the frames are passed back from the GPU/VPU to the CPU for processing and then back to the GPU for display. This can be expensive as the memory transfers can be large in size and slow.
 - In general direct rendering should be the preferred option.
 - Currently 'decode only' is the recommended option for MediaCodec (Android) until direct rendering issues have been resolved.
 - Otherwise 'decode only' may be preferable if using, for example, automatic letterbox detection (which only works for frames in CPU memory). NOTE: automatic letterbox detection needs fixing for frames copied back from hardware decoders as it currently only operates with YUV420P formats - and most decoders return NV12 (or P010) formats.
VAAPI
 - VAAPI is now fully implemented and stable with full deinterlacing support - though see below on ensuring performance is maximised. MythTV with a modern Intel CPU (e.g. CoffeeLake) probably offers the most comprehensive feature support of all currently supported hardware decoders.

VDPAU
 - VDPAU support has been re-written but it should be considered end-of-life - it is no longer actively developed by NVidia. VDPAU filtering support has been removed.

VideoToolBox (OSX)
 - VideoToolBox support appears to be fully functional but has had limited testing. Additional work may be required for 10bit support when the MythTV internal FFmpeg version has been updated.
 - Support for VDA, the predecessor to VideoToolBox, has been removed.

NVDEC (CUDA)
 - NVDEC is stable and fully functional. There are some minor issues that appear to be driver related. It is recommended that the most up-to-date driver version is used.

Video4Linux Codecs
 - V4L2 Codecs is a relatively new Linux API for video decoding. It works on the Raspberry Pi (Pi3 tested) and s905 devices. It 'should' work with any compliant driver on other embedded systems.

DRM-PRIME
 - A 'generic' DRM-PRIME decoder has been added that should pick up hardware decoders on Rockchip based devices. It is entirely untested:)
MediaCodec (Android)
 - Note: MediaCodec should be considered a virtual 'black box' when it comes to features supported. Different devices offer a widely differing array of hardware accelerations, deinterlacing, colourspace conversions etc. In many cases there is currently no way to check what the MediaCodec codec will actually return - other than knowing they will be video frames...
 - The initial decode only/copy back MediaCodec support (from Peter Bennett) should work without issue, though as for other 'copy back' methods, performance isn't the best.
 - Direct rendering support is still a work in progress. There are several outstanding issues.

Raspberry Pi
 - OpenMax support has been removed and replaced with support for the Broadcom specific MMAL API and the generic Linux V4L2 Codecs API. MMAL will work with both the open and closed source Pi drivers, but direct rendering for MMAL is only available when using the closed source (Broadcom) driver. V4L2 Codecs will only work when using the open source driver.
 - Future development work will focus on V4L2 Codecs (in conjunction with the open source driver). This is because Qt requires a custom build to work with the Broadcom driver (and most Pi distributions appear not to use this build), the open source driver is, or will become, the default on most distributions, and V4L2 Codecs improvements will benefit other embedded platforms as well.
 - There is currently no HEVC/H.265 decoding support.
 - As noted below, Raspberry Pi video performance needs improvement.

XVideo
 - Support for the legacy X11 XVideo extension has been removed.
Deinterlacing
 - The old, custom MythTV software deinterlacing filters have been removed and FFmpeg/libavfilter filters are used instead.

Deinterlacing setup
 - Deinterlacing settings have changed to better handle the options available and to ensure the code can correctly fall back to alternatives when the preferred option is not available.
 - For both single and double rate deinterlacing there are now settings for quality - None, Low, Medium and High. 'None' disables deinterlacing and any other value enables options to request OpenGL shaders and/or driver deinterlacers (VAAPI, VDPAU, NVDEC etc.).
 - Generally speaking, an increase in deinterlacer quality has an associated increase in computational cost (in the CPU or GPU) and thus overall performance is reduced.
 - Where the requested deinterlacer type is not available but deinterlacing is enabled (quality is set to anything other than None), a fallback of the requested quality will be used.
 - For example, VDPAU frames can only be deinterlaced using VDPAU deinterlacers. If any deinterlacing quality is selected, VDPAU will be used - regardless of whether driver deinterlacers were requested.
 - Likewise, when using any 'copy back' hardware decoders, software deinterlacing is not available and OpenGL shaders will be used instead.
 - Certain hardware decoder formats (e.g. VAAPI and NVDEC) can use both OpenGL and driver based deinterlacers.
 - Some hardware decoders can optionally deinterlace at the decode stage (VAAPI) whilst others currently only support deinterlacing in the decoder (NVDEC).
 - MediaCodec will always do its own thing - it might deinterlace in the decoder, it might not.
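The fallback behaviour described above can be sketched as a simple selection function - an illustrative simplification with hypothetical names, not the actual MythTV code:

```cpp
#include <string>

enum class DeintQuality { None, Low, Medium, High };

// Pick a deinterlacer for a frame: driver deinterlacers are preferred when
// requested and supported by the frame type; otherwise fall back to OpenGL
// shaders at the requested quality. Illustrative sketch only.
static std::string ChooseDeinterlacer(DeintQuality quality,
                                      bool driverRequested,
                                      bool driverSupported)
{
    if (quality == DeintQuality::None)
        return "none";
    if (driverRequested && driverSupported)
        return "driver";
    return "shader";
}
```

Note how any quality other than None always yields some deinterlacer, mirroring the "a fallback of the requested quality will be used" rule.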
Deinterlacers - Old versus New
 - Write me..

Deinterlacers - Matrix of support by quality/codec
 - TODO
Colourspace handling
 - In a nutshell, colourspace conversion should now be complete and accurate where properly flagged in the source material (and reported by FFmpeg) - with the exception of some HDR material.
 - A single conversion matrix is calculated to handle full/limited range, colourspace, picture controls (contrast, hue, colour etc.) and handling of higher bit depth material. This is then used in the OpenGL shaders at very low cost.
 - Where the colourspace primaries differ significantly, additional shader code is used to convert between the primaries and adjust for gamma correction. This is a little more expensive but is currently only needed for BT2020 (HDR) material.
 - There is currently no support for tonemapping of HDR material for output onto non-HDR displays. This requires compute shaders to, amongst other things, calculate the overall frame luminance. Displaying Hybrid Log Gamma material should work well enough without this additional stage but other HDR formats will tend to look washed out.
 - Note: all colourspace conversions currently assume the display is using the Rec BT709 colourspace. This is a fairly safe bet at the moment as there is no support for switching displays to HDR formats. Additional work is in progress to auto detect the actual display primaries (via the EDID).
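As a worked example of the kind of conversion the single matrix performs, here is a sketch of BT.709 limited range YCbCr to full range RGB, derived from the standard luma coefficients. This is standard video maths for illustration - not MythTV's actual shader code, which also folds in picture controls:

```cpp
#include <algorithm>
#include <cmath>

struct RGB { int r, g, b; };

// BT.709 limited range YCbCr (8-bit) -> full range RGB, derived from the
// standard luma coefficients Kr = 0.2126, Kb = 0.0722.
static RGB YCbCrToRGB709(int y, int cb, int cr)
{
    const double kr = 0.2126, kb = 0.0722, kg = 1.0 - kr - kb;
    const double ys = 255.0 / 219.0;  // limited range luma scale (16-235)
    const double cs = 255.0 / 224.0;  // limited range chroma scale (16-240)
    double yd  = (y  -  16) * ys;
    double cbd = (cb - 128) * cs;
    double crd = (cr - 128) * cs;
    auto clamp8 = [](double v) {
        return static_cast<int>(std::clamp(std::lround(v), 0L, 255L));
    };
    return { clamp8(yd + 2.0 * (1.0 - kr) * crd),
             clamp8(yd - (2.0 * (1.0 - kr) * kr / kg) * crd
                       - (2.0 * (1.0 - kb) * kb / kg) * cbd),
             clamp8(yd + 2.0 * (1.0 - kb) * cbd) };
}
```

In the shaders the equivalent work is a single matrix multiply per pixel, which is why the combined approach is described as very low cost.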
Matrix of supported codecs and formats by platform
 - This really needs doing.
 
 Debugging OpenGL video performance
  
 
 Colourspace handling
  - <s>Support temporal dithering when displaying content with a higher bit depth than the display.</s> Not a priority and possibly invasive for limited benefit.
  - Complete tone mapping for HDR material.
  - Auto detection of display colour primaries and transfer characteristics from EDID.
  
 
 Deinterlacing
  - Add back deinterlacing of HLS material (override deinterlacer).
  - Add quality/method overrides (i.e. via keybindings and/or popup menu).
  - Add an A/V sync adjustment for deinterlacers with multiple reference frames. The displayed frame may not be in sync with the audio.
  - Implement using the CUDA Yadif deinterlacer for NVDEC. This will allow proper control of deinterlacing as opposed to the current setup where it is enabled once when decoding starts - and we have to try and detect what NVDEC has actually done:)

 Video rotation
  - Add a rotation override (as for aspect ratio and fill) to force rotation where it is not correctly detected.

 Letterbox detection
  - Fix DetectLetterbox to operate on software frame formats other than YV12/YUV420P.

 GBM/KMS Platform plugin for accelerated DRM-PRIME rendering
  - GBM/KMS/DRM rendering is a must for some embedded platforms. For once, however, our reliance on Qt lets us down. When using fullscreen EGL displays, Qt takes ownership of the DRM connection and provides no way of using/retrieving the required handle. So a custom GBM/KMS plugin is required - but the vast majority of the QPA platform abstraction code is private. Whilst building against that code is possible, it will not necessarily be compatible between Qt versions.
  
 
==Known limitations==
 
 VAAPI
  - To get the best VAAPI direct rendering performance and quality (using DRM), the environment variable QT_XCB_GL_INTEGRATION="xcb_egl" must be set to tell Qt to use EGL as the windowing interface. On Wayland desktops, there is no VAAPI direct rendering without setting this variable, and on 'regular' X desktops performance will be significantly reduced by using GLX code. Unfortunately, setting this variable breaks OpenGL setup on NVidia systems - with no obvious workaround. It is unclear how this can be resolved, as we need to create our OpenGL context before we can check what driver is in use.

 Deinterlacing
  - The basic software deinterlacer is very poor quality and needs improvement.

 V4L2 Codecs
  - Direct rendering support is highly experimental. FFmpeg has been patched to export DRM PRIME frames. It is working on the Pi3 but performance, while better, is still not good enough - this appears to be a limitation of the OpenGL ES driver. Frames are returned to MythTV as RGB (i.e. after colourspace conversion) so there is currently no deinterlacing support. It has not been tested extensively on other devices but appears to work on s905 based devices.
  - On the Raspberry Pi (at least) the driver does not flag whether frames are interlaced. Automatic interlace detection then fails and deinterlacing is not enabled.
  
 MMAL (Raspberry Pi)
  - Performance is not good enough.
  - MMAL deinterlacing is not implemented.

 VDPAU
  - No support for other VDPAU video filters (denoise etc.). There are no plans to add this functionality back.

 OpenGL ES
 
==Known bugs - with resolution==

 Hardware deinterlacers and decode only decoders (VAAPI, NVDEC and MediaCodec)
  - Doublerate deinterlacing in the decoder is throwing out seeking, as twice the number of expected decoded frames are returned. Fix is a work in progress.
 
==Known bugs - unresolved==

 OpenGL playback
  - Incorrect viewport used with certain windowing/display settings, e.g. running mythfrontend in a window and not using the GUI size for playback.

 Deinterlacing
 
 VAAPI
  - Minor scaling issue with certain H.264 (and possibly HEVC) content when using VPP for deinterlacing. The root cause is the old 1080 v 1088 issue. FFmpeg will set the height for some content to 1088, which then causes issues as the frames are passed through the VPP deinterlacing filter.
  - The above issue can lead to purple or green video, which is resolved by installing the ''i965-va-driver-shaders'' package (Debian based systems). This package installs scaling shaders that are not available in the default ''i965-va-driver'' package.
  - A/V sync issue with VAAPI copy back and VPP deinterlacing with streams whose PTS increments by only 1 (unlikely to be a real world issue).
  - Playback hangs when trying to play ''some'' HEVC interlaced material. This appears to be a regression in the drivers. Previously VAAPI would crash trying to deinterlace these clips, but VPP deinterlacing has been disabled for interlaced HEVC for some time.

 NVDEC
  - Static functionality check at startup sometimes failing, so that NVDEC decoding is not available when it should be (seems to be resolved with latest drivers).
  - Corrupt frames when using 'edit' mode and seeking to a keyframe.

 MediaCodec (Android)
  - Long pauses under certain conditions while the decoder waits for video surfaces to become available (direct rendering only?).
  - Deadlock and playback failures when the stream changes.

Latest revision as of 16:28, 29 November 2019

2019 Render branch

This page outlines the development of the 2019-render branch of MythTV - which is not yet merged into the master branch. The intention is to merge the branch into master in time for the v31 release.

Summary - libmythui

The OpenGL portions of libmythui, the library that displays MythTV's user interface, have been substantially re-written to utilise the newer OpenGL functionality provided by Qt5.

This reduces the complexity of the MythTV code base through the removal of code duplicated between MythTV and Qt and greatly improves portability across platforms. The new code should operate without issue on any device that has a compliant OpenGL 2.0 or OpenGL ES 2.0 driver. More modern OpenGL/ES versions are also fully supported but not yet required. OpenGL ES is automatically used on platforms that implement it (e.g. Android, Raspberry Pi and other embedded devices).

By using the Qt QPA platform handling, it is now possible to run MythTV user facing applications (e.g. mythfrontend) without running an X server on Linux.

The user should notice no difference.

Summary - libmythtv

libmythtv, the library that decodes and displays media, now exclusively uses OpenGL to display video and associated imagery (On Screen Display, Captions etc). This ensures video rendering is consistent across all supported devices and enables the use of highly performant rendering when using hardware based video decoders.

libmythtv now supports fully featured decoding and direct rendering using VDPAU and NVDEC (NVidia), VAAPI (Intel and AMD), VideoToolBox (OSX), MMAL (Raspberry Pi), V4L2 Codecs (Linux - Raspberry Pi and other embedded devices) and MediaCodec (Android). Windows support is not yet implemented. 10, 12 and 16(!) bit decoding and rendering is available where supported by the relevant drivers. All hardware decoders also have an equivalent 'copy back' decoder that will pass decoded video back into main memory for processing and display. These may also be used in the future for hardware accelerated transcoding, commercial flagging etc and may be useful for users who rely on auto-detection of letterboxing (patch outstanding).

For software based decoding, MythTV now supports an extensive range of frame formats allowing lossless processing and display of 10, 12 and 16bit formats.

Colour space conversion and processing has been extended considerably to ensure accurate rendering in most cases. Colour mapping of High Dynamic Range (HDR) content is a work in progress though Hybrid Log Gamma material should display relatively well on a non-HDR display.

Note Displaying high bit depth material on a 10bit (or higher) display has not been tested (donations welcome) but should, in theory, work without issue.

Note Displaying HDR material on an HDR display will likely not work as expected (for linux at least). This is because most video drivers do not signal the correct information to the display to tell it that the material is HDR. There is, at the time of writing, no standard API in the kernel to handle such information.

Deinterlacing support has been re-written. MythTV now uses deinterlacers from FFmpeg/libavfilter for software deinterlacing, the OpenGL shader based deinterlacers have been optimised and improved and driver deinterlacers (e.g. VDPAU, VAAPI) are available where possible. Deinterlacer settings now use preferences for methods and quality rather than using explicit deinterlacers. In most cases this will enable the same deinterlacers as the previous approach but allows MythTV to make informed decisions about how and when to deinterlace when the preferred option is not available.

Rotated video is now handled seamlessly where correctly flagged in the stream and detected by FFmpeg.

Details - libmythui

Limited/full range output (previously known as 'Studio levels')

The old setting and associated action to switch to 'Studio levels' (full range output) during video playback have been removed and replaced with a global UI setting (Settings->Appearance->Theme/Screen settings->Use full range RGB output). GUI images now have their output ranges adjusted as well - ensuring consistent presentation for UI and video. A more complete solution would autodetect the range from the display data (EDID) and also add colourspace conversion for displays using a different colourspace (the current code assumes BT Rec 709/sRGB).

Linux without X

On most Linux based systems, setting the environment variable QT_QPA_PLATFORM="eglfs" will 'encourage' Qt to create a fullscreen, windowless display. If Qt cannot find an X display, it will default to using a KMS/GBM based implementation that is fully featured with most up to date distributions. This does not appear to work with NVidia closed source drivers.

VAAPI (Intel) and V4L2 codecs (Rpi, s905) hardware accelerations are fully supported without X. Other hardware is currently untested (VDPAU and NVDEC do not work due to the NVidia limitation above).

Display resolution/frame rate switching

There is a new setting to pause playback for a given number of seconds while the new mode is set. This ensures content isn't missed while the display adjusts. Note: for a variety of reasons, this only applies when switching from the GUI to video playback - and not from playback to GUI or when exiting to the desktop.

Nvidia NVCtrl support stopped working with recent versions of the NVida driver. This has been fixed.

*New* MythDisplay class

Much of the display handling functionality has been merged into MythDisplay (which was previously just a group of static functions) and split out into platform specific subclasses. This includes the old DisplayRes classes (for mode switching) and will be extended to add EDID parsing etc. A new DRM based subclass is needed for Linux when X is not available (though the generic Qt fallback version handles the basics fairly well). MythDisplay listens for screen changes and notifies when the current screen has changed or screens are added/removed. Support for switching displays needs improvement however.

Debugging OpenGL

OpenGL ES
 - OpenGL ES will generally only be used when required. To try and force the use of OpenGL ES where regular OpenGL would normally be selected set the environment variable 'MYTHTV_OPENGL_ES=true'
OpenGL Core profiles
 - MythTV does not currently require any functionality offered by more modern OpenGL versions. To try and force a more modern profile set the environment variable "MYTHTV_OPENGL_CORE=true". This will attempt to create an OpenGL context that includes compute shader functionality (which will be used for future development).
Debugging OpenGL issues
 - to enable GPU driver logging, use gpu logging verbosity (i.e. mythfrontend -v gpu).
 - GPU logging may produce a lot of log messages for certain drivers. To filter out unwanted, verbose messages use the environment variable "MYTHTV_OPENGL_LOGFILTER" with a combination of other, error, deprecated, undefined, performance, portability, grouppush and grouppop.
 - for advanced debugging of OpenGL errors, set the environment variable MYTHTV_OPENGL_SYNCHRONOUS=true (in combination with GPU logging). Break points can then be set in your debugger of choice that will give a backtrace pointing to exactly which lines of code triggered the error.
 - to debug Qt QPA (Qt Platform Abstraction) issues - set the environment variable 'QT_LOGGING_RULES=qt.qpa.gl=true'.
 - as all drawing now uses OpenGL (user interface and video), grabbing a screenshot (bind the appropriate action to a key in settings) will give an accurate version of what is being displayed in all cases. This may be useful when reporting rendering issues.
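For convenience, the debugging variables above can be combined in a small launcher script. This is a purely hypothetical wrapper (not part of MythTV) - the variable names and the '-v gpu' option are taken from the notes above:

```python
import os
import subprocess

# Debugging environment, using the variables documented above.
debug_env = dict(
    os.environ,
    MYTHTV_OPENGL_SYNCHRONOUS="true",   # synchronous errors for useful backtraces
    MYTHTV_OPENGL_LOGFILTER="other",    # filter out one of the message groups listed above
    QT_LOGGING_RULES="qt.qpa.gl=true",  # Qt QPA OpenGL logging
)

# 'gpu' logging verbosity, as described above.
cmd = ["mythfrontend", "-v", "gpu"]

print(" ".join(cmd))
# subprocess.run(cmd, env=debug_env)  # uncomment on a machine with mythfrontend installed
```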

Details - libmythtv

Software decoding
 - Previously the video rendering code only supported displaying YUV420P video frames (which was the universal standard).
 - Frames in any other format were converted into YUV420P in the decoder.
 - The OpenGL video code can now support rendering of all the major formats (YUV420, YUV422, YUV444, NV12 and UYVY) in all bit depths (essentially 8-16 bits). The decoder 'negotiates' a frame format with the video renderer and only falls back to YUV420P if the original frame format cannot be displayed.
 - OpenGL ES 2.0 cannot display anything other than 8-bit formats.
 - Commercial flagging, preview generation etc all assume YUV420P and they will still receive frames in that format.
 - Where higher bit depth material is being used, the OpenGL rendering code will attempt to maintain precision for as long as possible. This typically means no detail is lost until it is rendered to the display framebuffer - which is usually just 8-bit :(
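The format 'negotiation' above can be sketched roughly as follows. This is an illustrative Python sketch, not MythTV's actual (C++) code - the function and set names are hypothetical:

```python
# Formats the OpenGL video renderer can display directly (per the notes above).
RENDERER_FORMATS = {"YUV420P", "YUV422P", "YUV444P", "NV12", "UYVY"}
FALLBACK = "YUV420P"  # the universal fallback format

def negotiate_format(decoder_format, gles2=False, bit_depth=8):
    """Pick the frame format the decoder should output (simplified)."""
    if gles2 and bit_depth > 8:
        # OpenGL ES 2.0 can only display 8-bit formats; the real code
        # converts to an 8-bit variant in the decoder.
        return FALLBACK
    if decoder_format in RENDERER_FORMATS:
        return decoder_format  # renderer can display it as-is
    return FALLBACK  # otherwise convert in the decoder

print(negotiate_format("NV12"))  # NV12 - displayed directly
print(negotiate_format("AYUV"))  # YUV420P - unsupported, converted
```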
'Decode only' decoders (aka 'Copy back' decoders)
 - Most of the hardware decoders can use direct rendering or 'decode only'.
 - Direct rendering ensures the frames stay in the GPU/VPU and are passed directly to the OpenGL context for display. In the best cases there is zero copying of these frames for maximum performance.
 - Using 'decode only' the frames are passed back from the GPU/VPU to the CPU for processing and then back to the GPU for display. This can be expensive as the memory transfers can be large in size and slow.
 - In general direct rendering should be the preferred option.
 - Currently 'decode only' is the recommended option for MediaCodec (Android) until direct rendering issues have been resolved.
 - Otherwise 'decode only' may be preferable if using, for example, automatic letterbox detection (which only works for frames in CPU memory). NOTE: automatic letterbox detection needs fixing for frames copied back from hardware decoders as it currently only operates with YUV420P formats - and most decoders return NV12 (or P010) formats.
VAAPI
 - VAAPI is now fully implemented and stable with full deinterlacing support - though see below on ensuring performance is maximised. MythTV with a modern Intel CPU (e.g. CoffeeLake) probably offers the most comprehensive feature support of all currently supported hardware decoders. 
VDPAU
 - VDPAU support has been re-written but it should be considered end-of-life - it is no longer actively developed by NVidia. VDPAU filtering support has been removed.
VideoToolBox (OSX)
 - VideoToolBox support appears to be fully functional but has had limited testing. Additional work may be required for 10bit support when the MythTV internal FFmpeg version has been updated.
 - Support for VDA, the predecessor to VideoToolBox, has been removed.
NVDEC (Cuda)
 - NVDEC is stable and fully functional. There are some minor issues that appear to be driver related. It is recommended that the most up-to-date driver version is used.
Video4Linux Codecs
 - V4L2 Codecs is a relatively new Linux API for video decoding. It works on the Raspberry Pi (Pi3 tested) and s905 devices. It 'should' work with any compliant driver on other embedded systems.
DRM-PRIME
 - a 'generic' DRM-PRIME decoder has been added that should pick up hardware decoders on Rockchip based devices. It is entirely untested:)
MediaCodec (Android)
 - Note: MediaCodec should be considered a virtual 'black box' when it comes to features supported. Different devices offer a widely differing array of hardware accelerations, deinterlacing, colourspace conversions etc. In many cases there is currently no way to check what the MediaCodec codec will actually return - other than knowing they will be video frames...
 - The initial decode only/copy back MediaCodec (from Peter Bennett) should work without issue, though as for other 'copy back' methods, performance isn't the best.
 - Direct Rendering support is still a work in progress. There are several outstanding issues.
Raspberry Pi
 - OpenMax support has been removed and replaced with support for the Broadcom specific MMAL API and the generic Linux V4L2 Codecs API. MMAL will work with both the open and closed source Pi drivers but direct rendering for MMAL is only available when using the closed source (Broadcom) driver. V4L2 Codecs will only work when using the open source driver.
 - Future development work will focus on V4L2 Codecs (in conjunction with the open source driver). This is because Qt requires a custom build to work with the Broadcom driver (and most Pi distributions appear not to use this build), the open source driver is, or will become, the default on most distributions, and V4L2 Codecs improvements will benefit other embedded platforms as well.
 - There is currently no HEVC/H.265 decoding support.
 - As noted below, Raspberry Pi video performance needs improvement.
XVideo
 - Support for the legacy X11 XVideo extension has been removed.
Deinterlacing
 - The old, custom MythTV software deinterlacing filters have been removed and FFmpeg/libavfilter filters are used instead.
Deinterlacing setup
 - Deinterlacing settings have changed to better handle the options available and to ensure the code can correctly fall back to alternatives when the preferred option is not available.
 - For both single and double rate deinterlacing there are now settings for quality - None, Low, Medium and High. 'None' disables deinterlacing and any other value enables options to request OpenGL shaders and/or driver deinterlacers (VAAPI, VDPAU, NVDEC etc).
 - Generally speaking, an increase in deinterlacer quality has an associated increase in computational cost (in the CPU or GPU) and thus overall performance is reduced.
 - Where the requested deinterlacer type is not available but deinterlacing is enabled (quality is set to anything other than None), a fallback of the requested quality will be used.
 - For example, VDPAU frames can only be deinterlaced using VDPAU deinterlacers. If any deinterlacing quality is selected, VDPAU will be used - regardless of whether driver deinterlacers were requested.
 - Likewise, when using any 'copy back' hardware decoders, software deinterlacing is not available and OpenGL shaders will be used instead.
 - Certain hardware decoder formats (e.g. VAAPI and NVDEC) can use both OpenGL and driver based deinterlacers.
 - Some hardware decoders can optionally deinterlace during the decode stage (VAAPI) whilst others currently only support deinterlacing in the decoder itself (NVDEC).
 - MediaCodec will always do its own thing - it might deinterlace in the decoder, it might not.
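The fallback behaviour described above can be sketched as follows. Again this is purely illustrative - the names and the availability table are simplified placeholders, not MythTV's actual implementation:

```python
# Which deinterlacer types are usable for each frame type (simplified,
# per the notes above; hypothetical names, not MythTV's real code).
USABLE = {
    "VDPAU":     {"driver"},            # VDPAU frames: driver deinterlacers only
    "copy back": {"shader"},            # no software deinterlacing, shaders instead
    "VAAPI":     {"driver", "shader"},  # can use both OpenGL and driver deinterlacers
    "software":  {"software", "shader"},
}

def select_deinterlacer(quality, requested, frame_type):
    if quality == "None":
        return None  # deinterlacing disabled
    usable = USABLE[frame_type]
    # Honour the request where possible, otherwise fall back to
    # whatever IS available at the requested quality.
    candidates = (requested & usable) or usable
    return (quality, sorted(candidates)[0])

# VDPAU frames: driver deinterlacer used even though only shaders were requested.
print(select_deinterlacer("High", {"shader"}, "VDPAU"))  # ('High', 'driver')
```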
Deinterlacers - Old versus New
 - Write me..
Deinterlacers - Matrix of support by quality/codec
 - TODO
Colourspace handling
 - In a nutshell, colourspace conversion should now be complete and accurate where properly flagged in the source material (and reported by FFmpeg) - with the exception of some HDR material.
 - A single conversion matrix is calculated to handle full/limited range, colourspace, picture controls (contrast, hue, colour etc) and handling of higher bit depth material. This is then used in the OpenGL shaders at very low cost.
 - Where the colourspace primaries differ significantly, additional shader code is used to convert between the primaries and adjust for gamma correction. This is a little more expensive but is currently only needed for BT2020 (HDR) material.
 - There is currently no support for tonemapping of HDR material for output onto non-HDR displays. This requires compute shaders to, amongst other things, calculate the overall frame luminance. Displaying Hybrid Log Gamma material should work well enough without this additional stage but other HDR formats will tend to look washed out.
 - Note: all colourspace conversions currently assume the display is using the BT.709 colourspace. This is a fairly safe bet at the moment as there is no support for switching displays to HDR formats. Additional work is in progress to auto detect the actual display primaries (via the EDID).
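To illustrate the single-transform approach, the sketch below folds limited-range scaling and the BT.709 colourspace coefficients (standard published values) into one conversion. MythTV's actual shaders additionally fold in the picture controls and higher bit depth handling:

```python
# Combined limited range + BT.709 YCbCr -> RGB conversion for 8-bit video,
# illustrating the kind of single transform applied in the OpenGL shaders.
KR, KB = 0.2126, 0.0722        # standard BT.709 luma coefficients
KG = 1.0 - KR - KB

def bt709_limited_to_rgb(y, cb, cr):
    # Limited range normalisation: luma 16-235 -> 0..1, chroma 16-240 -> -0.5..0.5.
    ey = (y - 16) / 219.0
    ecb = (cb - 128) / 224.0
    ecr = (cr - 128) / 224.0
    # Standard BT.709 conversion, derived from the luma coefficients.
    r = ey + 2 * (1 - KR) * ecr
    g = ey - (2 * KB * (1 - KB) / KG) * ecb - (2 * KR * (1 - KR) / KG) * ecr
    b = ey + 2 * (1 - KB) * ecb
    return r, g, b

print(bt709_limited_to_rgb(235, 128, 128))  # peak white -> (1.0, 1.0, 1.0)
print(bt709_limited_to_rgb(16, 128, 128))   # black -> (0.0, 0.0, 0.0)
```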
Matrix of supported codecs and formats by platform
 - This really needs doing.
Debugging OpenGL video performance
 - use gpuvideo logging verbosity (i.e. mythfrontend -v gpuvideo). This will give timing information for the various stages of OpenGL video rendering - texture upload, framebuffer clearing, rendering, flushing and swapping. Note - the timing detail is informative only. Overall performance will be limited by many other factors.

TODO

Colourspace handling
 - Support temporal dithering when displaying content with a higher bit depth than the display. Not a priority and possibly invasive for limited benefit.
 - Complete tone mapping for HDR material.
 - Auto detection of display colour primaries and transfer characteristics from EDID.
 - Auto detection of display range (limited v full) from EDID.
 - Validate the lossless render path where supported (software decode, VAAPI DRM, NVDEC, VideoToolBox).
 - Possibly fall back to a 'lossy' render path when we know the display cannot handle the full colour depth, e.g. rendering 10-bit content to an 8-bit framebuffer.
Deinterlacing
 - Add quality/method overrides (i.e. via keybindings and/or popup menu)
 - Add an A/V sync adjustment for deinterlacers with multiple reference frames. The displayed frame may not be in sync with the audio.
 - Implement using the CUDA Yadif deinterlacer for NVDEC. This will allow proper control of deinterlacing as opposed to the current setup where it is enabled once when decoding starts - and we have to try and detect what NVDEC has actually done:)
Video rotation
 - Add rotation override (as for aspect ratio and fill) to force rotation where it is not correctly detected.
Letterbox detection
 - Fix DetectLetterbox to operate on software frame formats other than YV12/YUV420P.
GBM/KMS Platform plugin for accelerated DRM-PRIME rendering
 - GBM/KMS/DRM rendering is a must for some embedded platforms. For once, however, our reliance on Qt lets us down. When using fullscreen EGL displays, Qt takes ownership of the DRM connection and provides no way of using/retrieving the required handle. So a custom GBM/KMS plugin is required - but the vast majority of the QPA platform abstraction code is private. Whilst building against that code is possible, it will not necessarily be compatible between Qt versions.

Known limitations

Windows
 - There is currently no Windows support. Any Windows build will fail in multiple places.
VAAPI
 - To get the best VAAPI direct rendering performance and quality (using DRM), the environment variable QT_XCB_GL_INTEGRATION="xcb_egl" must be set to tell Qt to use EGL as the windowing interface. On Wayland desktops, there is no VAAPI direct rendering without setting this variable and on 'regular' X desktops performance will be significantly reduced by using GLX code. Unfortunately, setting this variable breaks OpenGL setup on NVidia systems - with no obvious workaround. It is unclear how this can be resolved as we need to create our OpenGL context before we can check what driver is in use.
V4L2 Codecs
 - Direct rendering support is highly experimental. FFmpeg has been patched to export DRM PRIME frames. It is working on the PI3 but performance, while better, is still not good enough - this appears to be a limitation of the OpenGL ES driver. Frames are returned to MythTV as RGB (i.e. after colourspace conversion) so there is currently no deinterlacing support. It has not been tested extensively on other devices but appears to work on s905 based devices.
 - On the Raspberry Pi (at least) the driver does not flag whether frames are interlaced. Automatic interlaced detection then fails and deinterlacing is not enabled.
MMAL
 - Performance is not good enough.
 - MMAL deinterlacing is not implemented.
OpenGL ES
 - OpenGL ES does not support certain texture formats which are required for uploading video frames with a higher bit depth (e.g. 10bit). Hence certain frame formats are disallowed when using OpenGL ES for rendering and they are converted to 8bit in the decoder. This applies to software decoding only and there is no known workaround.

Possible issues

VideoToolBox
 - HEVC decoding is untested.
 - 10bit HEVC decoding is untested (only available in FFmpeg master - so requires a re-sync).

Known bugs - with resolution

Hardware deinterlacers and decode only decoders (VAAPI, NVDEC and MediaCodec)
 - Double rate deinterlacing in the decoder throws off seeking, as twice the expected number of decoded frames are returned. A fix is a work in progress.

Known bugs - unresolved

Deinterlacing
 - The single rate deinterlacer is not enabled when timestretch is in use and the display cannot support the required rate. This leads to broken A/V sync and no audio.
VAAPI
 - minor scaling issue with certain H.264 (and possibly HEVC) content when using VPP for deinterlacing. The root cause is the old 1080 v 1088 issue. FFmpeg will set the height for some content to 1088 which then causes issues as the frames are passed through the VPP deinterlacing filter.
 - the above issue can lead to purple or green video which is resolved by installing the i965-va-driver-shaders package (debian based systems). This package installs scaling shaders that are not available in the default i965-va-driver package.
 - A/V sync issue with VAAPI copy back and VPP deinterlacing with streams whose PTS increments by only 1 (unlikely to be a real world issue).
 - Playback hangs when trying to play some HEVC interlaced material. This appears to be a regression in the drivers. Previously VAAPI would crash trying to deinterlace these clips but VPP deinterlacing has been disabled for interlaced HEVC for some time.


NVDEC
 - The static functionality check at startup sometimes fails, meaning NVDEC decoding is not available when it should be (this seems to be resolved with the latest drivers).
 - corrupt frames when using 'edit' mode and seeking to a keyframe.
MediaCodec (Android)
 - long pauses under certain conditions while the decoder waits for video surfaces to become available (direct rendering only?)
 - deadlock and playback failures when the stream changes.