Video pipeline

From MythTV Official Wiki

The video playback mechanism in MythTV uses a common path for all of the types:

  • Recording playback.
  • Live TV.
  • Video playback.

There are checks in many places to do different things depending on the type. Live TV is simply a recording played back while it is still being recorded, but it has many nuances for features such as channel changing and moving from one program to the next.

Logic Flow

All of the flow described here takes place in one thread, the main GUI thread, which is thread number 1 in the process.

Recording playback is invoked from the recording list window, which is themed at recordings-ui.xml. The window name is watchrecordings and it is implemented in class PlaybackBox in file programs/mythfrontend/playbackbox.cpp. The PlaybackBox::Play method calls the TV class to perform the playback.

In the case of watching videos, playback is initiated from the Video list window, which is themed in video-ui.xml. The class name is VideoDialog and it is implemented in mythfrontend/videodlg.cpp. The window name varies depending on which view is being used. The logic ultimately ends up in the TV class as well.

Playback is controlled by the TV class, which is implemented in libmythtv/tv_play.cpp. Do not be confused by the class name: although the class is called TV, it is used for Video, Recording and Live TV playback.

Each time a new playback is started, a new TV object is created for it. The static method TV::StartTV calls the static method TV::GetTV, which creates the object. The TV::StartTV method is invoked when you start to play a video or recording, and control remains in that method until the playback is done. Although it is called StartTV, it handles starting, playing and ending the playback.

The TV::StartTV method calls TV::PlaybackLoop on the newly created TV object. The TV object contains a vector of PlayerContext objects, one for the main playback and one for each additional PIP or PBP playback happening at the same time. In most cases there will be only one (i.e. the main playback). Inside PlaybackLoop there is a while (true) loop with a nested loop that calls MythPlayer::EventLoop and MythPlayer::VideoLoop for each active player. Each iteration of the while (true) loop displays one frame of video.
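The shape of that nested loop can be sketched as follows. This is a simplified stand-in, not the real code: the Player struct, the fixed three-frame exit condition, and the counters are all illustrative; the actual loop lives in tv_play.cpp and mythplayer.cpp.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for MythPlayer: each active player gets its
// EventLoop and VideoLoop called once per iteration of the outer loop.
struct Player {
    int events_handled = 0;
    int frames_shown = 0;
    bool finished = false;
    void EventLoop() { ++events_handled; }  // keypresses, state changes
    void VideoLoop() { ++frames_shown; }    // process/display one frame
};

// Sketch of TV::PlaybackLoop: one outer iteration == one displayed frame.
// Runs until every player reports that playback is done.
inline void PlaybackLoop(std::vector<Player>& players) {
    while (true) {
        bool all_done = true;
        for (auto& p : players) {
            if (p.finished) continue;
            p.EventLoop();   // handle pending events between frames
            p.VideoLoop();   // process and show the next frame
            if (p.frames_shown >= 3) p.finished = true;  // toy exit condition
            all_done = all_done && p.finished;
        }
        if (all_done) break;
    }
}
```

With two players (e.g. main playback plus one PBP), both are serviced once per outer iteration, which is what keeps multiple simultaneous playbacks advancing frame by frame together.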

The MythPlayer class in libmythtv/mythplayer.cpp implements the lower-level aspects of playback. MythPlayer::VideoLoop processes one frame of video. MythPlayer::EventLoop is called between frame displays; it checks for events such as keyboard or IR remote actions and controls the various actions that can be invoked.

Inside MythPlayer::VideoLoop is a call to MythPlayer::DisplayNormalFrame which processes the next frame of video, if it is available.

MythPlayer::DisplayNormalFrame calls MythPlayer::AVSync which synchronizes the next frame with the audio and gets it ready for display.

MythPlayer::AVSync determines whether the video is in sync and calls the VideoSync class to wait an appropriate interval before displaying the frame. The VideoSync class is implemented in libmythtv/vsync.cpp; there are a number of VideoSync implementations, with classes for the various methods of syncing. If audio and video are out of sync by more than a certain amount, AVSync adjusts things by (1) dropping a frame (if video is behind), (2) doubling the frame interval (if video is ahead), or (3) pausing the audio (if video is far behind). AVSync calls videoOutput->Show to actually show a frame on the screen.
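The three corrective actions can be summarized as a small decision function. The thresholds below are illustrative assumptions, not MythTV's actual values; the real MythPlayer::AVSync also tracks display refresh intervals and jitter.

```cpp
#include <cassert>
#include <string>

// avdelta_ms: how far video lags the audio, in milliseconds.
// Positive -> video is behind the audio; negative -> video is ahead.
// Thresholds are illustrative, not MythTV's actual tuning values.
inline std::string AVSyncAction(int avdelta_ms) {
    const int kTolerance = 40;   // in-sync window
    const int kSevereLag = 200;  // lag bad enough to pause audio
    if (avdelta_ms > kSevereLag)  return "pause-audio";     // (3) far behind
    if (avdelta_ms > kTolerance)  return "drop-frame";      // (1) behind
    if (avdelta_ms < -kTolerance) return "double-interval"; // (2) ahead
    return "show-frame";                                    // in sync
}
```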

VideoOutput::Show displays a frame on the screen. VideoOutput is an abstract base class in libmythtv/videooutbase.cpp. There are implementations of derived classes in several files named libmythtv/videoout_*.cpp, for the various output methods, such as OpenGL, VDPAU, etc.

Life cycle of a frame

Reading the recording data

Each playback frame starts as data in the recording file on disk somewhere. The class that handles getting it off the disk and into playback is RingBuffer, implemented in libmythtv/ringbuffer.cpp. Do not be confused by the name, it is not a ring buffer but a file access class.

The setup of RingBuffer happens in TV::HandleStateChange, which is called on the frontend when starting a playback or when there is a state change (which could require opening a different file). The file being opened is specified as a string called playbackURL. This could be a file name or a URL. The system is smart enough to detect if the backend is on the same system as the frontend and in that case the frontend reads the file directly. That bit of optimization is done in ProgramInfo::GetPlaybackURL which is called before RingBuffer is created. If the backend is on a different server a URL is passed into RingBuffer in the form myth://servername/2001_20170428144337.ts.
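The local-versus-remote decision can be illustrated with a toy function. This is an assumption-laden sketch: the function name mirrors ProgramInfo::GetPlaybackURL, but the parameters, the directory layout, and the comparison logic are simplified stand-ins for illustration only.

```cpp
#include <cassert>
#include <string>

// Illustrative version of the decision made in ProgramInfo::GetPlaybackURL:
// if the backend holding the recording is this machine, return a plain
// file path so RingBuffer reads the file directly; otherwise build a
// myth:// URL for the remote file-access protocol. Names and layout here
// are simplified assumptions, not MythTV's actual logic.
inline std::string GetPlaybackURL(const std::string& backend_host,
                                  const std::string& local_host,
                                  const std::string& recording_dir,
                                  const std::string& basename) {
    if (backend_host == local_host)
        return recording_dir + "/" + basename;         // direct file access
    return "myth://" + backend_host + "/" + basename;  // remote streaming
}
```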

RingBuffer contains many of the normal file I/O features that any application would use for reading files. If the file is on a remote server, it handles the necessary protocols to get the file data. It also handles reading of DVDs and Blu-ray discs.

MythPlayer creates a decoder object derived from DecoderBase, typically an AvFormatDecoder. AvFormatDecoder is the decoder that uses ffmpeg, the main decoding engine. AvFormatDecoder::OpenFile wraps the RingBuffer object in an AVFRingBuffer object. AVFRingBuffer is a wrapper for RingBuffer that supports direct reading of files by ffmpeg. Pointers to the static member functions of AVFRingBuffer are passed into ffmpeg in AvFormatDecoder::InitByteContext.

AvFormatDecoder has a field AVFormatContext *ic. AVFormatContext has a pointer AVIOContext *pb, which is filled in with the addresses of read methods and other functions. AvFormatDecoder::OpenFile calls avformat_open_input, an ffmpeg function that starts ffmpeg reading the file.
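The callback wiring can be illustrated with stand-in types. To stay self-contained this sketch avoids the ffmpeg headers entirely: RingBuffer and IOContext below are toy replacements for the real RingBuffer class and for ffmpeg's AVIOContext, and only mimic the pattern of handing static read functions to a C library through function pointers.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Toy stand-in for RingBuffer: a string of file data plus a read position.
struct RingBuffer {
    std::string data;
    size_t pos = 0;
};

// Toy stand-in for ffmpeg's AVIOContext: an opaque pointer plus a read
// callback, mirroring how AvFormatDecoder::InitByteContext hands
// AVFRingBuffer's static member functions to ffmpeg.
struct IOContext {
    void* opaque;  // points at the RingBuffer
    int (*read_packet)(void* opaque, unsigned char* buf, int buf_size);
};

// Static-style read function, in the role AVFRingBuffer plays for ffmpeg:
// copy up to buf_size bytes from the "file" and advance the position.
inline int RingBufferRead(void* opaque, unsigned char* buf, int buf_size) {
    auto* rb = static_cast<RingBuffer*>(opaque);
    int n = static_cast<int>(std::min(rb->data.size() - rb->pos,
                                      static_cast<size_t>(buf_size)));
    std::memcpy(buf, rb->data.data() + rb->pos, n);
    rb->pos += n;
    return n;  // bytes delivered to the caller
}
```

The real code fills AVIOContext with read, write and seek callbacks; ffmpeg then pulls file data through them without knowing whether the bytes come from a local file or a myth:// stream.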

Demux the Video file

Using the methods passed into it, ffmpeg function avformat_open_input scans the start of the file and makes a list of streams in ic->streams.

AvFormatDecoder::ScanStreams looks at the list of streams, finds codecs for them and selects which streams to play.

In the description below there are several methods with "decoder" in the name. This is confusing, because they actually demux the input file into its different streams; the decoding proper comes later. The confusing terms are in italics below.

MythPlayer::StartPlaying is called from TV::HandleStateChange. After opening the file and setting up ffmpeg as described above, it calls MythPlayer::DecoderStart which starts the decoder thread. The decoder thread runs MythPlayer::DecoderLoop. The loop repeatedly calls MythPlayer::DecoderGetFrame which calls decoder->GetFrame, which in the case of ffmpeg calls AvFormatDecoder::GetFrame.

AvFormatDecoder::GetFrame gets a still-encoded frame. It creates an AVPacket structure and reads a packet from ffmpeg using the ffmpeg call av_read_frame. The packet received can be a video, audio, subtitle or other type of packet. If it is video, GetFrame calls PreProcessVideoPacket, which looks at the resolution and other things. Depending on the stream, it then calls ProcessVideoPacket, ProcessAudioPacket or ProcessSubtitlePacket. In the case of a video packet, ProcessVideoPacket calls the codec to decode the packet into video data (see below for details). GetFrame loops until it has a decoded video frame back from ProcessVideoPacket or until the buffers are full.
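The dispatch loop can be sketched like this. It is a simplified model: the PacketType enum and the queue are stand-ins for ffmpeg's AVPacket stream indices, and "decoding" trivially succeeds on every video packet, whereas the real codec may need several packets before a frame completes.

```cpp
#include <cassert>
#include <queue>

enum class PacketType { Video, Audio, Subtitle };
struct AVPacketStub { PacketType type; };  // stand-in for ffmpeg's AVPacket

// Sketch of AvFormatDecoder::GetFrame: keep reading packets and routing
// them by stream type until the video path hands back a decoded frame.
// Returns 1 when a video frame is ready, 0 if the data ran out.
inline int GetFrame(std::queue<AVPacketStub>& demuxed,
                    int& audio_packets, int& subtitle_packets) {
    while (!demuxed.empty()) {
        AVPacketStub pkt = demuxed.front();
        demuxed.pop();
        switch (pkt.type) {
        case PacketType::Video:    return 1;            // decoded frame ready
        case PacketType::Audio:    ++audio_packets;    break;
        case PacketType::Subtitle: ++subtitle_packets; break;
        }
    }
    return 0;  // ran out of data without completing a video frame
}
```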

Decode the Video (decoder thread)

This logic runs in the decoder thread and uses the VideoOutput class vbuffers object to buffer decoded data for the main thread to pick up.

While AvFormatDecoder::ScanStreams is looking at the streams (see above), it calls AvFormatDecoder::OpenAVCodec, which opens the decoder using the ffmpeg function avcodec_open2.

AvFormatDecoder::ProcessVideoPacket passes data into the ffmpeg decoder and checks whether a decoded video frame is ready by calling the (deprecated) ffmpeg function avcodec_decode_video2. This is a multi-purpose call: you pass it some data, and if it now has enough data to complete a video frame, it returns that frame. If a frame was returned, ProcessVideoPacket does a number of timestamp checks and then calls ProcessVideoFrame, passing it the completed decoded frame.

AvFormatDecoder::ProcessVideoFrame does some processing of the decoded video frame then passes it on to MythPlayer::ReleaseNextVideoFrame.

MythPlayer::ReleaseNextVideoFrame passes the decoded video frame to the video output engine using VideoOutput::ReleaseFrame. VideoOutput places the frame in a queue in the vbuffers object of class VideoBuffers, so that it can be retrieved by the call to videoOutput->GetLastShownFrame in MythPlayer::DisplayNormalFrame in the main thread.
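The handoff between the decoder thread (ReleaseFrame side) and the main thread (GetLastShownFrame side) is essentially a producer-consumer queue. The sketch below is a minimal analogue: the real VideoBuffers recycles a fixed pool of frames through several state queues (available, used, displayed, and so on), while this version keeps a single unbounded queue for clarity.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal analogue of the vbuffers handoff: the decoder thread pushes
// decoded frames and the main (display) thread pops them.
template <typename Frame>
class FrameQueue {
public:
    void Release(Frame f) {  // decoder thread side (ReleaseFrame)
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(f));
        cv_.notify_one();
    }
    Frame GetNext() {  // main thread side (GetLastShownFrame)
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Frame f = std::move(q_.front());
        q_.pop();
        return f;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Frame> q_;
};
```

Because the two loops run in different threads, the mutex and condition variable are what let the decoder run ahead of the display without either side seeing a half-written frame.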

Video Output

Video Output uses the VideoOutput class (libmythtv/videooutbase.cpp) which is an abstract base class with several derived video output classes.

The VideoOutput object is created by MythPlayer calling the VideoOutput::Create static function, which looks at the video profile and the video capabilities of the system and creates an object of one of the subclasses. That object is stored in the variable videoOutput of MythPlayer.

In the main thread, MythPlayer::DisplayNormalFrame calls videoOutput->GetLastShownFrame, which is misnamed: the frame has not yet been shown, it is the frame about to be shown. DisplayNormalFrame does some processing on it and passes it to MythPlayer::AVSync, which has logic to ensure the frame is played at the right time and in sync with the audio. MythPlayer::AVSync then calls videoOutput->ProcessFrame to draw the OSD and apply filters. It then calls videoOutput->PrepareFrame and videoOutput->Show. The details of these methods depend on the video output type. PrepareFrame releases the frame from the vbuffers queue and then prepares the window as necessary for receiving the frame; in some cases nothing needs to be done. The videoOutput->Show method actually gets the frame to the display.
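The per-frame call order in the main thread can be condensed into a short trace. Nothing is reimplemented here; each entry just names a real step from the paragraph above, with its role in a comment.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Per-frame sequence of calls made from the main thread, in order.
inline std::vector<std::string> DisplayFrameSequence() {
    return {
        "GetLastShownFrame",  // fetch the next frame from vbuffers
        "AVSync",             // wait until the frame's display time
        "ProcessFrame",       // draw the OSD, apply filters
        "PrepareFrame",       // release from queue, ready the window
        "Show",               // put the frame on the display
    };
}
```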


The OSD may be drawn in various ways. Creating the OSD is done with the MythUI classes that create the rest of the GUI. The OSD class in libs/libmythtv/osd.cpp is responsible for creating the images that will form the OSD and handling the formatting of the various types of OSD that appear from time to time.

The OSD is created in the VideoOutput*::ProcessFrame or VideoOutput*::PrepareFrame method before each frame is displayed. The base class has a method VideoOutput::DisplayOSD, which merges the OSD into the image and is called if OSD mode softblend is chosen. Otherwise the various VideoOutput* classes prepare and display the OSD themselves. They call OSD::Draw or OSD::DrawDirect. OSD::Draw draws the OSD into an image which can then be displayed or merged into the frame. OSD::DrawDirect draws using a MythPainter, so it can draw directly to OpenGL or via another method.
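The softblend path merges OSD pixels into the video frame in software, which at its core is a per-pixel alpha blend. The function below is an illustrative sketch of that operation, not MythTV's actual blending code.

```cpp
#include <cassert>
#include <cstdint>

// Per-channel alpha blend, as a software "softblend" OSD merge would do:
// the OSD pixel is composited over the video pixel using the OSD alpha.
// Integer math only; alpha 255 = fully opaque OSD, 0 = fully transparent.
inline uint8_t BlendChannel(uint8_t video, uint8_t osd, uint8_t alpha) {
    return static_cast<uint8_t>((osd * alpha + video * (255 - alpha)) / 255);
}
```

Doing this on the CPU for every OSD pixel of every frame is why softblend is the fallback mode; the other VideoOutput* classes instead hand the OSD image to the display hardware (e.g. as an OpenGL texture) and let it do the compositing.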

To Be continued ...

Video Deinterlace

- How do we deinterlace frame(s)

Video Painters

- What does the video painter do?

Video Renderers

- What does the video renderer do?