[mythtv] Anyone working on direct MPEG2 encoding?
ke-aa at frisurf.no
Thu Jan 1 18:28:56 EST 2004
Geoffrey Hausheer wrote:
> On Thu, 1 Jan 2004 22:24:53 +0100, "Kenneth Aafløy" said:
> > Chris Petersen wrote:
> > > Chris Osgood wrote:
> > > > I'd like to encode directly to MPEG2 off my bttv card so
> > > > that I can use the PVR350's hardware decoder for output.
> > > > Now that ffmpeg supports mpeg2 encoding are there any plans
> > > > to add that as a direct encoding target?
> > I'm pretty sure nobody is working on this, and I would
> > really like to see it as a feature, provided that what
> > you refer to as mpeg2 encoding would (for the most part)
> > become libavcodec-supported encoding.
> > This would also have the side effect (if I'm not
> > mistaken) of adding (with a little effort) support for
> > this to mythtranscode.
> First off, the implementation won't be trivial, but it
> won't be too difficult either. Making an mpeg2-compliant
> stream isn't hard using libavformat; libavcodec can do
> the encoding.
This would actually be trivial, as long as you familiarize yourself with
the API. Once you get the hang of it, it all goes like a dream. There are
a couple of API examples in the ffmpeg CVS repository (I don't know if
they're in mythtv CVS), and there are also the core ff* programs to use
as a reference.
> However, I can't imagine this working in real time.
> I think using mpeg2enc would be a bad idea. libav* is
> already well integrated in mythtv, and the ffmpeg guys
> have been doing a lot of work on mpeg2 lately.
Using external encoders for encoding live content is not a very good
idea, since it involves moving a lot of data around. What external
transcoders (dec/encoders) are ideal for is use in combination with the
mythtranscode stuff, since those jobs can use as many resources as they
want.
> Once it is there, it would allow rtjpeg/mpeg4->mpeg2
> transcoding, but it wouldn't be helpful for the
> mpeg2->mpeg2 commercial-cut stuff (which is mostly
> working now anyway). It may help with getting frame-exact
> cuts though (though I plan to do this in a different way
> using gop-exact cuts)
But mpeg2 -> mpeg2 is really for those who have a hardware encoding
capture card (or a dvb or hdtv card), as an alternative to doing an
mpeg2 -> mpeg4/rtjpeg transcode first in order to be able to do
commercial cutting.
On the subject of frame-exact cuts, it might help if the framework for
(re)encoding were flexible enough. Assuming a cut ending, you would set
up a decoder, decode the frames from the last keyframe up to the frame
you are interested in, then set up an encoder (with the same format as
the existing stream), encode the last frame(s) output by the decoder,
and append those to the mpeg stream (doing this in slight reverse for
the beginning of a cut).
> Anyhow, I'm not aware of anyone working on it, though last
> time I proposed to attempt it (knowing a lot less about
> MPEG2 than I do today), there wasn't all that much interest.
When just considering mpeg2 encoding from a v4l(2) capture card, the
benefit is slim. But if, as previously noted, the API layer (or whatever
you call it) could be made flexible enough to do more than that...
> Note that the quality won't be great, since real-time
> encoding is 1-pass (I believe the 1st pass is used to
> determine how to allocate the bandwidth, whereas the
> second does the encoding...and that doesn't work well
> in real-time). In general incorporating a two-pass
> encoder won't be easy. Better to just use one of the
> currently available tools for the job
> (nuvexport comes to mind)
This is of course dependent upon the hardware it is used on, and on the
future development of mpeg2 encoding in ffmpeg. The bitrate and quality
parameters should also have a hand in the equation. You seem to take it
as fact that 1-pass encoding is weak, but I must object to that opinion
a bit. The objective of multipass encoding is to find the best trade-off
between spending bits and quality degradation. Mostly this spending
equation operates on how much data the resulting picture will consume
(i.e. much motion equals large chunks), so what multipass encoding does
(basically) is try to balance the large data chunks against the smaller
ones, within a predefined set of boundaries. So the result is heavily
dependent upon the source material.