[mythtv] bttv recordings run fast
bjm at lvcm.com
Sat Mar 6 15:18:39 EST 2004
Andrew Mahone wrote:
> On Sat, 06 Mar 2004 01:28:34 -0800, "Bruce Markey" <bjm at lvcm.com> said:
>>The tendency is to think that if the picture looks 'bad' then higher
>>resolution or compression parameters will make it better. However, none
>>of these can correct poor color reproduction which is the biggest
>>reason for the picture not looking good.
> Tuning encode parameters will help *iff* the badness is due to
> compression artefacts. If it's anything else, you're wasting your time
> and clock cycles ;)
Oh, of course. I agree and I didn't mean to imply otherwise.
What I did mean is that no one ever posts that they see
blockiness around motion and wonder if they should change the
hue and contrast ;). However, there are always messages that
assume that if the picture looks dingy they must need to record
at higher resolution so they can get a better picture.
> Also, it's my experience that HQ and 4MV aren't the best knobs to fiddle
> with on ffmpeg, in terms of the CPU cost for the quality/bitrate gain.
Andrew, help me understand if I have this wrong but it seems
to me that these help reduce artifacts when the bitrate is
relatively low. If the bitrate is high and thus very little
artifacting, there is little to be gained by wasting cycles on
this extra effort. Therefore, I use these on my Low Quality
settings where there is plenty of CPU time available and I see
a significant improvement. I think many people use these for
their highest quality settings assuming that they want the 'best'
picture possible, but it doesn't really do any good and just
wastes cycles.
> Some experimentation with mplayer seems to indicate that the adaptive
> quantization controls will improve quality more with less CPU cost. This
> was discussed a while ago on this list, but never got anywhere... It's
> been on my back burner pretty much since then, I just keep getting
> distracted by other things (math is fun for me, DBs and figuring out
> other people's code are not).
> Getting a postproc filter that works right, and handling qscale values
> correctly, will also go a long way toward killing codec artefacts. Since
> its mention here, I've played with mplayer's newer spp filter, and it
> provides fairly nice results. However, the code is quite painful to
> read. I could probably port it, but I'd need to actually understand
> *what* it does to optimize it at all.
I can't tell you how much I appreciate your work on filters.
>>For me, this makes the colors in all ranges almost identical to the
>>cable signal fed directly to the TV. This makes live video look much
>>more like video with less of that 'old film' look. Once the colors are
>>right the picture should still look good even at lower resolutions.
> Oddly, mine seems fine with default setting, and it may be that there are
> some variations
I'm curious what differences, if any, you see when using these
modified settings. I've tinkered with the v4l tuner settings,
XV controls and TV controls for over a year trying to get the
image to look more like the original cable TV image. I have cable
and Linux (tvtime/xawtv/myth) on different inputs to my main TV.
This is an older 70" Mitsubishi that has separate picture controls
for each input (this is a Good Thing ;-). I experiment and flip
between the inputs to compare the image on the same station at
the same time.
What I've always seen with any bttv card (AverTV, ATI and two
different models of Hauppauge cards) is this wash-out effect.
Even with the saturation driven to the point of blooming on dark
colors, bright colors have less saturation than they should. As
I'd mentioned before, dark red can look thick but then bright
red looks pinkish as if extra white is mixed in. Outdoor scenes
look as if the air is hazy rather than clear as if everything
bright and colored is closer to white than it should be.
Clearly there is a problem with the luma range, which the adjust
filter fixes by default: anything near white becomes fully
white and there would be a loss of detail. But even with the
adjust default correction this lack of saturation for bright
colors persists. I've known for a long time that there is less
wash-out at lower contrast/brightness when the highest total
luma doesn't go as high. However, recording like this would mean
that the XV and/or TV setting would have to be distorted so far
to compensate that the picture again wouldn't look good.
The trick here is to set the tuner's contrast low enough to avoid
the higher luma values then immediately use the adjust filter
to stretch the range so that the TV will get a bright enough
picture. Narrowing the input chroma range has the effect of
stretching the output range so that brighter colors are more dense. ESPN,
for example, uses this trick on the Sunday SportsCenter "Ultimate
Highlight". This can make images look somehow more vivid than
reality. The colors will eventually look unrealistic if pushed too far.
The net effect of these changes:
update channel set contrast=21000,brightness=32768,colour=32768,hue=32768;
update channel set videofilters='adjust=16:214:1.0:22:233:1.0';
for me is that the image looks much more vivid and more like the
direct video feed. This is true for a big tube, a really big back
projection screen, and computer monitors with every brand of bttv
card I can get my hands on. This also gives me a better picture
from bttv cards than I can get from a PVR-250 where I cannot make
these kinds of recording adjustments.
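To illustrate the stretch, here is a rough Python sketch, not the
actual adjust filter code; the parameter order follows the
adjust=min:max:gamma syntax above, and the 16-235 output range is my
assumption:

```python
def stretch(value, in_lo, in_hi, gamma, out_lo=16, out_hi=235):
    # Hypothetical sketch of an adjust-style range stretch:
    # clamp to the expected input range, normalize, apply gamma,
    # then remap to the (wider) output range.
    v = min(max(value, in_lo), in_hi)
    norm = (v - in_lo) / float(in_hi - in_lo)
    return int(round(out_lo + (norm ** (1.0 / gamma)) * (out_hi - out_lo)))

# With the luma settings above (16:214:1.0), an input of 214 is
# stretched up to the top of the output range, so the TV still
# gets a bright picture despite the lowered tuner contrast.
bright = stretch(214, 16, 214, 1.0)
dark = stretch(16, 16, 214, 1.0)
```

The same remapping applied to the chroma plane (22:233:1.0 above) is
what makes the brighter colors more dense.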
>>The player expects to have 29.97 frames per second and will slew to
>>bring the audio and video into sync. However, if there are a lot of
>>dropped frames so that there are only about 20 per second while the
>>player is trying to play 30, there will be a constant tug'o'war going
>>on while it tries to line up the audio/video timing.
> Hm, I thought that MythTV was timestamping frames so that it would know
> "when" to play them back. Even with that, dropping a random 10 of every
> 30 frames will result in video that looks quite poor.
It is timestamping frames but after a frame is drawn, the video
output loop assumes the next frame will be 1/30sec later. It
checks the audio and video times to decide if it needs to adjust
that 1/30sec interval until it draws the video frame. It will
smooth out one or two missing frames but if there are significantly
fewer than 30 frames each second it will go bonkers trying to
keep the audio and video in sync.
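A sketch of that adjustment logic, in illustrative Python rather than
MythTV's actual code; the function name and the 2 msec slew cap are
my own assumptions:

```python
FRAME_INTERVAL = 1.0 / 29.97  # nominal NTSC frame time, ~33.4 msec

def next_frame_delay(av_offset, max_slew=0.002):
    # av_offset > 0 means video is ahead of audio, so wait a bit
    # longer before drawing the next frame.  Because the slew is
    # bounded, one or two missing frames are smoothed out gradually,
    # but a sustained shortfall keeps the loop fighting the clock.
    slew = max(-max_slew, min(max_slew, av_offset))
    return FRAME_INTERVAL + slew
```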
The timestamps also get funky when the CPU is pegged. There is
a thread that grabs the frames from the card and marks the
timecode based on the current system time. When the system isn't
busy frames are about 33msec apart. If the CPU is pegged the
grabber thread may not get to grab the frame until later and
the current system time no longer reflects the spacing. One
frame may be marked 100msec after the previous frame followed
by intervals like 4msec and 12msec etc.
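A toy simulation of that effect (purely illustrative Python, not the
grabber thread itself):

```python
def grab_intervals(n, latency):
    """Simulate a grabber thread stamping each frame with the current
    system time.  latency(i) models the scheduling delay before the
    thread gets the CPU for frame i (zero on an idle box)."""
    frame_time = 1.0 / 29.97
    # The card delivers frames on a fixed cadence; the timestamp is
    # taken only when the thread finally runs.
    stamps = [(i + 1) * frame_time + latency(i) for i in range(n)]
    return [b - a for a, b in zip(stamps, stamps[1:])]

# Idle system: every interval is ~33 msec.  Pegged CPU: one frame
# stamped ~70 msec late yields an interval near 103 msec followed by
# a few-msec interval, much like the pattern described above.
busy = grab_intervals(6, lambda i: {3: 0.07, 4: 0.04, 5: 0.02}.get(i, 0.0))
```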
This is one of the reasons that live TV is more jittery than
recorded playback. To manage the ringbuffer, the frontend has
to get its data from a socket to a backend thread. With the
encoding, decoding and playback plus all the interprocess comm.
threads going on at the same time, the grabber is less likely
to get the CPU in time to mark the timecode correctly and the
video output loop is less likely to get the CPU when it is time
to draw a new frame. This all adds up to jitter.
> I also noticed a bit of an oddity with the adjust filter's speed. When I
> time the MMX adjust filter, I sometimes get 1110fps, and sometimes
> 2300fps. In other words, I can be watching mythbackend record, and every
> ten seconds it'll report adjust getting somewhere around 1100fps. I can
> then kill it, and restart, and it will consistently report 2300fps. Each
> restart, I'll get one speed or the other, seemingly at random. Maybe
> sometimes, MythTV's threads interact such that the filter consistently
> gets interrupted mid-frame?
Maybe you could test with it spewing out time stamps at key
points to see where the differences are.
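For example, a quick hypothetical helper (nothing MythTV-specific)
for spewing timestamps at key points:

```python
import time

class Checkpoint:
    """Print elapsed time at key points so two runs of the same
    code path can be compared side by side."""
    def __init__(self):
        self.start = self.last = time.time()

    def mark(self, label):
        now = time.time()
        print("%-24s +%8.3f ms  (total %8.3f ms)"
              % (label, (now - self.last) * 1e3, (now - self.start) * 1e3))
        self.last = now
```

Sprinkling mark() calls around the suspect sections of the filter
loop should show where the 1100fps and 2300fps runs diverge.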
Last week I was benchmarking the new scheduler (which is
quantifiably oodles faster BTW =). Typical schedules would run
in one or two kernel scheduler time slices and the timestamps
weren't significant enough to make valid comparisons. Therefore
I had to do some terribly complex schedules adding hundreds of
shows to force it to take a half second to a second in order to
get numbers that were worth comparing.