[mythtv] [mythtv-commits] Ticket #2782: automatic simultaneous jobs scaling

osma ahvenlampi oa at iki.fi
Sat Dec 9 11:57:22 UTC 2006


On 12/9/06, Daniel Kristjansson <danielk at cuymedia.net> wrote:
> On Fri, 2006-12-08 at 23:57 +0200, osma ahvenlampi wrote:
> > mysql is most likely running at the same nice value (0) as the backend
> > writing the recordings. In this setting, it would most likely be the
> > best approach to run recordings at (unnice) -5, mysql at 0, playback
> > at (nice) +5 and transcodings and commflags at (nice) +10.
>
> For most recorders people use the nice value of the backend and
> mysql is the higher of the two since we need to write to both
> the db and the filesystem when writing a file. I made this

I'm not sure I understood this. Which is the less nice of the two? On
my backend, which also doubles as my primary frontend, mythbackend
and mysqld are both nice-0 processes. I haven't tuned that, since my
two DVB tuners plus (usually) no more than one simultaneous playback
have never caused an I/O problem.
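
For concreteness: the nice levels suggested above amount to one
setpriority(2) call per process at startup. A rough, untested sketch,
assuming each role (recorder, playback, jobs) runs as its own
process:

#include <sys/time.h>
#include <sys/resource.h>
#include <cstdio>

// Renice the calling process. Negative values (like the -5 suggested
// above for the recorder) need root, so a real patch would have to
// fall back gracefully when the backend runs as an ordinary user.
static bool SetOwnNice(int niceval)
{
    // who == 0 means "the calling process" for PRIO_PROCESS
    if (setpriority(PRIO_PROCESS, 0, niceval) == -1)
    {
        perror("setpriority");
        return false;
    }
    return true;
}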

> same mistake before the 0.19 release when trying to maximise
> the number of recorders you could use simultaneously with
> a single disk to three. Lowering the recorder's niceness below
> the mysql niceness has very little effect for MPEG recorders
> because we need to write the keyframe rows to the DB once in
> a while. (The NVP handles this much better, but fewer and fewer
> people still use framegrabbers with the low cost of PVR-x50 and
> digital cards these days.)

But if mysql's buffer flushes are delayed a bit by recording buffer
flushes, nothing is lost -- and mysql writes much less data. That's
why I suggested the recorder might benefit from being less nice. This
might also, btw, be a good reason to recommend (or even try to
create) InnoDB tables instead of MyISAM -- InnoDB is better at
writing through its buffers.

> We also don't want playback to run at greater niceness than
> recording. Playback has much higher real-time requirements.

Well - is the priority to ensure correctly recorded programs, or to
avoid playback glitches? A recording error will cause a playback
error anyway - again, that's why I felt it might be best if recording
were the least nice process.

> But the point of this patch is to make this scaling automatic.
> If it in fact is just tuned for one person's system it doesn't
> make much sense to apply it. If it can scale the number and
> run speed of commercial flagging and transcoding processes for
> multiple people then it becomes a very nice contribution.

I would venture to guess that people running their backends on Linux
2.6 kernels are in the overwhelming majority amongst the user base.
I'd be happy to try to work this out to support BSD, OSX and even
Windows if someone could point me to examples of how to read system
stats out of these systems and volunteer to test patches. Does the
backend even run on Windows?

> > Networking can at least in theory be solved with the same principles
> > using QoS settings on per-connection basis.
>
> If you are volunteering to make MythTV configure QoS settings
> for the 5-6 OS MythTV runs on you are a better man than I. :)

I did say "in theory" :)

> I think there must be a solution which doesn't require mucking
> this much with all the different operating systems on which
> MythTV runs.

The only thing this really requires is a way to read "how busy is the
CPU", and information on whether that busyness includes I/O wait or
not. Like I wrote in my previous message, under these circumstances
the patch will *reduce* competition for resources. Unfortunately,
getting even that much info is entirely OS-specific. I was thinking of
digging into libgtop to see how it's done on most systems. Which
would be preferable: linking against it, or just using it as an
example for a re-implementation?
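
For the Linux 2.6 case, at least, it's just a matter of parsing the
first line of /proc/stat and diffing two samples. A rough, untested
sketch of what I have in mind:

#include <fstream>
#include <string>

// Cumulative CPU tick counters from the first line of /proc/stat.
// On Linux 2.6 that line reads:
//   cpu  user nice system idle iowait irq softirq ...
struct CpuTicks
{
    unsigned long long user, nice, system, idle, iowait, irq, softirq;
};

static bool ReadCpuTicks(CpuTicks &t)
{
    std::ifstream f("/proc/stat");
    std::string label;
    f >> label >> t.user >> t.nice >> t.system >> t.idle
      >> t.iowait >> t.irq >> t.softirq;
    return f && label == "cpu";
}

// Busyness over an interval is the non-idle tick delta divided by
// the total tick delta. Whether iowait counts as busy is exactly the
// policy question: a box waiting on its recording disk looks idle to
// the CPU scheduler but is not a good place to start another job.
static double BusyFraction(const CpuTicks &a, const CpuTicks &b,
                           bool iowait_is_busy)
{
    unsigned long long busy = (b.user - a.user) + (b.nice - a.nice)
        + (b.system - a.system) + (b.irq - a.irq)
        + (b.softirq - a.softirq);
    unsigned long long idle = b.idle - a.idle;
    unsigned long long wait = b.iowait - a.iowait;
    if (iowait_is_busy)
        busy += wait;
    else
        idle += wait;
    return (busy + idle) ? double(busy) / double(busy + idle) : 0.0;
}

Sample it twice, some seconds apart, and hand both samples to
BusyFraction(); everything above except the file format is the
Linux-specific part that libgtop would otherwise hide.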

> > Although what's more
> > likely to be a problem is that "network reads", which in many cases
> > with Myth are in fact nfs/smb clients will cause disk access with the
> > priority level of the nfs/smb daemon (and see above regarding nice
> > levels..) -- to solve that either those daemons would have to be
> > reprioritised or myth processes would in fact need to work as file
> > servers (http on custom ports might do the trick, sort of like UPnP).
>
> Classic priority inversion? I think the only way around this
> without doing the scaling within MythTV would be to write a
> MythTV disk elevator algorithm for the various operating systems
> which gives transcoding and commercial flagging lower priority.

Yikes. Talk about system-specific. Since myth can already serve files
out to frontends without NFS, I'd guess it would be far easier to
extend that to the processing jobs as well than to start playing in
kernel I/O elevator land.

> > Or you could solve it the way Internet is usually solved -- brute
> > force and more capacity than is going to be needed for the job at hand
>
> But transcoding and commercial flagging are not real-time processes,
> we should be able to run them when recording/playback is not
> happening, or better yet use only the disk/cpu/network resources
> that are currently going unused.

That's exactly what I started out to do, until I heard that people
want to use resources that are NOT going unused (except sometimes
they don't), and that I must prove I'm not going to trash a recording
on a system that's already 99% committed. That's a couple of
objectives too many for my original, rather simple patch :)

> I would think that monitoring the buffer fill on recording and
> playback processes would be a good enough metric to control
> the throttling of transcode & commercial flagging processes.

Not a bad idea, that. How would one go about monitoring it? And the
buffers are worth what -- a few seconds of I/O? The jobqueue's
scheduling decisions are made over minutes, so it's not really in a
position to react to a (possibly very temporary) buffer-fill
scenario.
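
For what it's worth, if that fill figure were available, the
throttling itself could be as crude as stopping and continuing the
job process around water marks. A hypothetical sketch -- where the
fill value comes from is precisely the part I don't know how to
obtain, there is no such hook today:

#include <signal.h>
#include <sys/types.h>

// Pause or resume a jobqueue child with SIGSTOP/SIGCONT around
// high/low water marks on a buffer-fill reading between 0.0 and 1.0.
static void ThrottleJob(pid_t job_pid, double fill, bool &paused)
{
    const double kHighWater = 0.75; // pause the job above this fill
    const double kLowWater  = 0.25; // resume it below this fill

    if (!paused && fill > kHighWater && kill(job_pid, SIGSTOP) == 0)
        paused = true;
    else if (paused && fill < kLowWater && kill(job_pid, SIGCONT) == 0)
        paused = false;
}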

-- 
Osma Ahvenlampi   <oa at iki.fi>       http://www.fishpool.org

