[mythtv] [mythtv-commits] Ticket #1404: Invalid file error at
f-myth-users at media.mit.edu
Mon Feb 27 07:50:49 UTC 2006
Date: Mon, 27 Feb 2006 02:05:54 -0500 (EST)
From: "Chris Pinkham" <cpinkham at bc2va.org>
> If I follow you right, the slave backend got stuck waiting too long in
> SetNextRecording() due to network traffic: whether it was writing
> to the database or the NFS server, it was still going across the
> network, which was busy transferring the 24MB file. If transfer
Well, since the cron job that was running was a database backup, odds are
it was the recordedmarkup table being locked, so the recorder
couldn't write the rest of the seektable information out to the database.
> happens at any other time I suppose it may just cause some prebuffer
> pauses that it can recover from? I changed the time for the cron job
> to 4:40am so that it will not likely occur at the time of a file
> change again. I am going to leave the FE watching livetv all night to
> see what happens at 4:40am. Since I frequently copy 1GB files across
You might get (un)lucky and it won't occur; it could just be the timing
between when the program changed and when the cron job was backing up a
certain table (probably recordedmarkup, since it's the largest table in
the database), so it might not happen at 4:40am.
Seems to me that this would be easy enough to test deterministically.
Either wait for a program transition and then run a mysqldump across
it, or embed mysqldump in a loop that calls it repeatedly (either with
no delay, or with a few seconds in between to let the system breathe)
and see what glitches. You could even play around with nice-ing it
higher or lower than the default.
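A minimal sketch of that loop test (the function name, iteration count,
and delay are illustrative; "mythconverg" is MythTV's default database
name, so adjust the dump command for your setup):

```shell
#!/bin/sh
# Repeatedly run a dump command, with an optional pause between runs,
# so you can watch the frontend for glitches while it hammers the DB.
stress_dump() {
    cmd="$1"    # dump command to run each iteration
    runs="$2"   # how many iterations
    delay="$3"  # seconds to sleep between runs (0 = no breather)
    i=0
    while [ "$i" -lt "$runs" ]; do
        $cmd > /dev/null || return 1
        sleep "$delay"
        i=$((i + 1))
    done
}

# Example invocation (values are guesses, not recommendations):
#   stress_dump "mysqldump mythconverg" 20 5
# To try it at a different priority, nice the whole script:
#   nice -n 19 ./stress-dump.sh
```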
(I wonder if the behavior would differ based on whether mysqldump
wrote directly to disk, through gzip --best, or to /dev/null?
The first would thrash the disk heads hardest; the second might slow
things down just enough that the DB isn't locked solid (or might extend
the duration of a lock instead); and the third would load the DB but
involve no disk motion besides the DB itself. Or maybe bzip2 instead
of gzip, since it uses a -lot- more CPU to get that extra 5-10%...)
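The three variants could be sketched like this (the function and
variable names are mine; "mythconverg" is MythTV's default database
name, and the output path is arbitrary):

```shell
#!/bin/sh
# Three backup variants that differ in where the I/O and CPU load land.
# DUMP and OUT are overridable so the dump command can be substituted.
DUMP="${DUMP:-mysqldump mythconverg}"
OUT="${OUT:-/tmp/myth-backup}"

# 1. Straight to disk: heaviest seek load on the spindle.
backup_disk() { $DUMP > "$OUT.sql"; }

# 2. Through gzip: trades CPU for less disk I/O; may lengthen locks.
backup_gzip() { $DUMP | gzip --best > "$OUT.sql.gz"; }

# 3. To /dev/null: loads the DB only, nothing actually written.
backup_null() { $DUMP > /dev/null; }
```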
Another idea might be to just forcibly acquire a write lock on
recordedmarkup with, e.g., LOCK TABLE and see how long it takes
for the rest of Myth to explode. :)
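Something like the following would do it (the wrapper is mine; the SQL
is standard MySQL LOCK TABLES syntax, and "mythconverg" plus any
credentials your setup needs are assumptions):

```shell
#!/bin/sh
# Hold a WRITE lock on recordedmarkup for a given number of seconds,
# then release it; the lock is held for the duration of the SLEEP and
# dropped when the client session ends.
MYSQL="${MYSQL:-mysql mythconverg}"

hold_lock() {
    secs="$1"   # how long to hold the lock
    $MYSQL -e "LOCK TABLES recordedmarkup WRITE; SELECT SLEEP($secs); UNLOCK TABLES;"
}

# Example: hold_lock 60   # then watch the backend logs
```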
P.S. I -will- note that I did quite a bit of testing under 0.18.1
to see what sorts of loads would break things, and didn't see any
problems running mysqldump | gzip --best even when I had six SD tuners
writing to the same disk as the DB and the gzip output file. Nor have
I seen problems copying many GB over a 100baseT (not gigabit!) NIC
while similarly recording 6 SD streams, all on a typical Athlon 2800+.
I just can't stress the disk hard enough to cause a problem, unless I
try to delete many GB under ext3fs, which caused recording hiccups
because the FS was locked too long; after I ran that test, I switched
to JFS. Granted, all these tests were under 0.18 and have nothing to do
directly with the OP's problem or version, but they do show that at
least under those circumstances DB load didn't seem to be an issue.