[mythtv-users] File System Recommendation?

f-myth-users at media.mit.edu
Mon Apr 3 04:10:09 UTC 2006


    Date: Mon, 3 Apr 2006 13:01:21 +1000
    From: "Richard Dale" <richard at interlink.com.au>

    > I'm using ext3 on a RAID-0 system. Who the heck cares if it takes a  
    > second or two to delete a 1-hour show ??

    Here's an interesting solution - the recording could be flagged for deletion
    and then deleted in a background thread.  So, even if it takes a while it's
    the background thread that has to wait on I/O, not the foreground user
    interface.

The problem isn't making the -user- wait for the disk, it's making
-Myth- wait for the disk.

While ext3fs is deleting a file, all other writers block.  If you're
currently writing a stream (or several) to the same filesystem, -they-
will block.  Eventually your buffers will overflow, and you'll lose
data, and your recordings in progress will have glitches in them.

I verified this experimentally with ext3fs on an ordinary disk (no
LVM or RAID), using all SD NTSC feeds from ivtv.  Deleting a one-hour
recording was typically okay, even if multiple streams were writing,
but deleting (say) 10G (e.g., a 5hr recording) was not.  I don't
recall exactly where the breakpoint was (I could look it up in my
notes), but it also varies based on how many other streams you're
writing, and whether the database is doing anything, etc.  Of course,
the breakpoint might be different on your machine, and if something
decides to delete too many 1-hour recordings at the same time (such as
expiration), you'll be in the many-GB case and lose in the same way.

This is simple to reproduce---just use dd or something to create files
of arbitrary sizes, and time their deletion.  That'll give you an idea
of the work the filesystem is going through.  Then (if you want) try
recording n streams to that filesystem while doing a deletion.  At
some point on ext3fs the deletion will stall the writers for longer
than your RAM buffers can absorb, and you'll lose data.
(In JFS, the deletions are in the tens of milliseconds or less no matter
how large the file is, so that won't happen there.  Ditto XFS, I believe.)
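If it helps, here's roughly what I mean, sketched in Python instead of
dd (the path and sizes are just placeholders---point it at the
filesystem you actually record to, and use multi-GB sizes there):

```python
import os
import time

def time_deletion(path, size_mb):
    """Create a file of size_mb MiB, fsync it, then time its unlink."""
    chunk = b"\0" * (1 << 20)      # 1 MiB of zeros
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())       # make sure the data actually hit the disk
    start = time.time()
    os.remove(path)                # ext3 frees every block here, synchronously
    return time.time() - start

# Time unlinks of increasingly large files on the filesystem under test;
# on ext3 the time grows with file size, on JFS/XFS it stays tiny.
for mb in (1, 16, 64):
    print(mb, "MiB unlink:", time_deletion("/tmp/unlink-test", mb), "s")
```

Run it with and without simultaneous recordings and you'll see where
your particular machine's threshold is.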

If you queued all deletions until no writers were writing---or were
scheduled to write in the next 5-10 seconds---this would be fine, but
you might
wind up waiting hours for a deletion to happen.  You'd have to be
pretty sure you wouldn't run out of space in the meantime.
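Such a deferred-deletion queue might look something like this (purely
a sketch of the idea; the class and method names are made up, and a
real implementation would need to track writers properly and bail out
if free space got low):

```python
import os
import queue
import threading
import time

class QueuedDeleter:
    """Sketch: defer unlinks until no writer has touched the
    filesystem for idle_secs, so deletion I/O can't stall recordings.
    Caveat from above: a deletion could be deferred for hours, so you
    must be sure you won't run out of space in the meantime."""

    def __init__(self, idle_secs=10.0):
        self.idle_secs = idle_secs
        self.last_write = 0.0                 # timestamp of last recorder write
        self.pending = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def note_write(self):
        """Call whenever a recorder writes to this filesystem."""
        self.last_write = time.time()

    def delete(self, path):
        """Queue a file for deletion instead of unlinking it now."""
        self.pending.put(path)

    def _worker(self):
        while True:
            path = self.pending.get()
            # Wait until the writers have been quiet long enough.
            while time.time() - self.last_write < self.idle_secs:
                time.sleep(1.0)
            try:
                os.remove(path)
            except FileNotFoundError:
                pass                          # already gone; nothing to do
```

Of course this only moves the stall around rather than removing it,
which is why I switched filesystems instead.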

So after running that test, I switched to JFS and haven't had a
problem.  My frontend's JFS has experienced a lot of unclean shutdowns
due to frontend freezes, but the filesystem seems fine; I'm assuming
that the repair done while mounting it is sufficient---if that's not
true and I need to run fsck.jfs manually, someone please let me know,
although I haven't noticed any problems.  (After all, journalled
filesystems have dirty bits and replay their logs on remounting; it's
hard to believe that any unclean shutdown should require a full fsck
of the filesystem unless there's a bug in JFS.  Is there?)  Since I
use the JFS on the frontend as a temporary staging area while writing
stuff to DVD, it gets plenty of exercise, and as far as I can see, it
handles it well.

The backend's JFS typically doesn't get shut down uncleanly because
the backend typically doesn't hang.

I use ext3fs everywhere else except in Myth media partitions, because
I like being able to shrink (as well as grow) the filesystem, and
because it is extremely resilient to damage and likely to be
repairable in the face of damage, due to its design; most other
filesystems can't survive certain kinds of damage nearly as well.
(And I won't go anywhere near ReiserFS, because its semantics are
inherently broken in the face of machine resets or kernel crashes,
leading to good metadata but the contents of open files swapped among
each other---no thanks.)

