[mythtv-users] Storing recordings on network share
raymond at wagnerrp.com
Thu Sep 22 12:01:39 UTC 2011
On 9/22/2011 03:11, Simon Hobson wrote:
> Raymond Wagner wrote:
>>> Would a single back end have enough HP to run Myth and be a NAS server
>>> ? What if there is a high demand for disk services on the NAS from
>>> time to time ?
>> If MythTV is writing to local disks, and those disks are not being
>> accessed separately from MythTV, then you can saturate the network and
>> MythTV won't care. Make sure your OS and database are also not on disks
>> that would see high demand from other applications.
>>> What if the Myth BE needs to serve 1080 content to 3 to 5 FEs, some of
>>> them requiring transcoding?
>> Serving content is trivial, as trivial as recording it in the first
>> place. All you're looking at is streaming some 14-18 Mbps over the
>> network. More importantly, if you briefly saturate the disk, who
>> cares? You get a bit of stuttering on the frontend. It's not like the
>> recording is damaged in any manner.
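
A quick back-of-the-envelope check makes the point concrete. Using the figures from this thread (14-18 Mbps per 1080 stream, 3 to 5 frontends), even the worst case is well under what gigabit Ethernet provides:

```python
# Rough aggregate-bandwidth estimate, using the numbers quoted in
# this thread. These are assumed figures, not measurements.
per_stream_mbps = 18     # upper end of the 14-18 Mbps range per recording
frontends = 5            # upper end of "3 to 5 FEs"

total_mbps = per_stream_mbps * frontends
print(total_mbps)        # 90 Mbps aggregate in the worst case

# Gigabit Ethernet carries ~1000 Mbps, so the network has ample headroom.
assert total_mbps < 1000
```

Transcoding changes the CPU picture, but the raw streaming load itself is modest.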
> Now, I believe the problem is in the way Myth syncs data it writes.
> In an old thread, someone posted a comment to the effect that Myth
> syncs write data very frequently - which in the general case is a
> good thing to do since it means you write "little and often" and
> avoid building up a big buffer and having the machine pause while it
> all gets written out.
> I've asked since but never got a reply - is this the case ?
> If it is, then I suspect the issue is that these frequent syncs mean
> that when disks get heavy I/O, instead of buffering the data, the
> process pauses waiting for the sync to return and incoming data gets
> lost. Thus your recording is now corrupted, and you get broken up and
> stuttering playback.
The ThreadedFileWriter has a one-second sync loop for each file. This
sync loop runs independently for each file, and only syncs that single
file. If you are writing a bunch of other data to disk independently,
those writes should be unaffected by MythTV's loop. However, you are
correct in that this does not prevent catastrophic issues in the disk
scheduler. ZFS loves its memory, and running on a system with far too
little of it, I've had problems where dealing with a heavily fragmented
file has stalled recording for several minutes on an independent disk.
On a properly spec'd system, it would not have been an issue.
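
The idea behind that per-file loop can be sketched roughly as follows. This is an illustrative Python sketch, not MythTV's actual C++ ThreadedFileWriter; the class name and interface here are invented for the example:

```python
import os
import threading

class SyncingWriter:
    """Sketch of a per-file periodic-sync writer (hypothetical class,
    loosely modeled on the behavior described for ThreadedFileWriter).
    Each file gets its own background thread that fsyncs roughly once
    per second, so dirty pages for that one file are flushed steadily
    instead of accumulating into a single large burst."""

    def __init__(self, path, interval=1.0):
        self._fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        self._interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._sync_loop, daemon=True)
        self._thread.start()

    def write(self, data):
        os.write(self._fd, data)

    def _sync_loop(self):
        # Independent per-file loop: only this file descriptor is
        # synced, so writers to other files are not blocked by it.
        while not self._stop.wait(self._interval):
            os.fsync(self._fd)

    def close(self):
        self._stop.set()
        self._thread.join()
        os.fsync(self._fd)   # final flush before closing
        os.close(self._fd)
```

Note that this only bounds how much dirty data a single recording can accumulate; as described above, it cannot protect you from the kernel's disk scheduler stalling all I/O when the system as a whole is starved.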