<div class="gmail_quote">On Wed, Mar 14, 2012 at 7:31 AM, Russell Gower <span dir="ltr"><<a href="mailto:mythtv@thegowers.me.uk">mythtv@thegowers.me.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">>>><br>
>> I've taken this to the user list as it doesn't appear to be a development issue at this stage.<br>
>><br>
>> After my last post I started digging into the OpenFile() 4014ms issue. As Jens said, it appears that the backend is having storage issues, so I'm going to set up a separate combined BE/FE test system for further analysis.<br>
>><br>
>> At this stage I believe the "Waited 0.2 seconds for data" messages are due to the frontend reading too close to the end of the file; if playback is paused for a few seconds they go away.<br>
><br>
> May as well forward the request over there then...<br>
><br>
> It looks like roughly 7.1 of that 7.6 seconds is sitting there waiting<br>
> for the data to get written to your filesystem. If the data isn't<br>
> available for the frontend to read it, then nothing you change in the<br>
> frontend is going to make the slightest bit of difference. We need to<br>
> see backend logs to see where the actual stall is.<br>
><br>
> We also need to know more about how your system is actually configured.<br>
> Looking through the mailing list archives, you appear to be running<br>
> independent frontends, which means if you are reading off the<br>
> filesystem, you are reading over NFS. Is it possible your problem is<br>
> caused by nothing more than NFS caching issues that would be resolved by<br>
> merely not mounting those disks and letting MythTV stream its content<br>
> internally?<br>
><br>
<br>
</div>I only recently switched to NFS because the backend seems to stop supplying data over the internal protocol while the scheduler is running. Even before the switch I had the program transition glitches, as well as additional glitches whenever the scheduler ran.<br>
<br>
I no longer have the backend log, but from memory it was the threaded file writer waiting for a flush to finish.<br>
<br>
My current configuration is rather complicated (a two-node cluster with DRBD shared storage, etc.), so I'm in the process of setting up a dedicated system (with a conventional separate OS/media disk layout) to do further analysis. I did have this problem, to a lesser extent, on my previous system, which was a combined FE/BE with dedicated RAID 1 disks for recordings, but that was running 0.23.<br>
</blockquote><div><br></div><div>The best thing I did for my system was put the database and /var logging on a different spindle than recordings. I have had zero disk issues since doing this. The amount of difference this can make is significant.</div>
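<div><br></div><div>If you want to check whether that's already the case, something like the sketch below shows which block device backs each path. The paths are common defaults and just my guess at a typical layout, so adjust them to your install:</div>

```shell
#!/bin/sh
# Sketch: print which block device backs the database, the logs, and the
# recordings, so you can confirm they sit on separate spindles.
# The three paths are common defaults (assumptions) -- adjust to your setup.
for p in /var/lib/mysql /var/log /var/lib/mythtv/recordings; do
    if [ -e "$p" ]; then
        # NR==2 skips df's header line; $1 is the device, $6 the mount point
        df -P "$p" | awk 'NR==2 {print $1, "backs", $6}'
    fi
done
```

<div>If two of those lines report the same device, they share a spindle.</div>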
<div><br></div><div>Kevin</div></div>