[mythtv-users] Linux wall wart pulls 5 watts - could make a great master backend
brad+myth at templetons.com
Wed Feb 25 23:00:16 UTC 2009
On Wed, Feb 25, 2009 at 12:57:19PM -0500, Michael T. Dean wrote:
> On 02/24/2009 08:28 PM, Brad Templeton wrote:
>> Tunerless slave backends work fine, I use them for transcode jobs etc.
> I still have to wonder what benefit that specific configuration
> provides... I can see how a tunerless master backend could be useful
> for power saving/running in a VM on an always-on machine, but for a
> remote backend, it becomes a mythjobqueue server--the only difference
> being that by running mythbackend instead of mythjobqueue, you're a)
> running an unsupported configuration that may fail at any time (even
> without any version updates or whatever) and b) you're wasting a lot of
> memory that would be /far/ more useful for the actual
> transcoding/commflagging jobs it's running...
> At least until the tunerless backend is supported, why not do it the
> right way and run mythjobqueue?
That's easy: mythjobqueue is barely documented. It's mentioned only
once in the wiki, and it wasn't talked about much at all when
I first started working on Myth (0.16) -- so I didn't know about it.
(Also, one of my frontends was once a backend, so I had already set it up
that way.)
What you say makes sense, though of course there is still a desire
for a tunerless master backend. The ideal power configuration is
to have a very low-power master server in your house that is always
on and handles the basics -- mythbackend, asterisk, DNS, etc. -- and to have
all other computers -- workstations, frontends, slaves -- on only while
you are using them. Fully reliable suspend would help a lot there,
but I have had very bad luck with that.
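(The wake-up side of that setup is usually done with Wake-on-LAN; the
original post doesn't mention it, but as an illustrative sketch -- the
MAC address below is made up -- waking a suspended frontend or slave
looks like this:)

```python
# Hypothetical sketch, not from the post: a Wake-on-LAN "magic packet"
# is six 0xFF bytes followed by the target's MAC address repeated 16
# times, sent as a UDP broadcast (commonly to port 9).
import socket

def magic_packet(mac: str) -> bytes:
    """Build the 102-byte magic packet for a MAC like '00:11:22:33:44:55'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The target machine's BIOS/NIC must have Wake-on-LAN enabled for this to
do anything, of course.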
Of course, setting all this up is not particularly user-friendly
yet: you have to create a web of NFS mounts, and if you enable
a slave/jobqueue machine to do transcodes, it must mount all the spools
or it will fail with an unexplained numeric error for any transcode that
happens to be assigned to it. Once this happens, the job seems to be
tied to that machine forever, i.e. the failure state does not prompt
the queue manager to try the job on a different system. I realize
that would be a bit messy to code, since you would have to remember
which systems had already returned a "can't access
file" error for the job, so as not to try them again, at least not right away.
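(None of this is MythTV code -- just a sketch of the bookkeeping that
fix would need: remember, per job, which hosts failed with a
file-access error, and hand the job to a host that hasn't:)

```python
# Illustrative sketch only, not MythTV's actual job queue. Hosts that
# return a file-access error for a job are blacklisted for that job;
# the queue then offers the job to the next host that hasn't failed it.
FILE_ACCESS_ERROR = "cant_access_file"  # made-up error code

class JobQueue:
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.failed_on = {}  # job_id -> set of hosts that couldn't see the file

    def next_host(self, job_id):
        """Return a host that hasn't failed this job, or None if all have."""
        tried = self.failed_on.get(job_id, set())
        for host in self.hosts:
            if host not in tried:
                return host
        return None  # every host failed; give up rather than retry forever

    def report_failure(self, job_id, host, error):
        # Only blacklist the host for file-access errors; other failures
        # (e.g. a transcoder crash) might still deserve a retry there.
        if error == FILE_ACCESS_ERROR:
            self.failed_on.setdefault(job_id, set()).add(host)
```

A real implementation would presumably also age entries out of
`failed_on`, so a host gets retried once its mounts are fixed.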
The other alternative would be to allow jobs to run without NFS,
which would require the file-owning backend to accept a second connection
for writing files out, and then to replace the old files with the newly
written ones, etc. (A protocol for reading via the backend
is already present.)