Talk:MythGrid

Can you even run your own BOINC server? [[User:Wagnerrp|wagnerrp]] 15:46, 14 May 2012 (UTC)

: Yes, although I haven't finished the configuration yet; see http://boinc.berkeley.edu/trac/wiki/ServerIntro

:: Link is bad.

::: It looks like BOINC's site is down, so try it later. It was working when I posted it.

:: Why not use one of the existing batch schedulers designed for local use, like [http://www.mcs.anl.gov/research/projects/openpbs/ OpenPBS], or its offspring [http://www.adaptivecomputing.com/products/open-source/torque/ TORQUE], or even just Myth's internal [[Job Queue]]? [[User:Wagnerrp|wagnerrp]] 03:29, 16 May 2012 (UTC)

::: I don't know anything about OpenPBS or TORQUE, and I did consider OpenStack, but part of this is about promoting the use of BOINC, to claim some unused cycles from all of those idle frontends for other projects. You also get the use of machines you already have that are not MythTV boxes when you need the extra processing. Can MythTV's job queue facilitate distributed processing?

:::: OpenPBS and TORQUE are common job schedulers in the high-performance computing arena. They're designed to be used on private networks, although they're generally used for clustered jobs communicating through MPI/LAM/PVM, not the "ridiculously parallel" task you are describing. However, all of this is superfluous, as the task you are describing can be performed by the internal jobqueue. Schedule one job that performs the initial file splitting and fires off N more jobs to individually transcode each piece. Add one final task to stitch them together, but place it in stopped status. Have each individual transcode job update it to queued status, conditional on whether all the transcode tasks are marked complete. The current jobqueue is distributed, but a "free for all" rather than scheduled. Each jobqueue instance runs twice a minute and checks the database to see if there are any jobs available that it is allowed to run, at which point it grabs the first in the list and runs it. Each backend runs one jobqueue instance, and additional machines can be added with the mythjobqueue application.
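To make the dependency chain above concrete, here is a minimal, self-contained Python sketch of the logic: a split job queues N transcode jobs plus a stitch job held in a stopped state, and each transcode, when it completes, flips the stitch job to queued only once every sibling transcode has finished. Everything here is illustrative: the dictionary, the status strings, the helper names (queue_job, split_job, finish_transcode) and the example filename are stand-ins, not MythTV's actual jobqueue schema, status codes, or Python bindings.

<pre>
#!/usr/bin/env python
"""Illustrative sketch only: models the split -> N transcodes -> stitch
workflow with an in-memory stand-in for MythTV's jobqueue table.  A real
implementation would insert and update rows in the jobqueue table (or use
the MythTV Python bindings); the names and status values below are
placeholders, not MythTV's actual schema or status codes."""

import itertools

# Placeholder status values; the real JobQueue uses its own numeric codes.
QUEUED, STOPPED, RUNNING, FINISHED = "queued", "stopped", "running", "finished"

_next_id = itertools.count(1)
jobqueue = {}   # jobid -> {"cmd": ..., "status": ..., "group": ...}

def queue_job(cmd, status=QUEUED, group=None):
    """Stand-in for inserting one row into the jobqueue table."""
    jobid = next(_next_id)
    jobqueue[jobid] = {"cmd": cmd, "status": status, "group": group}
    return jobid

def split_job(recording, pieces):
    """Job 1: split the recording, then fire off N transcode jobs plus a
    stitch job that starts out stopped so no jobqueue instance picks it up."""
    stitch = queue_job("stitch %s" % recording, status=STOPPED)
    for n in range(pieces):
        queue_job("transcode %s part %d" % (recording, n),
                  group=stitch)   # remember which stitch job this piece gates
    return stitch

def finish_transcode(jobid):
    """Called when one transcode completes: mark it finished, and if every
    transcode gating the same stitch job is finished, flip the stitch job
    from stopped to queued so the next free jobqueue instance runs it."""
    job = jobqueue[jobid]
    job["status"] = FINISHED
    stitch = job["group"]
    siblings = [j for j in jobqueue.values() if j["group"] == stitch]
    if all(j["status"] == FINISHED for j in siblings):
        jobqueue[stitch]["status"] = QUEUED

if __name__ == "__main__":
    # Example filename for illustration only.
    stitch = split_job("1001_20120516033000.mpg", pieces=3)
    for jobid, job in list(jobqueue.items()):
        if job["cmd"].startswith("transcode"):
            finish_transcode(jobid)      # pretend each piece has finished
    print(jobqueue[stitch]["status"])    # -> queued
</pre>

In a real setup the conditional update in finish_transcode would be the last step of each transcode job's script, so whichever piece finishes last is the one that releases the stitch job, and the jobqueue instances then pick it up on their next poll, exactly as described above.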