[mythtv] Alternative to LVM/Raid for multiple disks?
ipso at snappymail.ca
Wed Sep 8 12:03:37 EDT 2004
Will the JobQueue have triggers for Auto-Expire and/or Disk Full, so
you can run scripts when either of those conditions is met?
Disk Full (or, more specifically, "disk has X GB free, start auto-
expire") would be the ideal time for the archive script to run.
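A trigger like that is easy to sketch. The following is a hypothetical Python check, not anything in MythTV; the path and threshold are made-up examples:

```python
import shutil

def needs_archive(path, min_free_gb):
    """Return True when 'path' has less than min_free_gb gigabytes free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes < min_free_gb * 1024 ** 3
```

A cron job or a JobQueue hook could call something like `needs_archive("/var/video", 10)` and kick off the archive script when it returns True.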
On Wed, 2004-09-08 at 10:54 -0400, Chris Pinkham wrote:
> > >why not set up a samba share or an nfs share on all of the machines and
> > >then use a perl script to check available space and as long as no
> > >recording is happening use symlink to point to available nfs/samba mount
> > >point. ie. if you record to /home/video make that a symlink for the
> > >nfs/samba partitions that are mounted and voila you have the space you
> > >are needing, nothing on the backend need change.
> > >
> > How would you playback a video on mount point 1 if the perl script has
> > changed the symlink that the backend uses to mount point 2?
> After 0.16 is released, I'll be committing code to CVS which will add what
> I'm calling a JobQueue to Myth. The Job Queue is responsible for running
> "jobs" for recordings after they finish recording. Currently the only
> job implemented is Commercial Flagging, but I'd like to port the
> transcoder over to using this common Queue as well. I also have support
> for User Jobs as well. A User Job is just a script or program that is
> setup to run after a recording finishes. So for instance, you could
> automatically run nuvexport on all your CSI episodes.
> One script that I've been thinking of creating is a simple perl "archive"
> script. The script would have an array of directory names and free
> space limits declared at the top something like this:
> # key is directory name, value is free space to keep in Gigs
> my %ArchiveDirs = (
>     "/usr3/video/mythtv/recordings" => 10,
>     "/usr4/video/mythtv/recordings" => 5,
>     "/usr5/video/mythtv/recordings" => 15,
> );
> The script would take the filename of a MythTV recording as an argument.
> The script would get the filesize of the recorded file and then cycle through
> each ArchiveDir key and find the directory with the most free space that also had
> enough room to copy the original file. The file would be copied to the
> Archive directory and then a link would be created to link the archived file
> back into the original file location. This would all be transparent to Myth.
> I have a much simpler script than this running nightly at home right now and
> Myth doesn't know the difference. One caveat is that Myth will not delete
> the actual archived file, it will only delete the link. I've considered
> making an option for Myth to follow links when deleting recordings which
> would solve this issue. I have a script that runs nightly and links in all
> my production recordings into my development recording directory so I
> have access to recordings to test with. So I wouldn't want Myth to follow
> links always.
> If anyone wants to take a shot at developing the above archive script, feel
> free. Currently only Commercial Flagging is run automatically through
> the Job Queue; User Jobs work too, but must be inserted into the queue
> manually. I need to figure out how I'm going to
> handle setting up the configuration of user jobs and how the user can
> specify what jobs to run on what recordings. Eventually this should allow
> you to turn on/off commercial flagging, transcoding, and user jobs on a
> per scheduled recording basis.
> mythtv-dev mailing list
> mythtv-dev at mythtv.org
Mike Benoit <ipso at snappymail.ca>