User Manual:Periodic Maintenance

From MythTV Official Wiki
Revision as of 16:40, 23 August 2010 by Cncook001 (Talk | contribs)


This page is up to date for MythTV version 0.20; the current release is 0.27

Maintaining your MythTV

Your MythTV system can continue to run for weeks, months or even years. Of course, hardware fails, techies fiddle, and power-outages, tornadoes, and alien takeovers happen. But if you are interested in keeping your MythTV alive for as long as possible, here are a few suggestions.


Your system is only as good as your backup, so back up now. You may be confident that you can rebuild quickly, but two parts of MythTV aren't easily replaceable: the database and your media content (TV recordings, videos, photos, music, etc.)

The database

You'll miss the database only when it's gone. The database keeps a history of every show you've recorded, which is very nice when you don't want to re-record the same show over again. It keeps all the settings of your backends and frontends, it keeps metadata of all your videos (titles, director, parental levels, etc.), it keeps all the commercial flagging data, bookmarks, and more.

You can create a backup of your database with the backup script. Before doing so, you must configure the backup script by specifying the directory to use for backups:

$ echo "DBBackupDirectory=/home/mythtv" > ~/.mythtv/backuprc

Then, from that point on, you can create a backup by simply running the backup script:

$ mythconverg_backup.pl
By default, the backup script will compress and rotate backups, keeping the last 5 backups. More information and other usage scenarios are shown on the Database Backup and Restore page.

If you ever need to restore the backup see the Database Backup and Restore page.

Backing up to a remote server

Using rsync along with a trusted ssh connection is a way to move backup files from the current machine (your MythTV server) to an alternate machine you may have set up for backup. rsync supports many protocols, including rsh and ssh. This example demonstrates ssh.
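Before a backup script can run unattended, the MythTV server needs passwordless ssh access to the backup host. A minimal one-time setup might look like the following sketch, where stargate is the example backup server used in the script below (run this as the user that will perform the backup):

```shell
# One-time setup: generate a key pair (use an empty passphrase for fully
# unattended use, or manage a passphrase-protected key with keychain/ssh-agent)
# and install the public key on the backup server.
ssh-keygen -t rsa
ssh-copy-id root@stargate

# Verify that the connection now works without a password prompt.
ssh root@stargate true
```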

#!/bin/bash
# In this example, the source machine is "startrek", and the remote backup
# server is "stargate". Adjust hostnames and paths to match your setup.
LOCAL_HOSTNAME=startrek
BACKUP_SERVER=stargate

# The backup directory where the database dumps are written
# (the DBBackupDirectory from ~/.mythtv/backuprc).
BACKUP_DIR=/home/mythtv

# Requires ssh tools to make an unattended connection to the remote host.
# (If you use Debian/Ubuntu, apt-get install keychain).
source /root/.keychain/${LOCAL_HOSTNAME}-sh

# Store knowledge of all currently installed packages. Restore with dpkg
# --set-selections (This is good for Debian/Ubuntu users, along with any other
# distro based on apt/dpkg pkg mgt).
dpkg --get-selections > /${LOCAL_HOSTNAME}_pkgs.txt

# Back up the mythconverg database.
# (Assumes mythconverg_backup.pl is on the PATH; otherwise give its full path.)
mythconverg_backup.pl

# Push all the files to the backup disk using rsync. The -a option preserves
# file permissions, and --delete removes old files on the receiving end not
# found on the sending end.
# This example pushes everything from the listed folders out to destination
# ${BACKUP_SERVER}, stored in a folder designated for this local machine,
# ${LOCAL_HOSTNAME}. It backs up a wiki site, all the kernels built as "deb's",
# the /home partition, the /etc files, and all the /usr/local/src packages
# (like MythTV SVN!)
# (The destination path /backup/${LOCAL_HOSTNAME} is an example; create it on
# the backup server first.)
rsync -avq --delete --stats --progress \
         /${LOCAL_HOSTNAME}_pkgs.txt ${BACKUP_DIR} /var/www/mediawiki/images \
         /var/www/mediawiki/LocalSettings.php /usr/src/*.deb /home \
         /usr/local/src /etc \
         ${BACKUP_SERVER}:/backup/${LOCAL_HOSTNAME}/
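When rebuilding the machine, the saved package list can be replayed on a fresh Debian/Ubuntu install. A sketch, assuming the file has been copied back to / on the new machine (startrek is the example hostname from the script above):

```shell
# Feed the saved selections back to dpkg, then let apt install everything
# that is marked for installation.
dpkg --set-selections < /startrek_pkgs.txt
apt-get dselect-upgrade
```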

Optimize the Database

Myth interacts with its database all the time. Without optimization, some of the queries will run slower and slower. For example, the scheduler query evaluates all of the recording rules against every upcoming listing and against all previously recorded shows. This is a complicated query that takes a noticeable amount of time to run on an optimized database. It can easily take 5 times longer on a fragmented database.
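Under the hood, this maintenance boils down to MySQL's REPAIR TABLE and OPTIMIZE TABLE statements run against each table. As a hedged illustration, here is a manual one-off run against a single table; the recordedseek table grows quickly and is a common culprit, and the mythtv/mythtv username and password are common defaults you should adjust to your setup:

```shell
# Repair and optimize one table by hand (credentials are assumptions).
mysql -umythtv -pmythtv mythconverg \
      -e "REPAIR TABLE recordedseek; OPTIMIZE TABLE recordedseek;"
```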

In the contrib directory there is a Perl script, optimize_mythdb.pl, which runs the MySQL REPAIR and OPTIMIZE operations on each table in your MythTV database. It is recommended to run it regularly from a cron job. It uses the Perl bindings to connect to the database and therefore should run with no further configuration.

Note that each table in the database is locked while the repair and optimize is performed. Therefore, the script should only be run when Myth is otherwise idle. Monthly execution may be sufficient. Daily may be overkill but ought to be harmless if the system would not otherwise be busy. Be sure that you have an appropriate backup strategy in place, as well.

First make sure it is executable:

   chmod 755 /usr/share/doc/mythtv-docs-0.21/contrib/optimize_mythdb.pl

Create a shell script to run the job (paths are examples; adjust them to your installation):

#!/bin/bash
OPT_MYTHDB=/usr/share/doc/mythtv-docs-0.21/contrib/optimize_mythdb.pl
LOG=/var/log/optimize_mythdb.log

echo "Started ${OPT_MYTHDB} on `date`" >> ${LOG}
${OPT_MYTHDB} >> ${LOG} 2>&1
echo "Finished ${OPT_MYTHDB} on `date`" >> ${LOG}

Run it from your daily, weekly, or monthly cron jobs.
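For example, a crontab entry such as the following would run the optimization once a month, early in the morning when Myth is normally idle (the wrapper script path is an assumption):

```shell
# /etc/crontab format: minute hour day-of-month month day-of-week user command
30 4 1 * * root /usr/local/bin/optimize_mythdb.sh
```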

The media

TODO: media backup

You can back up recordings with a command-line program called nuvexport. When run, it gives you the option to export a chosen recording to a location of your choice, producing a new .nuv file along with a .sql file containing information about the recording. This can also be used to import the recording into another MythTV system.

A utility written by Chris Petersen (of MythWeb fame), mythlink.pl, creates a view of human-readable symlinks to your recordings. This allows you to archive recordings to other media by copying from the symlink view (and dereferencing the link), so you can identify your archived videos by filename.

For more info, see the output of mythlink.pl --help

If you can't locate the script, try looking in /usr/share/doc/mythtv-0.23/contrib (insert actual mythtv version as appropriate).
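The dereferencing copy mentioned above can be demonstrated with plain files: cp -L follows the symlink and copies the real file, so the archive receives a regular copy under the human-readable name. All paths below are throwaway examples:

```shell
# Set up a fake recording and a human-readable symlink to it.
mkdir -p /tmp/demo/recordings /tmp/demo/links /tmp/demo/archive
echo "video data" > /tmp/demo/recordings/1021_20100823203000.mpg
ln -s /tmp/demo/recordings/1021_20100823203000.mpg \
      "/tmp/demo/links/Some Show.mpg"

# -L dereferences the link: the archive gets a regular file, not a symlink.
cp -L "/tmp/demo/links/Some Show.mpg" /tmp/demo/archive/
```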

OS and software maintenance

If it ain't broke, don't fix it! This is difficult to follow if your MythTV machine serves other purposes besides Myth. If non-techies are using Myth for their daily TV watching, they will appreciate you keeping your hands off! If you must mess with Myth, please back up first. Fixing one thing may break another.

XFS Filesystem Defragmentation

If you use XFS to store your media, it can reach a fragmentation level where your system becomes slower to respond. As the root user, check your fragmentation level. Note that xfs_db operates on the block device backing the filesystem, not the mount point (/dev/sdb1 here is an example; substitute your own device):

 xfs_db -c frag -r /dev/sdb1

Then defragment the filesystem. xfs_fsr accepts either a mount point or a device:

 xfs_fsr -v /xfs_disk1
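xfs_fsr can be time-boxed with its -t option (maximum run time in seconds), which makes it reasonable to run from cron overnight. A sketch, assuming /xfs_disk1 is the media mount point:

```shell
# /etc/crontab entry: defragment for at most two hours, starting at 03:00.
0 3 * * * root /usr/sbin/xfs_fsr -t 7200 /xfs_disk1
```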