User Manual:Periodic Maintenance

From MythTV Official Wiki


This page is up-to-date to MythTV version 0.20; the current release is 34.0.


Maintaining your MythTV

Your MythTV system can continue to run for weeks, months or even years. Of course, hardware fails, techies fiddle, and power outages, tornadoes, and alien takeovers happen. But if you are interested in keeping your MythTV alive for as long as possible, here are a few suggestions.

Backup

Your system is only as good as your backup, so back up NOW! You may be confident that you can rebuild quickly, but two parts of MythTV aren't easily replaceable: the database, and your media content (TV recordings, videos, photos, music, etc.).

The database

You'll miss the database only when it's gone. The database keeps a history of every show you've recorded, which is very nice when you don't want to re-record the same show. It keeps all the settings of your backends and frontends, the metadata of all your videos (titles, directors, parental levels, etc.), all the commercial flagging data, bookmarks, and more.

You can create a backup of your database with the backup script. Before doing so, you must configure the backup script by specifying the directory to use for backups:

$ echo "DBBackupDirectory=/home/mythtv" > ~/.mythtv/backuprc
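Note that the `>` redirection replaces any existing backuprc. A quick way to confirm the setting took, illustrated here with a throwaway temporary file standing in for ~/.mythtv/backuprc:

```shell
# Illustration only: a temp file stands in for ~/.mythtv/backuprc.
rc=$(mktemp)
echo "DBBackupDirectory=/home/mythtv" > "$rc"   # '>' overwrites any existing contents
grep DBBackupDirectory "$rc"
rm -f "$rc"
```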

Then, from that point on, you may create backups by simply running:

$ mythconverg_backup.pl

By default, the backup script will compress and rotate backups, keeping the last 5 backups. More information and other usage scenarios are shown on the Database Backup and Restore page.

If you ever need to restore a backup, see the Database Backup and Restore page.

Backing up to a remote server

Using rsync along with a trusted ssh connection is a way to move backup files from the current machine (your MythTV server) to an alternate machine you may have set up for backup. rsync supports many protocols, including rsh and ssh. This example demonstrates ssh.

#!/bin/bash
# In this example, the source machine is "startrek", and the remote backup
# server is "stargate".
LOCAL_HOSTNAME=startrek
BACKUP_SERVER=stargate
# The backup directory
BACKUP_DIR=/home/mythtv/backups

# Requires ssh tools to make connection to remote host.
# (If you use Debian/Ubuntu, apt-get install keychain).
source /root/.keychain/${LOCAL_HOSTNAME}-sh

# Store knowledge of all currently installed packages. Restore with dpkg
# --set-selections (This is good for Debian/Ubuntu users, along with any other
# distro based on apt/dpkg pkg mgt).
dpkg --get-selections > /${LOCAL_HOSTNAME}_pkgs.txt

# Back up the mythconverg database.
mythconverg_backup.pl

# Push all the files to the backup disk using rsync. This preserves file
# permissions, and also deletes old files on the receiving end not found on the
# sending end.
#
# This example pushes everything from the listed folders out to destination
# ${BACKUP_SERVER}, stored in a folder designated for this local machine,
# ${LOCAL_HOSTNAME}. It backs up a wiki site, all the kernels built as "deb's",
# the /home partition, the /etc files, and all the /usr/local/src packages
# (like MythTV source code!)
rsync -avq --delete --stats --progress \
         /${LOCAL_HOSTNAME}_pkgs.txt ${BACKUP_DIR} /var/www/mediawiki/images \
         /var/www/mediawiki/LocalSettings.php /usr/src/*.deb /home \
         /usr/local/src /etc \
         ${BACKUP_SERVER}:/mnt/backup/${LOCAL_HOSTNAME}

Optimize the Database

Myth interacts with its database all the time. Without optimization, some of the queries will run slower and slower. For example, the scheduler query evaluates all of the recording rules against every upcoming listing and against all previously recorded shows. This is a complicated query that takes a noticeable amount of time to run on an optimized database. It can easily take 5 times longer on a fragmented database.

In the contrib directory there is a Perl script, optimize_mythdb.pl, which runs the MySQL REPAIR TABLE and OPTIMIZE TABLE operations on each table in your MythTV database. It is recommended to run it regularly from a cron job. It uses the Perl bindings to connect to the database and should therefore run with no further configuration. If it fails with "Can't locate MythTV.pm in @INC", you'll need to install libmythtv-perl.
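For reference, a hand-run equivalent of what the script does per table (shown for the recorded table only, and assuming the conventional mythtv credentials against a running server) would look something like:

```shell
# Hand-run equivalent of optimize_mythdb.pl's per-table maintenance, here
# for the 'recorded' table only. Requires a running MySQL server hosting
# the mythconverg database; you will be prompted for the password.
mysql --database=mythconverg --user=mythtv -p <<'SQL'
REPAIR TABLE recorded;
OPTIMIZE TABLE recorded;
SQL
```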

Note that each table in the database is locked while the repair and optimize are performed. Therefore, the script should only be run when Myth is otherwise idle. Monthly execution may be sufficient; daily may be overkill, but ought to be harmless if the system would not otherwise be busy. Be sure that you have an appropriate backup strategy in place as well. optimize_mythdb.pl can be found in

   /usr/share/doc/mythtv-backend/contrib/maintenance


First make sure it is executable (the path varies with your distribution and MythTV version):

   chmod 755 /usr/share/doc/mythtv-docs-0.21/contrib/optimize_mythdb.pl

Create a shell script to run the job:

###### optimize_mythdb.sh
#!/bin/sh

OPT_MYTHDB='/usr/share/doc/mythtv-docs-0.21/contrib/optimize_mythdb.pl'
LOG='/var/log/mythtv/optimize_mythdb.log'

echo "Started ${OPT_MYTHDB} on $(date)" >> ${LOG}
${OPT_MYTHDB} >> ${LOG} 2>&1
echo "Finished ${OPT_MYTHDB} on $(date)" >> ${LOG}

Run it with your daily, weekly, or monthly cron jobs.
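For example, a root crontab entry (the schedule and wrapper path here are hypothetical) running it weekly in the small hours:

```shell
# Hypothetical crontab entry (crontab -e as root): run the wrapper script
# every Monday at 04:15, when the box is usually idle.
15 4 * * 1 /usr/local/bin/optimize_mythdb.sh
```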

Removing recordings which are pending deletion

If you have a system crash whilst MythTV is deleting a file, you may well be left with the recording still in the database but invisible, and with the recording file cluttering up your disk. This will also occur if you delete a recording and then shut down the system, as will happen frequently in a Mythwelcome scenario. Several hundred such files occupying 660GB have been found on one system. They are not shown in the frontend; in version 31 they do not appear in the API's getrecordedlist; they are not resolved by optimising the database; and they are not detected by find_orphans.py. Their status is 'pending deletion'. You can run this script to check whether you have this problem:

showpendings.sh

#!/bin/bash
#  mysql commands to show deletepending recordings
PASS=`grep Password /home/mythtv/.mythtv/config.xml | sed 's/<\/*Password>//g' | sed 's/ *//g'`
SQLLINE='select deletepending, recgroup, count(*) from recorded group by deletepending, recgroup;'
echo $SQLLINE | mysql --database=mythconverg --user=mythtv --password=${PASS} 2>&1 | grep -v Warning

You can expect a response like this:

deletepending  recgroup  count(*)
0              Default   643
1              Deleted   432
1              LiveTV      4

That 432 figure is the number of these spurious recordings; it also shows 4 LiveTV deletependings. The 'Deleted' and 'LiveTV' lines will be absent if there are none, though you will get a line like this for recordings undergoing a countdown before deletion, which is entirely normal:

0 Deleted 1
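Incidentally, the grep/sed pipeline these scripts use to pull the database password out of config.xml can be sanity-checked in isolation by feeding it a sample line (the password here is made up):

```shell
# Run a sample config.xml line through the same sed pipeline the scripts
# use; it should print the bare password with tags and spaces stripped.
echo '  <Password>s3cret</Password>' | sed 's/<\/*Password>//g' | sed 's/ *//g'
```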

You can remove the spurious recordings (under version 0.27 or 31) with this script, run at a time when the backend is idle:

#!/bin/bash
#
#  mysql commands to eliminate deletepending recordings
read -p "OK to close down the backend? (yes/no) " response
test "$response" != yes && exit 1
#get database password:
PASS=`grep Password /home/mythtv/.mythtv/config.xml | sed 's/ *<\/*Password>//g'`
echo stopping backend
systemctl stop mythtv-backend.service
echo clearing deletepending flags
SQLLINE="UPDATE recorded SET deletepending = 0 where recgroup = 'Deleted' or recgroup = 'LiveTV';"
echo $SQLLINE | mysql --database=mythconverg --user=mythtv --password=${PASS} 2>&1 | grep -v 'Using a password'
echo Restarting backend
systemctl start mythtv-backend.service

Then leave the backend running to delete the files and their entries in the database; allow about 10 seconds per recording. Use showpendings.sh whilst this is going on to check progress. Alternatively, optimize_mythdb.pl will remove the 'pending deletion' status if you append the following lines to it; this will then allow the backend to delete them.

# Remove deletependings
  if ($dbh->do("UPDATE recorded SET deletepending = 0 where recgroup = 'Deleted' or recgroup = 'LiveTV';")) {
      print "Fixed deletependings\n";
  }
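For a rough sense of scale: at about 10 seconds per recording, the 432 pending deletions from the earlier example output would take over an hour to clear:

```shell
# Back-of-envelope estimate: 432 recordings at ~10 s each, in minutes.
echo $((432 * 10 / 60))   # prints 72
```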

Another option is to run the code to delete these recordings when starting the backend. In Ubuntu you need to set up an override file, as other options tend to get overwritten on updates. See https://www.mythtv.org/wiki/Silicondust_HDHomeRun_setup#Failure_after_system_reboot for example code which does this, but remove the HDHomeRun material if you are not using that tuner.


The media


You can back up recordings with a command-line program called nuvexport. When run, you will be given the option to export a chosen recording to a location of your choice, producing a new .nuv file along with a .sql file containing information on the recording. This can also be used to import the recording into another MythTV system.

mythlink.pl

A utility written by Chris Peterson (of MythWeb fame), mythlink.pl creates a view of human-readable symlinks to your recordings. This allows you to archive recordings to other media by copying from the symlink view (dereferencing the links), so you can identify your archived videos by filename.

For more info, see the output of mythlink.pl --help

If you can't locate the mythlink.pl script, try looking in /usr/share/doc/mythtv-0.23/contrib (insert the actual MythTV version as appropriate). If it fails with "Can't locate MythTV.pm in @INC", install libmythtv-perl.
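Archiving from the symlink view then amounts to a dereferencing copy. A self-contained sketch with made-up filenames, using cp -L so the archive gets the real file rather than the link:

```shell
# Simulate a symlink view in a temporary directory: a recording file, a
# human-readable symlink pointing at it, and an archive copy made with
# cp -L (dereference), which stores the target file rather than the link.
work=$(mktemp -d)
echo "video data" > "$work/1234_20240101000000.mpg"
ln -s "$work/1234_20240101000000.mpg" "$work/News_at_Ten.mpg"
mkdir "$work/archive"
cp -L "$work/News_at_Ten.mpg" "$work/archive/"
# The archived copy is a regular file, not a symlink:
[ -f "$work/archive/News_at_Ten.mpg" ] && [ ! -L "$work/archive/News_at_Ten.mpg" ] \
    && echo "archived regular file"
rm -rf "$work"
```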

Mythfs.pl

A utility written by Lincoln Stein, Mythfs.pl mounts a virtual filesystem corresponding to your recordings. It uses the 0.25 API, so the backend storage directories do not need to be NFS-mounted on the client, nor does the MySQL database need to be accessible to the user mounting the filesystem.

Once the filesystem is mounted, you may use the human-readable filenames to play or archive your recordings.

The script can be found on github. See Mythfs.pl for instructions.

OS and software maintenance

If it ain't broke, don't fix it! This is difficult to follow if your MythTV machine serves other purposes besides Myth. If non-techies are using Myth for their daily TV watching, they will appreciate you keeping your hands off! If you must mess with Myth, please back up first: fixing one thing may break another.

XFS Filesystem Defragmentation

If you use XFS to store your media, it can reach a fragmentation level at which your system will be slower to respond. As the root user, check your fragmentation level (note that xfs_db operates on the block device backing the filesystem):

 xfs_db -c frag -r /xfs_disk1

Defragment your filesystem.

  xfs_fsr -v /xfs_disk1