This HOWTO aims to collect the multitude of tips regarding optimizing performance of your system for use with MythTV.
- 1 Filesystems
- 2 Devices
- 3 Mythfrontend
- 4 Kernel Configuration
- 5 MySQL Tweaks
- 6 OS, other
Disable NFS file attribute caching
If you are using SMB (not CIFS), you can try the ttl option, e.g. "-o ttl=100", to set your attribute-cache timeout lower than the default. The default is supposed to be 1000ms (1 second), but one user has reported that setting ttl=100 corrected the issue for him, so SMB users can give it a try.
Ensure that your NFS server is running in 'async' mode (configured in /etc/exports). Many NFS servers default to 'async', but recent versions of Debian default to 'sync', which can result in very low throughput and the dreaded "TFW, Error: Write() -- IOBOUND" errors. Example of setting async in /etc/exports:
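A minimal 'async' export line might look like the following (the path and client network here are examples; adjust them to your setup):

```
# /etc/exports -- export the recordings directory with async writes
/mythtv/recordings 192.168.1.0/24(rw,async,no_subtree_check)
```

After editing /etc/exports, run exportfs -ra to apply the change.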
There are a few other NFS mount options that can help, such as "intr", "rsize", "wsize", "nfsvers=3", "actimeo=0", "noatime" and "tcp". You can read the man pages for a more detailed description, but suggestions are below. (Please note that the "soft" option, mentioned here before, is prone to causing file corruption.)
rsize,wsize - 8192 to 32768 (8k to 32k) suggested; the best value can depend on your network. Try one, test it, try another, test it. 8192 is a reasonable default.
nfsvers=3 - This tells the client to use NFSv3, which performs better than NFSv2. Of course, the server must also support it.
actimeo=0 - Disables attribute caching so the frontend sees updates from the backend more quickly. Without it, LiveTV has been seen to fail to transition from one program to another: the cached file attributes prevent the frontend from promptly opening the new file. Note that this also puts more load on the server, if that is an issue.
tcp - This tells NFS to use TCP instead of UDP. This seems to be *very* important on high-speed networks (i.e. 1000Mbit) and mixed networks, and probably isn't a bad idea in any case.
intr - Makes I/O to an NFS-mounted filesystem interruptible if the server is down. Without it, the I/O becomes an uninterruptible sleep, which makes the process impossible to kill until the server comes up again.
soft - If the NFS server becomes unavailable, the NFS client will generate "soft" errors instead of hanging. Some software handles this well; other software much less so, and in the latter case file corruption will result. For a frontend node that only reads, it may still be a reasonable setting.
Example /etc/fstab entry:
server:/mythtv/recordings /mythtv/recordings nfs intr,rsize=8192,wsize=8192,async,nfsvers=3,bg,actimeo=0,tcp 0 0
Combating Fragmentation
Fragmentation happens when a file's data is not placed contiguously on disk, causing time-consuming head seeks when the file is read or written.
MythTV recordings on disk can become quite fragmented, due to several factors, such as the fact that MythTV writes large files over a very long period of time, the fact that recording files may have drastically different sizes, and the fact that many MythTV systems have multiple capture cards--allowing for recording multiple shows at once. Note, also, that any time MythTV is recording multiple shows to a single filesystem (even if in different directories and/or in different Storage Groups), the recordings will necessarily be fragmented.
Configuring multiple local filesystems within MythTV's Storage Groups allows MythTV to write recordings to separate filesystems, thereby minimizing fragmentation. Therefore, the best approach to combat fragmentation is to ensure each computer running mythbackend has at least as many local (and available) filesystems as capture cards. If using a combination of local and network-mounted filesystems, you may need to adjust the Storage Groups weighting to cause MythTV to write to network-mounted filesystems (though doing so may negatively impact performance, so using a sufficient number of local filesystems, or only network-mounted filesystems, is preferred). The availability of a filesystem depends in part on its having space available for writing: having two filesystems for two capture cards with one completely full and the other only half full will not help prevent fragmentation, though if both are full, autoexpiration should allow either to be used.
Fragmentation can be measured with the "filefrag" command on almost any filesystem; on XFS, "xfs_bmap" can also be used.
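For example (the recording path and filename below are hypothetical; substitute your own):

```
# Count extents for each recording; more extents = more fragmentation
filefrag /video/recordings/*.mpg
# XFS-specific: print the full extent map of a single file
xfs_bmap -v /video/recordings/1234_20090101000000.mpg
```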
The xfs filesystem has a mount option which can help combat this fragmentation: allocsize
allocsize=size Sets the buffered I/O end-of-file preallocation size when doing delayed allocation writeout (default size is 64KiB). Valid values for this option are page size (typically 4KiB) through to 1GiB, inclusive, in power-of-2 increments.
This can be added to /etc/fstab, for example:
/dev/hdd1 /video xfs defaults,allocsize=512m 0 0
This essentially causes xfs to speculatively preallocate 512m of space at a time for a file when writing, and can greatly reduce fragmentation. For example, on my box HD streams typically take about 3GB for an hour of video, and I used to get thousands of extents. This is largely due to the fsync loop in the file writer, which undoes any benefit that xfs's delayed allocation would otherwise provide. With the allocsize mount option as above, I now get at most 30 or so extents, because the periodic fsync flushes to pre-allocated blocks.
For files which are already heavily fragmented, the xfs_fsr command (from the xfsdump package) can be used to defragment individual files, or an entire filesystem.
Run the following command to determine how fragmented your filesystem is:
xfs_db -c frag -r /dev/hdd1
xfs_fsr with no parameters will run for two hours; the -t parameter specifies how long it runs, in seconds. It keeps track of where it left off and can be run repeatedly, so it can be added to your crontab to periodically defragment your disks. Add the following to /etc/crontab:
30 1 * * * root /usr/sbin/xfs_fsr -t 21600 >/dev/null 2>&1
to run it every night at 1:30 for 6 hours.
Don't forget to see the complete XFS_Filesystem wiki page that includes general info about XFS, defragmenting, disk checking and maintenance, etc.
Other Performance Tweaks
According to Filesystem Performance Tweaking with XFS, there are some other useful tweaks to improve the performance of your XFS filesystem.
Disabling File Access Time Logging
XFS, like some other filesystems, logs the access time of each file. Generally this file metadata isn't necessary; however, if for some reason you experience problems without it, then don't apply this tweak.
To disable the logging of file access times in XFS, add the "noatime" and "nodiratime" options to your /etc/fstab:
# 1.5 TB RAID 5 array. Large file optimization: 512m of prealloc
# NO logging of access times: improves performance
# NO block devices or suid progs allowed: improves security
/dev/md0 /terabyte xfs defaults,noatime,nodiratime,nosuid,nodev,allocsize=512m 0 0
This tweak should also work with ext3, and most likely other filesystems. If you get something like the following, the mount option is not supported for your filesystem:
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

# dmesg | tail would return something like:
YOUR_FILESYSTEM_TYPE: unknown mount option [noatime].
Using relatime Instead
You may also wish to look into the "relatime" mount option to improve performance, but still have file atime updated. For more information on this (and related discussion), see: Linux: Replacing atime With relatime
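If your kernel supports it, relatime can simply replace noatime in the fstab example above (the device and mount point repeat the earlier hypothetical RAID array):

```
/dev/md0 /terabyte xfs defaults,relatime,nosuid,nodev,allocsize=512m 0 0
```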
Changing Number of Log Buffers
Another interesting tweak mentioned in Filesystem Performance Tweaking with XFS is to change the number of log buffers used by XFS. This tweak can improve both sequential and random file creation and deletion times. The default depends on your filesystem's blocksize, and each log buffer takes up 32K of RAM. SGI (the company that created XFS) advises against using 8 on a system with 128M of RAM or less. Since most systems have more than 128M of RAM nowadays, it should be safe to increase the number of logbufs for XFS.
By default, XFS adjusts this depending on your filesystem's blocksize:
logbufs=value Set the number of in-memory log buffers. Valid numbers range from 2-8 inclusive. The default value is 8 buffers for filesystems with a blocksize of 64KiB, 4 buffers for filesystems with a blocksize of 32KiB, 3 buffers for filesystems with a blocksize of 16KiB and 2 buffers for all other configurations. Increasing the number of buffers may increase performance on some workloads at the cost of the memory used for the additional log buffers and their associated control structures.
To check if you really need this tweak, check your XFS filesystem's blocksize using:
# replace /dev/md0 with your device's name
xfs_info /dev/md0
Look for the bsize=X value listed in the output. This value is reported in bytes. Note: I did not need to tweak this value, as my blocksize was 4096 on a RAID 5 array.
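If you want to script this check, the bsize value can be pulled out of the xfs_info output with a short pipeline. The sample line below stands in for real output (a real run would pipe `xfs_info /dev/md0` instead of echoing the sample):

```shell
# Extract the data-section block size (in bytes) from xfs_info-style output.
# 'sample' mimics one line of: xfs_info /dev/md0
sample='data     =                       bsize=4096   blocks=366284800, imaxpct=25'
bsize=$(printf '%s\n' "$sample" | sed -n 's/.*bsize=\([0-9]*\).*/\1/p')
echo "block size: $bsize bytes"
```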
To perform this tweak, edit your /etc/fstab and add the "logbufs=X" option, where X is 2 to 8 inclusive.
# Change X to what you want.
/dev/md0 /terabyte xfs defaults,noatime,nodiratime,nosuid,nodev,allocsize=512m,logbufs=X 0 0
Ethernet Full-duplex mode
Make sure that your ethernet adapters are running in full-duplex mode.
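Current speed and duplex settings can be checked with ethtool (eth0 here is an example interface name; run as root):

```
# Show link settings; look for the "Speed:" and "Duplex:" lines
ethtool eth0
```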
Typically both sides will be configured for autonegotiation by default and you will get the best possible connection automatically but there are conditions--typically involving old or buggy hardware--when this may not happen. The following can be used to disable autonegotiation and force a 100base-T network adapter into full duplex mode, when autonegotiation is failing.
ethtool -s eth0 speed 100 duplex full autoneg off
This problem can manifest itself as IOBOUND errors in your logs.
Note: To use full-duplex mode, your network card must be connected to a switch (not a hub) and the switch must be configured to allow full-duplex operation (almost always the default) on the ports that are being used. By definition, a network switch supports full duplex operation and a network hub (sometimes referred to as a repeater) does not. If you are connecting to a hub, full-duplex operation will not be possible. Most switches support using 100base-T (Fast Ethernet) as well as 10base-T, while most hubs will only use 10base-T, and while a few 100base-T hubs (and 10base-T switches) do exist, they are quite rare. Gigabit switches can reliably be expected to handle both fast ethernet and normal ethernet connections in addition to the gigabit ethernet speeds.
Problems will arise if only one side of a connection supports full duplex, or if one side only supports autonegotiation and cannot be manually configured. It should be noted that most cheap switches and home routers do not support manual port configuration, so they will autonegotiate to a half-duplex connection if the computer is forced to full duplex as shown above. A forced connection cannot advertise its settings, so the autonegotiating side must assume half duplex; you may therefore actually create a problem if the connection was already full duplex before being forced. Nearly all of the time, using autonegotiation on all of the equipment will give you the best possible results. If you encounter problems with autonegotiation, you can opt to manually configure settings for that device, but it is highly recommended that you manually configure every other piece of equipment on that segment as well.
Harddisk DMA Access
MythTV is very demanding of disks and needs a sizeable amount of throughput from them to operate properly, although it may not at first be obvious quite how much is actually needed. When watching LiveTV on a PVR card, for instance, with frontend and backend on one machine, the backend writes to the ringbuffer while the frontend simultaneously reads from it, so just watching TV uses twice the disk resources one might at first think. Filesystem caching can usually reduce the impact of this, but not always. If you are getting repeated messages from MythTV complaining that the ringbuffer file is not available, it's likely that DMA access has gone wrong in your configuration and made your disks very slow.
First, check to see if DMA access has been enabled for the drive you are using. Running `hdparm /dev/hdx` for each drive will tell you (the using_dma setting) whether or not DMA has been enabled. Under normal conditions, the kernel will always enable DMA support for any and all drives and controllers that support the feature (which is basically everything that shouldn't be in a museum by now). If DMA has not been enabled, usually the kernel will have said something in the syslog/dmesg as to why it refused to enable DMA support. Solve whatever problem it's referring to before continuing. Remember that you really do need an 80-conductor IDE cable for DMA transfers to work reliably. 40-conductor cables are fine for optical drives, but not for magnetic disks. If you're still using a 40-conductor cable, replace it even if it seemed to work just fine--high speed transfers are not reliable without the 80-conductor cable.
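A quick way to check (the drive name is an example; run as root):

```
# IDE drive: show current settings; "using_dma = 1 (on)" means DMA is active
hdparm /dev/hda | grep using_dma
```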
There is a "Generic PCI bus-master DMA support" option in the kernel that will enable DMA support, but this by itself results in rather slow (<6MB/s) throughput. Run `hdparm -t /dev/hdx` while the machine is relatively non-busy and you should see a number around 16MB/s, 33MB/s or sometimes even higher (SATA and SCSI drives and RAID arrays will often show even higher numbers). If you see a very low number, then you need to enable support for the specific chipset of your disk controller in the kernel. If you are booting from this controller, you must compile the chipset support for it directly into the kernel and not as a module.
If DMA access is still not available, you can try to force it on by using the command `hdparm -d1 -X /dev/hdx`. Use this only as a last resort as enabling DMA access when the system isn't capable of properly supporting it can easily result in massive data corruption.
Mythfrontend
The mythfrontend and mythtv threads can be configured to run with "realtime" priorities, if the frontend is configured this way and sufficient privileges are available to the user running mythfrontend.
The HOWTO has an excellent section on how to set your system up to enable this (look for "Enabling real-time scheduling of the display thread.") You will also need to select "Enable Realtime Priority Threads" in the General Playback frontend setup dialogue.
Realtime threads can help smooth out video and audio, because the system scheduler gives very high priority to mythtv. For more information on how this works, see the Real-Time chapter in Robert Love's great Linux Kernel Development book.
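On systems using PAM, granting the required privileges is typically done in /etc/security/limits.conf. The user name "mythtv" and the priority ceiling of 50 below are assumptions; substitute whichever user runs mythfrontend:

```
# /etc/security/limits.conf -- allow real-time priorities up to 50
mythtv  -  rtprio  50
```

The user must log out and back in for the new limit to take effect.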
Kernel Configuration
Ensure that the "Processor Family" (in "Processor Type and Features") is configured correctly. Ensure that the correct IDE controller is set (in "Device Drivers->ATA/ATAPI support->PCI IDE chipset support").
If you're compiling your own kernel, you might want to try out the following options:
Kernel preemption allows high priority threads to interrupt even kernel operations -- this ensures the lowest possible latency when responding to important events. (Note: apparently some IVTV drivers show stability problems with a preemptible kernel.)
Increasing the scheduler's timer frequency to 1000Hz can reduce latency between multiple threads of execution (at a small cost to overall performance), e.g. when recording/playing multiple video streams.
On some machines you may hear an annoying high-pitched "whistle": reduce the frequency to 250Hz or lower to avoid this.
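In a kernel .config, the options described above correspond to entries like these (a sketch; exact option names can vary between kernel versions):

```
# Preemptible kernel (low latency):
CONFIG_PREEMPT=y
# 1000Hz timer frequency:
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# To avoid the high-pitched whistle, use 250Hz instead:
# CONFIG_HZ_250=y
# CONFIG_HZ=250
```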
MySQL Tweaks
Taken from this thread in mythtv-users.
Add the following to the [mysqld] section of /etc/my.cnf to see improvements in database speed for MythTV as well as MythWeb.
key_buffer = 48M
max_allowed_packet = 8M
table_cache = 128
sort_buffer_size = 48M
net_buffer_length = 8M
thread_cache_size = 4
query_cache_type = 1
query_cache_size = 4M
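After restarting mysqld, you can confirm the new values took effect (the credentials here are examples):

```
mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';"
mysql -u root -p -e "SHOW VARIABLES LIKE 'key_buffer_size';"
```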
OS, other
Incorrect or less-than-optimal PCI latency settings can cause performance-related problems. See the PCI Latency page.
RTC maximum frequency
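One tweak commonly associated with this setting (an assumption here, since the details are not spelled out above) is raising the RTC's maximum user-space interrupt frequency, which playback code can use for finer-grained timing on older kernels:

```
# Allow user processes to request RTC interrupts up to 1024Hz (older kernels)
echo 1024 > /proc/sys/dev/rtc/max-user-freq
# To make it persistent across reboots, add to /etc/sysctl.conf:
# dev.rtc.max-user-freq = 1024
```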
XORG CPU Hogging
Under some circumstances, X can use huge amounts of CPU. In some cases this can be fixed by increasing its priority above the base value of 0 (i.e. renicing it to a negative value), e.g. renice -10 [pid for X]
A second way of lowering Xorg CPU usage with NVIDIA cards (especially when watching HD/H.264 content) is to add
Option "UseEvents" "True"
to the Device section of your xorg.conf. (Warning: although this works well for watching HD content, it is considered unstable for 3D software such as games.)