[mythtv-users] Autoexpire not working

Sean Whitney sean.whitney at gmail.com
Mon Feb 17 14:32:52 UTC 2014



On 02/16/2014 10:13 PM, Hika van den Hoven wrote:
> Hoi Sean,
> 
> Monday, February 17, 2014, 7:01:12 AM, you wrote:
> 
>> Recently Autoexpire stopped working for me and I can't figure out why.
>> Yesterday I let the filesystem fill completely up.  Here are the
>> relevant logs.
> 
>> Feb 15 15:40:29 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 15:47:06 hubble mythbackend: mythbackend[27353]: E TFWWrite
>> threadedfilewriter.cpp:508 (DiskLoop)
>> TFW(/mythtv/recordings/3081_20140215225900.mpg:56): File I/O  errcnt:
>> 1#012#011#011#011eno: No space left on device (28)
>> Feb 15 15:47:06 hubble mythbackend: mythbackend[27353]: E TFWWrite
>> threadedfilewriter.cpp:569 (DiskLoop)
>> TFW(/mythtv/recordings/3081_20140215225900.mpg:56): No space left on the
>> device for file
>> '/mythtv/recordings/3081_20140215225900.mpg'#012#011#011#011file will be
>> truncated, no further writing will be done.
>> Feb 15 15:47:06 hubble mythbackend: mythbackend[27353]: E TFWWrite
>> threadedfilewriter.cpp:508 (DiskLoop)
>> TFW(/mythtv/recordings/3321_20140215225900.mpg:73): File I/O  errcnt:
>> 1#012#011#011#011eno: No space left on device (28)
>> Feb 15 15:47:06 hubble mythbackend: mythbackend[27353]: E TFWWrite
>> threadedfilewriter.cpp:569 (DiskLoop)
>> TFW(/mythtv/recordings/3321_20140215225900.mpg:73): No space left on the
>> device for file
>> '/mythtv/recordings/3321_20140215225900.mpg'#012#011#011#011file will be
>> truncated, no further writing will be done.
>> Feb 15 15:47:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 15:54:10 hubble mythbackend: mythbackend[27353]: I ProcessRequest
>> mainserver.cpp:1420 (HandleAnnounce) MainServer::ANN Monitor
>> Feb 15 15:54:10 hubble mythbackend: mythbackend[27353]: I ProcessRequest
>> mainserver.cpp:1422 (HandleAnnounce) adding: planck as a client (events: 0)
>> Feb 15 15:54:10 hubble mythbackend: mythbackend[27353]: I ProcessRequest
>> mainserver.cpp:1420 (HandleAnnounce) MainServer::ANN Monitor
>> Feb 15 15:54:10 hubble mythbackend: mythbackend[27353]: I ProcessRequest
>> mainserver.cpp:1422 (HandleAnnounce) adding: planck as a client (events: 1)
>> Feb 15 15:54:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 16:02:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 16:10:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 16:18:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 16:26:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
>> Feb 15 16:34:50 hubble mythbackend: mythbackend[27353]: N Expire
>> autoexpire.cpp:264 (CalcParams) AutoExpire: CalcParams(): Max required
>> Free Space: 67.0 GB w/freq: 7 min
> 
>> These are my autoexpire settings:
> 
>> mysql> select * from settings where value like '%autoexpire%'
>> and hostname is null;
>> +---------------------------+------+----------+
>> | value                     | data | hostname |
>> +---------------------------+------+----------+
>> | AutoExpireDiskThreshold   | 5    | NULL     |
>> | AutoExpireFrequency       | 10   | NULL     |
>> | AutoExpireMethod          | 3    | NULL     |
>> | AutoExpireDefault         | 1    | NULL     |
>> | RerecordAutoExpired       | 1    | NULL     |
>> | AutoExpireLiveTVMaxAge    | 1    | NULL     |
>> | AutoExpireExtraSpace      | 65   | NULL     |
>> | AutoExpireDayPriority     | 3    | NULL     |
>> | AutoExpireWatchedPriority | 1    | NULL     |
>> | AutoExpireInsteadOfDelete | 0    | NULL     |
>> +---------------------------+------+----------+
> 
>> This is also posted on the forums:
>> https://forum.mythtv.org/viewtopic.php?f=36&t=49
> 
> 
> 
>> Thanks,
> 
>> Sean
> 
> 
> What is the status of the drive? Is it actually full, or is it out of
> inodes? What is the reserved space for root, etc.?
> 
> Till the next mail,
>   Hika                            mailto:hikavdh at gmail.com
> 
> "Without hope you cannot live
> Without life there is no hope
> The eternal dilemma
> Especially when you have to destroy hope in order to survive!"
> 
> The learning Human
> --

Plenty of free inodes. Right now the filesystem has 50 GB of free space,
and inode usage is about 0.02%.
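
For reference, both figures are easy to re-check from the shell (mount
point taken from the dumpe2fs output below; adjust if yours differs):

df -h /mythtv    # free space in human-readable units
df -i /mythtv    # inode usage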

root at hubble:/var/lib/mythtv/rokubuild# dumpe2fs /dev/mapper/mythtvvg-mythtv
dumpe2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /mythtv
Filesystem UUID:          ec86bbe0-e401-400c-ab01-b7a309bd1e84
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
filetype needs_recovery extent flex_bg sparse_super large_file huge_file
uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    journal_data
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              61054976
Block count:              244189184
Reserved block count:     2441891
Free blocks:              7027844
Free inodes:              61041706
First block:              0
Block size:               4096
Fragment size:            4096
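
To the reserved-space question: from the figures above, 2441891 reserved
blocks x 4096 bytes/block is roughly 10.0 GB held back for root. A quick
way to pull just that field (device name as used above):

tune2fs -l /dev/mapper/mythtvvg-mythtv | grep -i 'reserved block count'
# Reserved block count:     2441891
# 2441891 * 4096 bytes = ~10.0 GB reserved for root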

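One more thing worth ruling out: the expirer only deletes recordings
whose autoexpire flag is set. A hedged check, assuming the stock
mythconverg schema and a 'mythtv' database user (adjust credentials to
your setup):

mysql -u mythtv -p mythconverg \
  -e 'SELECT COUNT(*) FROM recorded WHERE autoexpire > 0;'
# If this returns 0, nothing is eligible to expire, no matter how full
# the disk gets.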


