[mythtv] [mythtv-commits] Ticket #1835: Gradually delete big files to avoid I/O starvation on some filesystems
danielk at cuymedia.net
Wed May 24 22:46:58 UTC 2006
On Wed, 2006-05-24 at 16:33 -0400, Chris Pinkham wrote:
> I'm not sure about this. The gradual delete has to happen inside
> MainServer::DoDeleteThread() which is where he put it. The deadline for
> deleting is on a per-recording basis. A recording needs to be deleted within
> 5 minutes of when we were told to delete it, otherwise it will pop back up on
> the Watch Recordings screen.
Yep, let's just say every file needs to be fully truncated in 4 minutes and
DoHandleDeleteRecording() has to return immediately. This ensures
that all the files deleted by AutoExpire::SendDeleteMessages()
are actually deleted by the next time it runs.
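To put rough numbers on that deadline, here is a quick sketch of how big each truncation step would have to be (the function name and the 500 ms step interval are illustrative assumptions, not anything from the patch):

```cpp
#include <cstdint>

// How many bytes must be truncated per step to fully delete a file
// of fileSize bytes within deadlineSec seconds, sleeping stepMs
// milliseconds between steps. Rounds up so we finish on time.
int64_t ChunkPerStep(int64_t fileSize, int64_t deadlineSec, int64_t stepMs)
{
    int64_t iterations = deadlineSec * 1000 / stepMs;
    return (fileSize + iterations - 1) / iterations;
}
```

For an 8 GiB recording with a 240-second deadline and a 500 ms step, that works out to roughly 18 MB shaved off per iteration.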
The files you delete manually are a bonus: if a file is deleted before
the next autoexpire run, maybe you get to keep a file that would have
been expired next. If not, then hey, the manual delete will still allow
you to keep some other file at the next autoexpire run.
Combine this with:
> 1. We don't want to delete so slowly that the recording is put back on
> the Watch Recordings screen. This can be solved by opening the file
> first, then unlinking and then the gradual delete loop (using
> ftruncate instead of truncate). In the loop we should update
> recorded.lastmodified (maybe not on every iteration but you get
> the idea). As a bonus, if the backend crashes the file will be
> deleted completely automatically by the OS. If we do this, this
> constraint basically goes away.
And we don't need to worry about files popping back onto the
Watch Recordings screen.
The only downside compared to a more complicated method that takes
free space into account is that it could delete a file more quickly
than necessary; say an 8 GB file gets deleted but you only
need 1 GB freed in the next 5 minutes. But this may be sufficient
and is much simpler, so...
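The open-then-unlink-then-ftruncate scheme in the quoted item could look roughly like this. This is a sketch, not the actual patch: GradualDelete, kChunk, and kSleepUs are made-up names and tuning values, and the recorded.lastmodified update is left as a comment.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <cstdint>

static const int64_t kChunk   = 16LL * 1024 * 1024; // shrink 16 MB per step
static const int64_t kSleepUs = 500 * 1000;         // sleep 500 ms per step

bool GradualDelete(const char *path)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return false;

    // Unlink first: the name disappears immediately, and if the
    // backend crashes the kernel frees the remaining blocks for us
    // as soon as the descriptor is gone.
    if (unlink(path) < 0)
    {
        close(fd);
        return false;
    }

    struct stat st;
    if (fstat(fd, &st) == 0)
    {
        // ftruncate (not truncate) works on the open descriptor,
        // so it still works after the name is gone.
        for (int64_t size = st.st_size; size > 0; size -= kChunk)
        {
            ftruncate(fd, size > kChunk ? size - kChunk : 0);
            // This is where recorded.lastmodified would be bumped
            // periodically so the recording doesn't pop back up.
            usleep(kSleepUs);
        }
    }

    close(fd); // last reference released; the inode is freed
    return true;
}
```

Pairing the chunk size with the per-file deadline (rather than with free-space pressure) is exactly the simplification being argued for above.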
More information about the mythtv-dev mailing list