[mythtv] Re: [PATCH] Increase block size

Bruce Markey bjm at lvcm.com
Tue Dec 23 17:14:43 EST 2003


Mark Frey wrote:
> Bruce Markey wrote:
...
>>In the existing code, there is the concept of requestedbytes that
>>sends REQUEST_BLOCKs ahead of time so the pipeline is always
>>moving....
> 
> ...It then waited until it got a response from the
> backend and 64000 bytes were available, then exited from RingBuffer::RequestBlock()
> until the next time through. The magic number here was 130000, which was the
> size of the underlying socket's send/receive buffer. However, the way the
> actual request worked was as follows:
> 
> Frontend: send request for number of bytes to backend through control
> socket.
> Frontend: wait for response.
> Backend: get request for bytes, assign to one of the pool of backend threads
> for processing.
> Backend: write the number of bytes requested to the data socket.
> Backend: write the response (hardcoded false, indicating not at end of file)
> to the control socket.
> Frontend: receive response (again, always false).
> Frontend: wait for 64000 bytes to be available on the data socket.
> Frontend: done!
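
For what it's worth, the quoted sequence maps roughly onto the
frontend-side flow below. This is only an illustrative sketch; the
types and names are placeholders, not the actual RemoteFile or
RingBuffer code:

  // Placeholder types standing in for the real control and data sockets.
  struct ControlSocket {
      void send(const char *, int) {}                // send REQUEST_BLOCK on the control socket
      bool readResponse()          { return false; } // reply is hardcoded false (not EOF)
  };
  struct DataSocket {
      int  bytesAvailable()        { return 64000; }
      void waitForMore()           {}
  };

  // Frontend side of one REQUEST_BLOCK round trip, as quoted above.
  int requestBlock(ControlSocket &ctrl, DataSocket &data, int blockSize)
  {
      ctrl.send("QUERY_FILETRANSFER ... REQUEST_BLOCK", blockSize);
      bool atEOF = ctrl.readResponse();        // wait for the control-socket response
      (void)atEOF;                             // always false in this path
      while (data.bytesAvailable() < 64000)    // wait for 64000 bytes on the data socket
          data.waitForMore();
      return 64000;                            // done
  }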

It's moot given the circumstances. Your approach turns out
to work just as well, and I like the consolidation into remotefile
and the rest of the cleanup.

But just to make sure we're on the same page =). First, the
pre-requesting of data has been there for many months and the
limitations of the QSocketDevice just showed up a couple weeks
ago. This was a much slicker idea before there was a 130k limit.

The requestedbytes value represents the REQUEST_BLOCKs that were sent
ahead of time, before read asked for the data. This read-ahead
meant that the request/response was already in progress before
the next read, with the goal that bytesAvailable > sz by
the next call to safe_read(rf,...).

1st safe_read
request blocks 1,2,3
return block 1

2nd safe_read
block 2 and part of 3 available
request block 4
return block 2

3rd safe_read
block 3 and part of 4 available
request block 5
return block 3

This way safe_read would never stall, because there were
more bytesAvailable than the current read needed. So safe_read would
send a new REQUEST_BLOCK and wait for the control socket
reply, but it didn't wait for data. It pulled the 64k that was
already available and returned.
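
Roughly, the old read-ahead looked like this (a sketch only; the
names are illustrative, not the real RingBuffer code):

  struct Remote {
      void requestBlock(int)    {}              // stub: send REQUEST_BLOCK, wait for control reply only
      int  read(char *, int sz) { return sz; }  // stub: pull bytes that have already arrived
  };

  static int requestedbytes = 0;   // bytes covered by REQUEST_BLOCKs already sent

  int safe_read_sketch(Remote &rf, char *buf, int sz)
  {
      // Keep two blocks requested beyond the one this call will consume,
      // so the 1st read sends three REQUEST_BLOCKs and later reads send one each.
      while (requestedbytes < 3 * sz)
      {
          rf.requestBlock(sz);     // waits for the control-socket reply, not for the data
          requestedbytes += sz;
      }

      // The data for this read has normally already arrived in the background,
      // so we return without waiting on the data socket.
      int ret = rf.read(buf, sz);
      requestedbytes -= ret;       // e.g. returns with requested: 128000 still pending
      return ret;
  }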

There was a VB_PLAYBACK that showed this state:

  sz: 64000 return: 64000 requested: 128000 avail: 66360

64k was returned. There are two pending REQUEST_BLOCKs; all
of the first and 2360 bytes of the second have already arrived.
As you can see, it didn't wait for all the data from the most
recent REQUEST_BLOCK before returning. Data would continue
to arrive in the background.

Again, this was much slicker when the blocksize could be
256000 and 3 blocks could be pending. Data would stream over
the network and read would never wait.

I did try using a fixed REQUEST_BLOCK size of 64000 and a
readblocksize of 256000 and after rewriting safe_read from
scratch ;-) I got it to work:

2003-12-23 13:35:26 sz: 256000 return: 256000 requested: 128000 avail: 25824
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 sz: 256000 return: 256000 requested: 128000 avail: 14536
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:26 49      QUERY_FILETRANSFER 23[]:[]REQUEST_BLOCK[]:[]64000
2003-12-23 13:35:27 sz: 256000 return: 256000 requested: 128000 avail: 68928

However, I now see that this isn't a win. If there can only
be 128000 pre-requested and we have to wait for 256000, then
safe_read can't return until it gets data from newly sent
REQUEST_BLOCKs. If it has to wait for the data from a request
anyway, it might as well request it all at once.
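
Restating that as arithmetic (same numbers as above):

  // With the ~130000 socket buffer only two 64000-byte REQUEST_BLOCKs can be
  // pending, so a 256000-byte read always waits for data from requests sent
  // inside the read itself.
  #include <cstdio>

  int main()
  {
      const int requestBlock   = 64000;
      const int maxPending     = 2;                         // limited by the ~130000 buffer
      const int readBlock      = 256000;
      const int preRequested   = requestBlock * maxPending; // 128000
      const int waitedForBytes = readBlock - preRequested;  // 128000 on every read

      std::printf("pre-requested %d, still waited for %d\n",
                  preRequested, waitedForBytes);
      return 0;
  }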

>>Before the QSocketDevice patch the block size was variable
>>based on estbitrate. On low bitrate data, waiting to complete
>>large blocks can add unnecessary latency. This may be easy to
>>fix by reusing the block size calculations from a few
>>versions back.
> 
> 
> I would agree here. I think there certainly is room for improvement on
> choosing the block size based on whatever heuristic criteria (bitrate,
> connection speed, whatever). Again, as a first step I wanted to break the
> requirement that request size <= 130000. In profiling the code I found that
> the overhead of making the request to the backend was high, and so I wanted
> to amortize this cost over more bytes == larger requested block size.

av already addressed this in his mod, but I think it is important
to understand that there is a subtle problem with using too large
a block size, which is most apparent during channel changes.

Say the bitrate a user sets for live TV equates to about 1MB/sec,
which would be about 3.6GB/hour. For the read to return 256000
bytes, it needs 256000 bytes of data. This data won't exist until
the encoder has recorded for about 0.25sec, or about 8 frames.

Now let's say she sets her bitrate to 0.9GB/hour, or 1/4MB/sec.
Now the encoder needs to run for a full second in order to have
256000 bytes that it can return. If the block size were 64000 it would
again take about 0.25sec. Therefore the difference in blocksize adds
roughly 0.75sec to every channel change.
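
As a back-of-the-envelope check (illustrative only), the time to fill
one block before a read can return is roughly blocksize / bitrate:

  #include <cstdio>

  int main()
  {
      const double rates[]  = { 1000000.0, 250000.0 };  // ~1 MB/sec and ~0.25 MB/sec
      const int    blocks[] = { 64000, 256000 };

      for (double rate : rates)
          for (int block : blocks)
              std::printf("%.2f MB/sec, %6d-byte block: fills in %.2f sec\n",
                          rate / 1000000.0, block, block / rate);
      return 0;
  }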

An unstated benefit of the QSD patch was that the smaller block
size sped up channel changes for everyone to various degrees but,
as you are well aware, the smaller block size caused other
problems at higher bitrates. 

>>I guess what I expected to see was a separation of the reqsize
>>(fixed at 64000) and the readblocksize (variable from 64000 to
>>256000 or higher if needed in the future)...
> Not entirely sure what you're getting at here, could you reword for me?

See above =).
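
To make that concrete, something along these lines is what I had in
mind. Just a sketch; the names and the 0.25sec target are made up,
not actual MythTV code:

  #include <algorithm>

  const int kRequestSize = 64000;          // fixed size of each REQUEST_BLOCK on the wire

  // Pick the read block size from the estimated bitrate (bits per second),
  // aiming for roughly 0.25sec of data per read, in whole request blocks,
  // clamped to the 64000..256000 range.
  int chooseReadBlockSize(long estbitrate)
  {
      long bytesPerSec = estbitrate / 8;
      long quarterSec  = bytesPerSec / 4;
      long rounded     = (quarterSec / kRequestSize) * kRequestSize;
      return (int)std::max(64000L, std::min(256000L, rounded));
  }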

--  bjm


