[mythtv] Re: [PATCH] Increase block size

Mark Frey markfrey at fastmail.fm
Mon Dec 22 20:50:58 EST 2003


Bruce Markey wrote:
>Mark Frey wrote:
>> As per the "Solving my performance problems" thread, here is the patch
>> that allows for block sizes greater than 64000 bytes. In this case I've
>> set the block size to 256000 bytes. I've tested this on a frontend ==
>> backend machine, and thanks to Bruce this has also been tested for the
>> frontend != backend case.
>>
>> My machine is limited by throughput and this patch helps me a great
>> deal, YMMV, but I don't believe it will hurt anyone's performance.
>
>Mark, as you know, I did test that it does work for different
>configurations and we hashed out some details off line. However,
>I was focusing on the fact that it did work and didn't really
>look at how it worked until today. A few questions come to mind.
>
>In the existing code, there is the concept of requestedbytes that
>sends REQUEST_BLOCKs ahead of time so the pipeline is always
>moving. It appears that in RemoteFile::Read a request is sent,
>it waits to read that data then the network is dormant until
>the next Read. Is this correct? It may be okay but it seems
>less efficient.

That's correct, but I would guess that this is a non-issue. Here's my
reasoning: the old code essentially tried to keep the requested bytes near
128000 (in the steady state it actually wound up requesting about 64000
bytes each time). It then waited until it got a response from the backend
and 64000 bytes were available, and exited from RingBuffer::RequestBlock()
until the next time through. The magic number here was 130000, which was
the size of the underlying socket's send/receive buffer. However, the
actual request worked as follows:

Frontend: send request for number of bytes to backend through control
socket.
Frontend: wait for response.
Backend: get request for bytes, assign to one of the pool of backend threads
for processing.
Backend: write the number of bytes requested to the data socket.
Backend: write the response (hardcoded false, indicating not at end of file)
to the control socket.
Frontend: receive response (again, always false).
Frontend: wait for 64000 bytes to be available on the data socket.
Frontend: done!
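The steps above can be sketched as a single-threaded simulation (the struct
and function names here are illustrative, not MythTV's; the ~130000-byte
capacity stands in for the kernel socket buffer discussed in this thread):

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// In-memory stand-in for the data socket, with a bounded buffer
// mirroring the ~130000-byte send/receive buffer.
constexpr std::size_t kSocketBuf = 130000;

struct SimSocket {
    std::queue<char> buf;
    std::size_t Write(std::size_t n) {   // backend side: write up to capacity
        std::size_t wrote = 0;
        while (wrote < n && buf.size() < kSocketBuf) { buf.push(0); ++wrote; }
        return wrote;
    }
    std::size_t Read(std::size_t n) {    // frontend side: drain bytes
        std::size_t got = 0;
        while (got < n && !buf.empty()) { buf.pop(); ++got; }
        return got;
    }
};

// One old-style REQUEST_BLOCK round trip: the backend must finish writing
// every requested byte to the data socket *before* replying on the control
// socket, so the request has to fit in the socket buffer. A larger request
// would block the backend's write -> deadlock, modeled here as failure.
bool RequestBlock(SimSocket& data, std::size_t reqsize) {
    std::size_t sent = data.Write(reqsize);  // backend writes the data
    if (sent < reqsize)
        return false;                        // would block: deadlock
    // control-socket reply arrives here (hardcoded "not at EOF")
    std::size_t got = data.Read(reqsize);    // frontend drains the data
    return got == reqsize;
}
```

With a 64000-byte request this succeeds; with 256000 bytes the simulated
backend cannot finish its write, which is exactly the deadlock the old
code avoided by capping the request size.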

Essentially the frontend wound up waiting for all the requested bytes to be
sent anyhow. I'm guessing that by the time the response was received on the
control socket, all the bytes on the data socket had already been
transferred, so the network was essentially dormant between reads in that
case as well. Also note that we could have at most 130000 bytes "in the
pipe" in any case. I would consider keeping the pipeline full a further
optimization over what I've done with this patch. Again, I don't think my
patch reduces the efficiency here.

>Part of av's QSocketDevice patch is that he needed to stay
>below a limit of about 130k somewhere. However, I see you
>are sending and receiving 256000 blocks. How is it that this
>isn't a problem?

The problem with the old code was that the frontend thread waited for a
response from the backend on the control socket. This response wasn't sent
until the backend had sent all the requested bytes; therefore, the number of
bytes requested had to be less than the buffer size. If not, a deadlock
occurred: the frontend waited for a response while the backend waited for
the frontend to drain the socket so it could send more data. With the
patch the frontend sends the request, and then starts reading from the data
socket without waiting for a response. Once it has received all the data it
requested, or times out, or sees that a response is in the control socket,
it completes (this is not exactly what happens, there is a slight wrinkle,
but this is the basic idea). In this way we don't care what size the
underlying socket buffer is.
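A rough simulation of the patched flow, using the same kind of bounded
in-memory "socket" as before (names and structure are my own illustration,
not the actual patch): because the frontend drains the data socket while
the backend is still writing, the two sides interleave and the request
size is no longer bounded by the socket buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// Bounded in-memory stand-in for the data socket (~kernel buffer size).
constexpr std::size_t kSockBuf = 130000;

struct Pipe {
    std::queue<char> buf;
    std::size_t Write(std::size_t n) {   // backend: fill up to capacity
        std::size_t w = 0;
        while (w < n && buf.size() < kSockBuf) { buf.push(0); ++w; }
        return w;
    }
    std::size_t Read(std::size_t n) {    // frontend: drain what's there
        std::size_t r = 0;
        while (r < n && !buf.empty()) { buf.pop(); ++r; }
        return r;
    }
};

// Patched flow: the frontend starts reading immediately instead of
// blocking on the control-socket reply, so backend writes and frontend
// reads alternate until the full request arrives.
std::size_t ReadBlock(Pipe& p, std::size_t reqsize) {
    std::size_t sent = 0, got = 0;
    while (got < reqsize) {
        sent += p.Write(reqsize - sent);  // backend side makes progress
        got  += p.Read(reqsize - got);    // frontend side makes progress
        if (sent >= reqsize && p.buf.empty() && got < reqsize)
            break;                        // real code: timeout / EOF reply
    }
    return got;
}
```

Here a 256000-byte request completes even though the buffer holds only
130000 bytes at a time, which is the key property the patch relies on.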

>Before the QSocketDevice patch the block size was variable
>based on estbitrate. On low bitrate data, waiting to complete
>large blocks can add unnecessary latency. This may be easy to
>fix by reusing the block size calculations from a few
>versions back.

I would agree here. I think there certainly is room for improvement on
choosing the block size based on whatever heuristic criteria (bitrate,
connection speed, whatever). Again, as a first step I wanted to break the
requirement that request size <= 130000. In profiling the code I found that
the overhead of making the request to the backend was high, and so I wanted
to amortize this cost over more bytes == larger requested block size.
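One possible shape for such a heuristic, in the spirit of the
pre-QSocketDevice variable sizing Bruce describes (the constants and the
half-second scaling here are my own assumptions, not the patch or the old
code):

```cpp
#include <algorithm>

// Illustrative only: pick a block size from the estimated bitrate so that
// low-bitrate streams use small blocks (low latency) and high-bitrate
// streams use large blocks (amortized request overhead). The bounds match
// the 64000/256000 figures discussed in this thread.
int ChooseBlockSize(int estbitrate_kbps) {
    // Roughly half a second of data at the estimated bitrate,
    // clamped to the [64000, 256000] byte range.
    int bytes = (estbitrate_kbps * 1000 / 8) / 2;
    return std::clamp(bytes, 64000, 256000);
}
```

A 1 Mbit/s stream would stay at the 64000-byte floor, while an 8 Mbit/s
stream would hit the 256000-byte ceiling.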

>I guess what I expected to see was a separation of the reqsize
>(fixed at 64000) and the readblocksize (variable from 64000 to
>256000 or higher if needed in the future). safe_read(rf, ...)
>could use requestedbytes and 64000-byte REQUEST_BLOCKs to suck
>in the data. I think I'll give this a shot and send a proposed
>patch if it works out okay.

Not entirely sure what you're getting at here, could you reword for me?

>
>--  bjm
