Does the latency of the AGP card/bus matter? Does it affect the PCI bus, or is it totally separate? It shows up in the output of the 'lspci' command.
- AGP uses the same programming model as PCI, so it is listed by lspci; however, it is a physically separate bus. You can see this in the lspci output: device identifiers have the form xxxx:yy:zz.ww, where I don't know off the top of my head what xxxx is, but yy is the bus number, zz is the device number, and ww is the function number (for devices that have more than one "function"). If the "yy" values of two devices differ, they are on separate buses, and their latency settings will not affect each other's operation. It is also quite common for on-board components such as on-board IDE or SATA controllers to sit on a separate bus, so these will have a different bus number and again will not affect any of the devices in the PCI slots. Some motherboards have several PCI buses, so that for example two PCI slots are connected to the first bus and another two to the second; if this is the case, physically moving the cards around may help. TH 20:02, 20 April 2006 (UTC)
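The "separate bus" test above can be sketched in a few lines. This is a minimal illustration, not a real tool; the field names follow common PCI terminology, and the sample addresses are made up:

```python
# Split a full lspci -D style identifier (xxxx:yy:zz.ww) into its parts.
def parse_pci_address(addr):
    first, bus, dev_fn = addr.split(":")
    device, function = dev_fn.split(".")
    return {"first": first, "bus": bus, "device": device, "function": function}

def same_bus(a, b):
    """Two devices contend for the same bus only if the first two fields match."""
    pa, pb = parse_pci_address(a), parse_pci_address(b)
    return (pa["first"], pa["bus"]) == (pb["first"], pb["bus"])

print(same_bus("0000:00:08.0", "0000:00:09.0"))  # True: same bus, latencies interact
print(same_bus("0000:00:08.0", "0000:01:00.0"))  # False: different buses, independent
```

Two devices for which this returns False cannot starve each other of bus time, whatever their latency timers are set to.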
Confirmation of the problem
I was having problems with system glitching, as mentioned on this page, and setting the latency for the IDE bus a bit higher had this exact same effect. Everything worked without a hitch afterwards. No glitching at all.
Doesn't work on nForce4 Ultra
This "tweak" doesn't appear to work on my nForce4 Ultra (ASUSTeK Computer Inc. K8N4-E Mainboard). I run "setpci -v -s 00:08.0 latency_timer=b0" to set "IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)" to 176 (0xB0 in hex); however, if I run lspci -v again, the latency hasn't changed: it's still stuck at zero. "setpci" fails to work on any of the "onboard" components, though it does work on other PCI cards, like the HD and SD input cards. Running Gentoo with gentoo-sources kernel 2.6.18.
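One way to verify in a script whether a setpci write actually took effect is to parse the latency values back out of lspci -v text. A minimal sketch, run here against a captured sample string rather than live hardware (the sample is illustrative):

```python
import re

def latencies(lspci_v_output):
    """Map 'bus:dev.fn' slot -> latency timer value parsed from `lspci -v` text."""
    result = {}
    slot = None
    for line in lspci_v_output.splitlines():
        m = re.match(r"^(\S+) ", line)  # non-indented lines start with the slot
        if m:
            slot = m.group(1)
        m = re.search(r"latency (\d+)", line)
        if m and slot:
            result[slot] = int(m.group(1))
    return result

sample = """00:08.0 IDE interface: nVidia Corporation CK804 Serial ATA Controller (rev f3)
\tFlags: bus master, 66MHz, fast devsel, latency 0, IRQ 23
"""
print(latencies(sample))  # {'00:08.0': 0}
```

If the reported value is still 0 after the setpci command, as on this board, the chipset is silently ignoring the write.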
System stats: Athlon64 3200+ (Venice), ASUSTeK Computer Inc. K8N4-E Mainboard
Understanding the effects of latency setting
In the section Hauppauge PVR Cards it is stated "this will result in the PVR cards being able to send data faster than the IDE system is able to accept it". This seems to suggest that the PVR card is sending data to the IDE drive. This is not the case. Each flow is going between a device (disk or PVR) and RAM.
In theory, a lower latency setting for a device means its data-transfer bursts are cut off sooner. After each burst, the bus can be allocated to another device.
There are two important resources on a bus: bandwidth (the amount of data that can flow across it in a given period of time) and latency (how long a device must wait before its request for control of the bus is granted). Each matters in different ways.
Because disks have large buffers, only bandwidth matters to them.
Devices with small buffers need to have fairly low latency access to the bus or their buffers will be overrun (in the case of input devices) or underrun (in the case of output devices).
I don't know how large the buffers are in tuners so I don't know if overrun is a problem.
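The overrun question above comes down to simple arithmetic: a device filling a buffer of B bytes at R bytes per second can tolerate being kept off the bus for at most B/R seconds. The buffer size and stream rate below are assumptions for illustration only, since the actual tuner figures are unknown:

```python
# Rough overrun estimate: how long can a capture device be kept off the bus
# before its FIFO fills?  Both figures below are assumed, not measured.
def max_wait_us(buffer_bytes, fill_rate_bytes_per_s):
    return buffer_bytes / fill_rate_bytes_per_s * 1e6

rate = 8_000_000 / 8  # hypothetical ~8 Mbit/s MPEG-2 stream, in bytes per second
print(f"{max_wait_us(4096, rate):.0f} us")  # hypothetical 4 KiB FIFO -> 4096 us
```

By this estimate, even a small FIFO buys a capture device a few milliseconds of bus starvation before data is lost, which is why only pathological latency configurations cause glitches.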
If bus bandwidth is a problem, the latency settings can still be relevant! The reason is that they are a crude way of dividing up the bandwidth. If there are two devices wanting all the bandwidth they can get, and both are capable of long PCI bursts, then setting the latency of one to 100 and the other to 50 means that one device gets 100/150 of the bandwidth and the other gets 50/150 (assuming that the bus is allocated round robin).
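The 100/150 versus 50/150 split above can be written out explicitly. This assumes, as the text does, round-robin arbitration with both devices always ready to burst for their full timer value, and ignores handover overhead:

```python
# Latency timers as a crude bandwidth divider: each device's share is its
# timer value over the sum of all timer values (round-robin assumed).
def share(my_latency, other_latencies):
    total = my_latency + sum(other_latencies)
    return my_latency / total

print(share(100, [50]))  # 100/150, about 0.667
print(share(50, [100]))  # 50/150, about 0.333
```

The same formula extends to more than two devices: a card set to 50 sharing the bus with two cards set to 100 each would get 50/250 of the bandwidth under these assumptions.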
The latency setting has yet another effect on bandwidth: each bus handover takes bus cycles, cycles that don't transfer data. So very small latency settings could cause a significant loss of total bandwidth.
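The handover cost can be quantified: if a burst lasts L cycles and each handover wastes H cycles, the fraction of bus cycles doing useful work is L / (L + H). The overhead figure of 8 cycles below is an assumed number purely for illustration:

```python
# Fraction of bus cycles that transfer data, given burst length and the
# per-handover overhead (8 cycles here is an assumed, illustrative figure).
def efficiency(burst_cycles, handover_cycles):
    return burst_cycles / (burst_cycles + handover_cycles)

for lat in (8, 32, 128):
    print(lat, f"{efficiency(lat, 8):.0%}")  # 50%, 80%, 94%
```

The trend is what matters: halving an already-small latency timer roughly doubles the relative overhead, while shaving a large timer costs almost nothing.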
This document is meant to provide practical advice. Although I am questioning the basis of this advice, I don't have any advice to replace it. Hugh 05:36, 12 November 2006 (UTC)