Friday, 27 February 2015

How does Linux prevent an application from sending packets faster than the link can handle, without dropping them?


I couldn't make the question any clearer, so here's an example scenario:


Consider a Linux machine connected to an IP network through a physical interface with a bandwidth of 10 Mbit/s (or a higher-speed interface with a tc token bucket filter limiting the rate to 10 Mbit/s).
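

For the token bucket variant, a setup along these lines should work (just a sketch: it assumes the interface is eth0, and the burst and latency values are generic defaults rather than anything tuned):


tc qdisc add dev eth0 root tbf rate 10mbit burst 10kb latency 70ms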


On a remote machine (with an equally fast or faster link), start an iperf UDP server:


iperf -s -u -i 1


On the local machine, start an iperf client with a target bandwidth of 20 Mbit/s:


iperf -c <server ip> -u -i 1 -b 20M


Observation: the sender never exceeds the 10 Mbit/s rate (enforced at the link layer, either in hardware or by the tc qdisc).
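

To make sure this isn't just iperf pacing itself or under-reporting, the qdisc and UDP socket counters can be watched while the test runs (assuming the interface is eth0); in line with the observation above, they do not show any drops:


tc -s qdisc show dev eth0


netstat -su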


I expected the sender to push out 20 Mbit worth of packets per second, causing the local TX queue of the interface to build up and packet losses to start happening. But this is not the case. Why?
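

To rule out iperf doing its own application-level pacing, here is a rough reproducer that just sends UDP datagrams as fast as the socket allows for ten seconds and reports the achieved rate; I would expect it to end up at roughly the same 10 Mbit/s. This is a sketch only: the server address 192.0.2.1 and port 5001 are placeholders for the iperf server above, and the 1470-byte payload merely mimics iperf's default UDP datagram size.


/* udp_blast.c: send UDP datagrams as fast as possible for 10 seconds
 * and report the achieved rate. 192.0.2.1 and port 5001 are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                      /* iperf's default UDP port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder server IP */

    char payload[1470] = {0};                        /* roughly iperf's default UDP payload */
    long long bytes = 0;
    time_t start = time(NULL);

    while (time(NULL) - start < 10) {
        /* If the kernel applies backpressure instead of dropping, this call
         * blocks when the send path is congested; otherwise it returns
         * immediately and any excess gets dropped further down the stack. */
        ssize_t n = sendto(fd, payload, sizeof(payload), 0,
                           (struct sockaddr *)&dst, sizeof(dst));
        if (n > 0)
            bytes += n;
    }

    printf("achieved ~%.2f Mbit/s\n", bytes * 8 / 10.0 / 1e6);
    close(fd);
    return 0;
}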


I have tried looking into the net/sched directory of the Linux kernel source, but I can't seem to find the code responsible for this behavior.


I appreciate your help. Also, feel free to suggest changes to the title to make it more relevant.


