TCP pacing explained

In the field of computer networking, TCP pacing is the name given to a set of techniques that make the packet transmission pattern generated by the Transmission Control Protocol less bursty. Where switches and routers along the path may have insufficient buffering, TCP pacing is intended to avoid packet loss caused by exhaustion of buffer memory in those network devices.[1] It can be carried out by the network scheduler.
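
As a concrete illustration of scheduler-assisted pacing, the minimal Python sketch below (not taken from the cited sources) asks a Linux kernel to cap a connection's transmit rate with the SO_MAX_PACING_RATE socket option, which the fq packet scheduler enforces by spacing out that flow's packets. The fallback option number 47 is an assumption for Python builds that do not expose the constant, and the whole example is Linux-specific.

    import socket

    # SO_MAX_PACING_RATE is Linux-specific; 47 is the generic option number
    # from <asm-generic/socket.h>, used here as an assumed fallback when the
    # Python build does not expose the constant.
    SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel to pace this connection at no more than 1 MB/s
    # (value in bytes per second); the fq qdisc spaces packets to honour it.
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1_000_000)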

Bursty traffic can lead to higher queuing delays, more packet losses and lower throughput.[2] It has been observed that TCP's congestion control mechanisms can produce bursty traffic on high-bandwidth, highly multiplexed networks,[3] and TCP pacing has been proposed as a solution to this problem. TCP pacing involves spacing data transmissions evenly across a round-trip time (see https://homes.cs.washington.edu/~tom/pubs/pacing.pdf), as in the sketch below.
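
The following sketch is a hypothetical Python illustration of that idea; the segment size, congestion window and round-trip time values are assumptions chosen only for the example. Instead of sending a full window back to back, it spreads the window's segments evenly across one round-trip time.

    import socket
    import time

    MSS = 1460          # assumed maximum segment size in bytes
    CWND = 20 * MSS     # assumed congestion window for this illustration
    RTT = 0.050         # assumed round-trip time estimate in seconds

    def paced_send(sock: socket.socket, data: bytes) -> None:
        """Send data in MSS-sized segments, spaced evenly over one RTT.

        With CWND bytes allowed per RTT, the inter-segment gap is
        RTT / (CWND / MSS), rather than transmitting the whole window
        as a single burst.
        """
        segments_per_rtt = CWND // MSS
        gap = RTT / segments_per_rtt  # time between successive segments
        for offset in range(0, len(data), MSS):
            sock.sendall(data[offset:offset + MSS])
            time.sleep(gap)  # real stacks use fine-grained timers, not sleep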

Notes and References

  1. Wei, D.; Cao, P.; Low, S. "TCP pacing revisited". Proceedings of IEEE INFOCOM, Vol. 2, 2006.
  2. Kleinrock, L. Queueing Systems. Wiley, 1975. OCLC 25403139.
  3. Zhang, Lixia; Shenker, Scott; Clark, David D. "Observations on the dynamics of a congestion control algorithm". Proceedings of the Conference on Communications Architecture & Protocols, August 1991, pp. 133–147. New York, NY, USA: ACM. doi:10.1145/115992.116006. ISBN 0897914449. S2CID 7824777.