Bisection bandwidth explained

In computer networking, a network may be bisected into two equal-sized partitions. The bisection bandwidth of a network topology is the minimum bandwidth available between any two such partitions.[1] Given a graph G with vertices V, edges E, and edge weights w, the bisection bandwidth of G is

BB(G) = \min_{S \subset V :\, |S| = \frac{1}{2}|V|} \; \sum_{u \in S,\, v \in V \setminus S} w(u, v).

In other words, the network is bisected in such a way that the bandwidth between the two partitions is minimized.[2] A network is considered to have full bisection bandwidth if

BB(G) \geq \frac{1}{2}|V|.[3]

Intuitively, full bisection bandwidth means that if all vertices in the network are matched as source-destination pairs, then all pairs can send flow at rate 1 simultaneously without any bisection becoming a bottleneck. Bisection bandwidth therefore captures the bottleneck bandwidth of the bisected network as a whole.
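As a concrete check of this condition (an illustrative example, not drawn from the cited sources): a four-node ring with unit-weight links has BB(G) = 2, since every balanced bisection cuts at least two links, and \frac{1}{2}|V| = 2, so the ring has full bisection bandwidth. A four-node linear array, by contrast, has BB(G) = 1 < 2, so it does not.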

Bisection bandwidth calculations

For a linear array with n nodes, only one link needs to be broken to bisect the network into two equal partitions, so the bisection bandwidth is the bandwidth of one link.

For a ring topology with n nodes, two links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of two links. A tree topology with n nodes can be bisected at the root by breaking one link, so its bisection bandwidth is the bandwidth of one link.

For a two-dimensional mesh topology with n nodes, \sqrt{n} links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of \sqrt{n} links.

For a hypercube topology with n nodes, n/2 links must be broken to bisect the network, so the bisection bandwidth is the bandwidth of n/2 links.

Significance of bisection bandwidth

Theoretical support for the importance of this measure of network performance was developed in the PhD research of Clark Thomborson (formerly Clark Thompson).[4] Thomborson proved that important algorithms for sorting, the fast Fourier transform, and matrix-matrix multiplication become communication-limited, as opposed to CPU-limited or memory-limited, on computers with insufficient bisection bandwidth. F. Thomson Leighton's PhD research[5] tightened Thomborson's loose bound[6] on the bisection bandwidth of a computationally important variant of the De Bruijn graph known as the shuffle-exchange network. Based on Bill Dally's analysis of the latency, average-case throughput, and hot-spot throughput of k-ary n-cube networks for various k, low-dimensional networks (e.g., tori) have reduced latency and higher hot-spot throughput than high-dimensional networks (e.g., binary n-cubes) with the same bisection bandwidth.[7]

Note that bisection bandwidth and network throughput have also been shown to be asymptotically different metrics, which may grow at different rates depending on the network topology.[8]

Notes and References

  1. Book: John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. Third. Morgan Kaufmann Publishers, Inc. 2003. 978-1-55860-596-1. 789.
  2. Book: Solihin, Yan. Fundamentals of parallel multicore architecture. CRC Press. 2016. 9781482211191. 371–381.
  3. Book: Namyar . Pooria . Supittayapornpong . Sucha . Zhang . Mingyang . Yu . Minlan . Govindan . Ramesh . A throughput-centric view of the performance of datacenter topologies . 2021-08-09 . Proceedings of the 2021 ACM SIGCOMM 2021 Conference . https://dl.acm.org/doi/10.1145/3452296.3472913 . SIGCOMM '21 . New York, NY, USA . Association for Computing Machinery . 349–369 . 10.1145/3452296.3472913 . 978-1-4503-8383-7.
  4. A complexity theory for VLSI . C. D. Thompson. Carnegie-Mellon University. 1980. Technical Report CMU-CS-80-140.
  5. F. Thomson Leighton. F. Thomson Leighton. Complexity Issues in VLSI: Optimal layouts for the shuffle-exchange graph and other networks. MIT Press. 1983. 0-262-12104-2.
  6. Clark Thompson. Area-time complexity for VLSI. Proc. Caltech Conf. on VLSI Systems and Computations. 81–88. 1979.
  7. Bill Dally. Bill Dally. Performance analysis of k-ary n-cube interconnection networks. IEEE Transactions on Computers. 39. 775–785. 6. 1990. 10.1109/12.53599. 10.1.1.473.5096.
  8. Book: Jyothi . Sangeetha Abdu . Singla . Ankit . Godfrey . P. Brighten . Kolla . Alexandra . Measuring throughput of data center network topologies . 2014-06-16 . The 2014 ACM international conference on Measurement and modeling of computer systems . https://dl.acm.org/doi/10.1145/2591971.2592040 . SIGMETRICS '14 . New York, NY, USA . Association for Computing Machinery . 597–598 . 10.1145/2591971.2592040 . 978-1-4503-2789-3.