For decades, the adage “There’s no replacement for displacement” was dogma among the go-fast automotive gearhead crowd. The larger your cylinders, the bigger the internal combustion bang you could make, which, in turn, equated to more power at the wheels and a faster trip to your destination.
This sentiment was held in particularly high regard in North America, where gas was cheap, cars were big, and highways were new and uncongested.
Raw horsepower isn’t the only consideration when determining the best way to get from point A to point B, however; large engines are also heavy and inefficient, making them harder to justify when gas is expensive and highways are clogged with traffic, leaving you going nowhere fast. A number of innovations, such as variable valve timing, direct injection, and forced induction (turbocharging and supercharging), have made engines both more powerful and more efficient. In addition, route-planning applications such as Waze and driver-assist technologies like adaptive cruise control and blind-spot monitoring have made travel by car much safer and less stressful.
A similar dogmatic belief has gripped the performance networking space for the past 30 years, with bandwidth playing the role of displacement: the only real answer to maximizing networking speed. This made sense in the early days of the internet, when a linear, highly centralized network architecture was adopted to support the north-south traffic patterns created by applications reaching into centralized servers with simple pull requests.
But network requirements have evolved since then. Today’s high-performance networks must support workloads for high-performance computing (HPC), artificial intelligence (AI), and machine learning (ML) applications simultaneously, each with a variety of latency and bandwidth requirements. Network traffic is now multi-directional and much more sensitive to congestion and variable tail latency, which can delay overall application processing.
A better measure of performance in today’s networks is how well switching bandwidth is used, rather than the raw number of gigabits available. And the only way to truly improve performance is to stop focusing on increasing bandwidth and instead optimize the network to handle concurrent, sophisticated, high-performance workloads.
And just as you can’t go fast in a car when there is too much traffic on the road (no matter how powerful your engine is), packets won’t move through the network any faster with more bandwidth if congestion is present.
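The sensitivity of parallel workloads to tail latency can be illustrated with a toy simulation (hypothetical latency numbers, not measurements from any real network): a parallel step finishes only when its slowest flow arrives, so even a rare congestion event comes to dominate completion time as the number of concurrent flows grows.

```python
import random

random.seed(42)

def flow_latency(p_congested=0.05, base_ms=1.0, congested_ms=20.0):
    """One flow's latency: usually fast, occasionally delayed by congestion."""
    return congested_ms if random.random() < p_congested else base_ms

def job_time(n_flows):
    """A parallel step completes only when its slowest flow arrives."""
    return max(flow_latency() for _ in range(n_flows))

trials = 10_000
for n in (1, 16, 256):
    avg = sum(job_time(n) for _ in range(trials)) / trials
    print(f"{n:4d} parallel flows -> mean completion {avg:6.2f} ms")
```

Even though 95% of individual transfers are fast, with hundreds of flows per step the odds that at least one hits congestion approach certainty, so mean completion time converges on the worst-case latency. More bandwidth does not change this; only reducing the congestion events does.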
Rockport Networks CTO Matt Williams recently authored a white paper that details the key factors modern high-performance networks must address to enable better performance, including:
- Bandwidth efficiency
- Path diversity
- Short-time-frame reaction
- Parallel/MPI processing
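Path diversity, in particular, lends itself to a simple sketch (a toy model with made-up numbers, not Rockport’s actual routing logic): spreading the same set of flows across several equal-cost paths, rather than funneling them down one, sharply lowers the load on the busiest link.

```python
def max_link_load(flow_sizes, n_paths, assign):
    """Distribute flows across paths and return the busiest path's load."""
    loads = [0.0] * n_paths
    for i, size in enumerate(flow_sizes):
        loads[assign(i, n_paths)] += size
    return max(loads)

flows = [1.0] * 64  # 64 equal-sized flows

single = max_link_load(flows, 8, lambda i, n: 0)      # all flows on one path
spread = max_link_load(flows, 8, lambda i, n: i % n)  # spread across 8 paths

print(f"single path: {single}, eight paths: {spread}")  # 64.0 vs 8.0
```

The busiest link carries one-eighth the load when eight paths are used, which is the intuition behind offering more paths: the same aggregate traffic produces far less per-link congestion.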
Download the white paper to learn why the only meaningful way to address congestion and latency in the network, and to help high-performance workloads reach the finish line faster, is to adopt a distributed architecture that offers more paths and creates a smarter network, one that can anticipate and recalculate on the fly, before congestion and latency occur.
Or contact us if you want to speak to an expert about how a modern high-performance network can help address your specific requirements while improving the overall efficiency of your data center.