Faster server speeds on the horizon


Siemon is a Business Reporter client.

With emerging technologies driving higher network speeds in the data center, switch-to-server connection speeds are faster than ever. Siemon's Ryan Harris explains why high-speed, short-reach cable assemblies could help you make the most of your budget for next-generation server systems.

Artificial intelligence, machine learning, edge computing, and other emerging technologies have been widely adopted by businesses, driving the need for data center server systems to offer higher speeds to support these applications. Current server speeds of 10 and 25 Gb/s are good enough for mobile applications with many requests per second and provide enough bandwidth to support high-resolution video content, but text-based generative AI, trained edge inference, and machine learning already need 100 Gb/s or more.

However, recent advances in artificial intelligence have raised network bandwidth and speed expectations to a whole new level, with graphics processing units (GPUs) allowing server systems to analyze and train video models in just a couple of weeks of processing time. This is possible with server connectivity speeds ranging from 200 Gb/s to 400 Gb/s, and with server speeds of 800 Gb/s on the horizon.

Data center network engineers are tasked with finding a balance between cost and performance to meet the different needs of their organizations. While there are a variety of cabling options available to enable faster connections between the switch and the server in the data center, high-speed, short-reach cable assemblies can help maximize your data center budget. They not only support high-speed connectivity but also provide better power efficiency and the lower-latency data transmission needed for emerging applications.

Rapid Deployment Enables Agility

High-speed point-to-point cable assemblies are typically deployed in both large and small distributed edge and on-premises data centers using top-of-rack (ToR) or end-of-row (EoR) designs.

An EoR topology has a large network switch at the end of the row, with many connection points to the various server cabinets in that row. EoR uses fewer access-layer switches, making it easier for network engineers to roll out system updates, but it is more complex to implement and takes up more space. This cabling approach typically requires more cable management and relies on expensive, longer-reach transceiver modules that consume more power than short-reach point-to-point cables.

On the other hand, a top-of-rack design allows for quick deployments in both large and small spaces. In the ToR design, the network switch resides within the cabinet and connects to the servers in that cabinet, making it an ideal choice for easy data center expansion. Cable management and troubleshooting are also simplified, but the drawback lies in the larger number of switches that need to be managed.
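
To make that trade-off concrete, here is a minimal sketch of the switch-count and cable-length arithmetic for a single row. The rack count, server density and cable lengths are assumptions chosen purely for illustration, not Siemon figures.

```python
# Rough, illustrative comparison of ToR vs EoR switch counts and cable runs.
# All figures below are assumptions for the sake of the example, not Siemon data.

RACKS_PER_ROW = 12      # assumed row size
SERVERS_PER_RACK = 20   # assumed server density
EOR_CABLE_LEN_M = 15    # assumed average server-to-EoR-switch run
TOR_CABLE_LEN_M = 2     # assumed in-cabinet server-to-ToR-switch run

servers = RACKS_PER_ROW * SERVERS_PER_RACK

# EoR: one larger access switch per row, but every server cable leaves its rack.
eor_switches = 1
eor_cable_m = servers * EOR_CABLE_LEN_M

# ToR: one access switch per rack, with short in-cabinet cable runs.
tor_switches = RACKS_PER_ROW
tor_cable_m = servers * TOR_CABLE_LEN_M

print(f"EoR: {eor_switches} switch to manage, ~{eor_cable_m} m of cabling")
print(f"ToR: {tor_switches} switches to manage, ~{tor_cable_m} m of cabling")
```

Under these assumed numbers, EoR minimizes switches while ToR minimizes cable length, which is exactly the balance the two designs ask network engineers to weigh.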

High-speed server connections are typically available as direct attach copper cables (DAC), active optical cables (AOC), or transceiver assemblies. Different cable options can support transmission speeds from 10 Gb/s to 800 Gb/s, meaning data center facilities are well equipped when network equipment needs to be upgraded.

As the increased power consumption of new GPUs forces servers farther apart, new cooling methods are being adopted to keep server system density at rack scale. Alongside these advanced cooling technologies, Nvidia, a leading chipmaker, recently unveiled its new Blackwell GPUs, used in its GB200 NVL72 rack-scale design, at GTC 2024. As AI-enabled GPUs and server designs become increasingly dense, their footprint shrinks, making ToR rack-scale server systems capable of supporting current needs as well as possible future needs, such as video training or video inference.

However, large-scale server systems using EoR are still being deployed due to power and infrastructure limitations. When it is not known how much computing power will be needed, implementing scalable ToR systems and using a hybrid cloud approach helps balance resources so that on-premises server systems avoid downtime.

Not all cables are the same

Let's take a closer look at the three cabling options used for server connectivity. Direct attach copper cables are the most suitable option for making in-rack connections. DACs feature virtually zero power consumption (just 0.01W to 0.05W), supporting high-density server connections in the data center while delivering low latency at the lowest cost. Server connections within a cabinet can be made from the top to the bottom of the rack with just three meters of cabling. While passive DAC jumper cables lack active chips, limiting their reach at higher speeds, active copper cables are emerging as a solution. These active cables offer longer lengths and smaller diameters, extending the reach of short-reach copper for server connections in the future.

Active optical cables (AOC) support longer lengths at higher speeds and smaller cable diameters for links of up to 30 meters. However, AOCs, which use low-power, short-reach multimode optics, are more expensive than DACs. The small cable diameter is ideal for higher-density in-cabinet breakout connections in ToR designs and can simplify switch-to-switch aggregation connections located several cabinets away. The point-to-point cable assembly design facilitates rapid deployments.

Transceivers using structured fiber cabling can cover lengths of up to 100 meters, for example to connect multiple rows. Although transceivers are the most expensive of the three options, fiber offers a small cable diameter and allows existing cabling infrastructure to be reused.

The following charts provide a comparison of purchase and power costs based on 500 server connections using 100G DAC cables (25G per lane). These numbers multiply quickly with the volume of server connections. As speeds increase, so do the price and power consumption of active chips. Price and power budgets make cabling an important decision when planning a deployment strategy to support next-generation server systems.

[Chart: purchase and power cost comparison for 500 server connections (Siemon)]
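
As a rough way to reason about how purchase and power costs scale, the sketch below runs the arithmetic for 500 links. Every unit price, wattage and electricity rate in it is a hypothetical placeholder, not a Siemon figure; substitute vendor quotes and your facility's power cost for a real estimate.

```python
# Illustrative purchase-and-power comparison for 500 switch-to-server links.
# Prices, power draws and the electricity rate are assumed placeholders only.

CONNECTIONS = 500
HOURS_PER_YEAR = 24 * 365
POWER_COST_PER_KWH = 0.12   # assumed $/kWh

options = {
    # option: (assumed unit price in $, assumed power per link in W)
    "100G DAC":          (100, 0.05),
    "100G AOC":          (300, 3.5),
    "100G transceivers": (600, 4.5),   # pair of modules plus fiber jumper
}

for name, (unit_price, watts) in options.items():
    purchase = CONNECTIONS * unit_price
    kwh_per_year = CONNECTIONS * watts * HOURS_PER_YEAR / 1000
    energy_cost = kwh_per_year * POWER_COST_PER_KWH
    print(f"{name}: ${purchase:,} purchase, "
          f"{kwh_per_year:,.0f} kWh/yr (~${energy_cost:,.0f}/yr in power)")
```

Even with placeholder numbers, the pattern the article describes shows through: the near-zero power draw of DACs compounds across hundreds of connections, while active optics and transceivers add both purchase cost and an ongoing power bill.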

There are several solutions available to connect servers at the edge and in the data center, making forward planning decisions more complex than ever. Following industry-standard best practices is a safe bet, and in an era of rapid change, agility is a major advantage in keeping pace. When it comes to selecting cable assemblies for next-generation network topologies, network administrators can benefit from an agile deployment model that uses high-speed point-to-point cable assemblies in ToR server systems. Working with a trusted cable assembly manufacturer can help you make an informed decision when reviewing all your options.


For more information visit siemon.com.
