Dell Power Solutions

Blade Servers Boost Data Center Performance

By Allen Light and Claus Stetter (August 2002)

In the quest for data center efficiencies, many organizations may turn to blade servers, which can simplify server management and enable more computing power in less floor space. The networking capabilities offered by Gigabit Ethernet switches help blade servers achieve higher performance, and the characteristics of BaseX physical-layer interconnects make them well suited for use in blade servers.

Data center real estate is expensive. As a result, companies have purchased ever-thinner, rack-optimized servers, bolted to racks in collections that resemble six-foot-tall stacks of pizza boxes. Blade servers extend this consolidation concept and further increase server density. A blade server chassis contains several blades—each one a circuit board with memory, a CPU, and hard disk—stacked side-by-side and interconnected over a common backplane with one or more fabric (switch) blades. The blades share a power supply, cables, and storage, all of which reduce heat production, costs, and space.

These servers have become popular for tasks such as delivering Web pages or housing protective firewalls because they use less floor space and electricity than racks of traditional servers. As blade servers become more powerful, as exemplified by the Dell PowerEdge 1655MC, they will increasingly be used for many other server applications. Analysts believe that blades eventually will form a substantial part of the overall server market. As Figure 1 illustrates, one Dell PowerEdge 1655MC blade server has the same processing power as six Dell PowerEdge 1650 servers, but occupies 50 percent of the rack space.

Figure 1. One Dell PowerEdge 1655MC blade server provides the same processing power as six Dell PowerEdge 1650 servers

Compared to their predecessors, blade servers offer easy installation, improved performance and reliability through dedicated software, and flexibility that lets administrators quickly reassign groups of blades to different computing tasks.

Saving power and space with blade servers

Manufacturers have built and customers have deployed rack-mounted servers as individual units. Each unit has its own power supply, cooling system, sheet-metal chassis, and human interfaces (keyboard, video, and mouse). Viewing a fully populated rack of thin 1U (1-3/4") servers clearly demonstrates the duplication of these common components.

The Dell PowerEdge 1655MC chassis features two power supplies—one main supply and a backup—shared among six server blades. No power supply is 100 percent efficient; each one dissipates some of its input power as heat. Consequently, two supplies shared across six blades consume much less power than six individual supplies would, while the backup unit provides greater reliability.
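A rough Python sketch makes the arithmetic concrete. Every wattage and efficiency figure below is an illustrative assumption, not a measured Dell specification; the point is simply that consolidating supplies cuts conversion overhead.

# Rough sketch of why shared power supplies waste less energy.
# Every figure here is an illustrative assumption, not a measured
# specification.

BLADE_LOAD_W = 150             # assumed DC load per blade or 1U server
NUM_SERVERS = 6

# A lightly loaded individual supply typically runs less efficiently
# than a shared supply operating near its design load.
INDIVIDUAL_EFFICIENCY = 0.65   # assumed, one supply per 1U server
SHARED_EFFICIENCY = 0.80       # assumed, one main supply for six blades

individual_draw = NUM_SERVERS * BLADE_LOAD_W / INDIVIDUAL_EFFICIENCY
shared_draw = NUM_SERVERS * BLADE_LOAD_W / SHARED_EFFICIENCY

print(f"Six individual supplies: {individual_draw:.0f} W from the wall")
print(f"One shared supply:       {shared_draw:.0f} W from the wall")
print(f"Savings per chassis:     {individual_draw - shared_draw:.0f} W")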

Because blades eliminate the need for the sheet-metal chassis and cooling channels of individual rack servers, a common chassis can accommodate many more blades. Sharing fans among the blades and channeling air between them saves further space. A common six-foot server rack currently reaches capacity with 42 1U servers; the same rack can host many blade server chassis, accommodating up to hundreds of blades.
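The density arithmetic can be sketched in a few lines of Python. The 3U chassis height is an assumption for illustration, as is the hypothetical higher-density chassis used to show how blade counts reach into the hundreds.

# Illustrative rack-density comparison. The chassis heights and blade
# counts are assumptions for this sketch, not product specifications.

RACK_UNITS = 42

servers_1u = RACK_UNITS            # one conventional server per 1U slot
chassis_3u = RACK_UNITS // 3       # assumed 3U six-blade chassis
blades_six = chassis_3u * 6

# A hypothetical higher-density design pushes the count past 100.
chassis_7u = RACK_UNITS // 7       # assumed 7U chassis
blades_dense = chassis_7u * 20     # assumed 20 blades per chassis

print(f"1U servers per rack:        {servers_1u}")
print(f"Six-blade chassis per rack: {blades_six} blades")
print(f"High-density chassis:       {blades_dense} blades")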

Reducing complexity in the network

Interconnecting a redundant data network, management network, storage network, performance clustering network, and I/O requires a large amount of unsightly cabling. This conglomeration of cables is prone to failure and difficult to maintain, and the sheer mass of cables often interferes with the server cooling systems.

In a blade server chassis, information can flow over circuit-board traces on a backplane as efficiently as over a cable. These backplanes can accommodate all of the interconnect technology mentioned earlier in a fraction of the space. A common backplane connector replaces many cable connectors, reducing the profile of the server blade and often allowing more interconnectivity between the blades (see Figure 2).

Figure 2. How blades interconnect in a blade server

Inserting a server blade into the backplane instantly connects it to a network of switches, storage, and other blades in the chassis. The switch blade includes Broadcom Gigabit Ethernet connections that link the Dell PowerEdge 1655MC chassis to the rest of the network.

Simplifying server management through remote capabilities

Managing a rack of 42 servers is challenging enough; managing hundreds of servers in a single rack seems daunting. Fortunately, Dell blade servers have been designed to be operated and managed remotely.

A broad range of technology enables remote management. System-critical sensors are present on each individual blade and within the key components of a blade server chassis. These sensors enable the blade server to send an alert via e-mail, pager, or phone to a network manager when particular events occur, such as the temperature exceeding a certain threshold or a cooling fan not rotating at a predefined speed. The network manager can then view the blade server configuration through a Web browser, pinpoint the problem, and shift the workload to other servers. This capability greatly improves system uptime and often prevents costly hardware damage because problems are caught early.
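The alerting pattern just described can be sketched in a few lines of Python. The sensor-query function, threshold, addresses, and mail relay below are hypothetical placeholders; a real chassis exposes its sensors through the management firmware, not through any API shown here.

# Minimal sketch of threshold-based alerting. The sensor query and the
# SMTP relay are hypothetical placeholders, not a real chassis API.

import smtplib
from email.message import EmailMessage

TEMP_LIMIT_C = 45                    # assumed alert threshold
SMTP_RELAY = "mail.example.com"      # placeholder mail relay

def read_blade_temperature(blade_id: int) -> float:
    """Hypothetical stand-in for a chassis sensor query."""
    raise NotImplementedError("wire this to the chassis management interface")

def alert_if_overheating(blade_id: int) -> None:
    temp = read_blade_temperature(blade_id)
    if temp <= TEMP_LIMIT_C:
        return                       # within limits; nothing to do
    msg = EmailMessage()
    msg["Subject"] = f"Blade {blade_id}: {temp:.1f} C exceeds {TEMP_LIMIT_C} C"
    msg["From"] = "chassis-monitor@example.com"
    msg["To"] = "netadmin@example.com"
    msg.set_content("Check cooling and consider shifting the workload.")
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)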

Advanced management features in blade servers allow network managers to remotely boot individual servers and upgrade the operating system and applications. Previously, these tasks required an administrator's physical presence in the data center to interact with the server through a dedicated keyboard, mouse, and monitor, and to load applications through the local CD drive.

Using Gigabit Ethernet as a backplane technology

The Dell PowerEdge 1655MC blade server chassis features Broadcom Gigabit Ethernet technology for its backplane interconnect. This technology offers several advantages over other networking technologies for blade server backplanes, including easy integration into the LAN; proven, standards-based interconnect technology; and support for Class of Service (CoS), port aggregation, redundancy, and management protocols.

Ethernet allows easy integration into the network
Using an Ethernet switching fabric between server blades makes connecting the blade server chassis to the rest of the network exceedingly easy and cost-effective. Most enterprise networks use Ethernet in the LAN, so when the blade server fabric and the surrounding network use the same technology, no protocol converters are needed.

Standards-based, proven interconnect technology eases transition to blades
Manufacturers ship millions of Gigabit Ethernet ports and the associated chipsets each month. Ethernet itself, in use for more than 20 years, is the dominant enterprise networking technology. It is robust, field proven, and widely deployed.

Blade servers featuring Ethernet backplane technology achieve low-cost connectivity due to significant economies of scale. Using Ethernet in blade servers also avoids extensive training cycles for IT personnel to learn a new networking protocol.

Powerful features enhance networking
The Dell PowerEdge 1655MC blade server supports powerful Ethernet features that facilitate networking capabilities and network management in blade servers.

Class of Service. CoS prioritizes time-sensitive traffic ahead of lower-priority traffic through the blade server backplane and fabric.

Port aggregation (trunking). Bandwidth can be increased in 1 Gbps increments between the blades and other devices attached to the switch blade by trunking multiple ports together.

Redundancy. Port aggregation with automatic trunk failover is a simple mechanism that provides redundancy in a blade server. If a port in a trunk fails, traffic will automatically be redistributed over the remaining links (see the sketch after this list).

Built-in management. The use of Remote Monitoring (RMON) and Simple Network Management Protocol (SNMP) in an Ethernet switch provides built-in management for the blade server.

Cascading servers. Direct Ethernet links between two (or more) fabrics allow administrators to cascade multiple blade servers. These links are made with low-cost Category 5 copper cables.
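The failover behavior noted under "Redundancy" can be illustrated with a short Python sketch. Real switch ASICs implement their own hashing schemes in hardware; this simply shows how hashing each flow onto the set of live ports redistributes traffic when a link drops.

# Sketch of hash-based distribution over an aggregated trunk, and the
# redistribution that follows a link failure. Illustrative only; switch
# hardware uses its own hash function.

from zlib import crc32

def pick_port(flow_id: str, live_ports: list) -> int:
    """Map a flow onto one member of the trunk by hashing its identifier."""
    return live_ports[crc32(flow_id.encode()) % len(live_ports)]

trunk = [1, 2, 3, 4]                          # four aggregated 1 Gbps ports
flows = [f"10.0.0.{i}:80" for i in range(8)]  # sample flow identifiers

print("Before failure:")
for flow in flows:
    print(f"  {flow} -> port {pick_port(flow, trunk)}")

trunk.remove(3)                               # port 3 fails; trunk shrinks
print("After port 3 fails:")
for flow in flows:
    print(f"  {flow} -> port {pick_port(flow, trunk)}")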

Fabric interconnect offers cost advantages
Fabric blade configurations are much simpler than stand-alone Ethernet switches. As a result, manufacturers can build these switch fabrics from off-the-shelf components with comparatively little software and board-design effort, providing cost advantages over other fabric technologies. Cost-effective Gigabit Ethernet switch chipsets from Broadcom offer comprehensive feature sets and integration at many levels, including internal buffer memories, on-chip address tables and management counters, and integrated physical-layer devices (Serializer/Deserializer, or SerDes).

Interconnecting blade servers over the backplane

Category 5 copper networking cable usually connects Ethernet LANs. This cabling technology has been widely deployed in corporate offices, and the interoperability between 10, 100, and 1000BaseT physical-layer devices has made it a favorite among networking professionals. Gigabit Ethernet products that use the 1000BaseX physical layer have not been as widely deployed because of the higher cost of the optical transceivers and fiber-optic cable traditionally paired with this type of physical-layer device.

Blade servers present a unique operating environment for the small LAN that exists between the server and switch blades. These devices are connected over a backplane material, usually low-cost FR4, using either BaseT or BaseX as the physical layer in the design. For designs likely to standardize on Gigabit Ethernet, a 1000BaseX SerDes physical-layer device offers advantages over 1000BaseT physical-layer devices. For example, using small coupling capacitors instead of magnetics to AC-couple the blades results in lower power consumption and better per-pin bandwidth utilization. Using 1000BaseX yields blade servers that are denser, less expensive, and less power hungry, yet retain all of the benefits of Gigabit Ethernet.

BaseX devices require no magnetics
BaseT physical-layer devices are designed to operate over 100 meters of Category 5 cable. When two electrical devices are coupled using direct current (DC) over this distance, the ground potential at one location could differ from that at the other, causing a ground loop. In this situation, excessive current could flow through the networking cable and damage it or the circuitry on either end of the connection.

To prevent this situation, the physical layer is decoupled from the networking cable through transformers called magnetics. The server blades and switch blades within a blade server chassis share the same electrical ground, eliminating the need for magnetics. However, most BaseT physical-layer devices are designed for use with magnetics, which therefore cannot be omitted from the system design. Magnetics add component cost to a blade server design and consume valuable board real estate.

BaseX physical-layer devices (SerDes) do not require magnetics. These devices can be AC-coupled from the backplane with a single capacitor, allowing the blades to be hot plugged into the system at very low cost and almost no real estate penalty.
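A quick back-of-the-envelope calculation shows why a single small capacitor suffices. The component values below are illustrative assumptions; the takeaway is that the resulting high-pass cutoff sits far below the signal's spectral content.

# Back-of-the-envelope check that a small series capacitor passes
# gigabit signaling. Component values are illustrative assumptions.

from math import pi

C = 100e-9    # assumed 100 nF coupling capacitor
R = 50.0      # assumed 50-ohm line termination

# The capacitor and termination form a high-pass filter; content above
# the cutoff frequency passes essentially unattenuated.
f_c = 1 / (2 * pi * R * C)

print(f"High-pass cutoff: {f_c / 1e3:.1f} kHz")
# The 8b/10b line code used by 1000BaseX keeps the signal's spectral
# content in the hundreds of MHz, so the capacitor is transparent.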

BaseX devices consume less power
Enabling a 1000BaseT physical-layer device to transmit and receive data at 1000 Mbps across 100 meters of Category 5 unshielded twisted-pair (UTP) cable requires complex digital signal processing for echo cancellation and crosstalk compensation. This signal processing consumes power.

Blade server backplanes are not as electrically complex as Category 5 cables and connector patch panels. The compact signaling design of a BaseX physical layer allows integration of several physical-layer devices directly onto switch application-specific integrated circuits (ASICs) that have relatively large port counts. In addition, testing by Broadcom has shown that 1000BaseX physical-layer devices consume approximately one-fifth the power consumed by 1000BaseT devices. Lower power means easier cooling and more rack density—desired goals for blade server design.
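Scaling that roughly five-to-one figure across a chassis makes the cooling benefit concrete. The 1000BaseT per-port baseline below is an assumed round number rather than a datasheet value.

# Scaling the roughly one-fifth power figure across a chassis. The
# 1000BaseT per-port wattage is an assumed round number.

BASET_PORT_W = 1.5                  # assumed per-port power, 1000BaseT
BASEX_PORT_W = BASET_PORT_W / 5     # per the ~5x reduction cited above

PORTS = 6 + 2                       # six blade links plus two uplinks

print(f"1000BaseT backplane: {PORTS * BASET_PORT_W:.1f} W")
print(f"1000BaseX backplane: {PORTS * BASEX_PORT_W:.1f} W")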

BaseX devices need fewer connections
A Category 5 networking cable consists of four twisted pairs of wires, so a 1000BaseT physical-layer device requires eight physical connections. All eight connections must be in place and in good condition for the 1000BaseT device to reach its full speed.

In contrast, a 1000BaseX physical-layer device requires only two pairs, or four connections, to achieve the same speed. Fewer physical connections mean lower pin counts on the physical-layer device, smaller backplane connectors, and less complex backplane routing—simplifying backplane design.

Integrating SerDes in Gigabit Ethernet switch chips

Hot-pluggable capability combined with lower power requirements and higher per-pin throughput makes 1000BaseX with SerDes a good physical-layer choice for blade servers. Driving the backplane then requires switch chips with the same interface.

Two design factors become increasingly important in server blade environments: power consumption and board space. Both ultimately limit the feasible port density and prevent further reductions in system cost.

Systems that consume significant power require expensive and often unreliable cooling arrangements. The mean time between failures (MTBF) of a cooling fan is substantially lower than that of a silicon device, and the weakest component determines the overall system MTBF. Such systems also require stronger and more expensive power supplies.

Board space is always valuable, especially in systems such as blade servers. Ideally, the space on the faceplate or the backplane—not the motherboard—would limit the number of ports, achieving the highest possible performance and density per shelf inch.

Modern 0.13-micron semiconductor process technology and the SerDes core design capabilities used by Broadcom help to counter these constraints. The SerDes component for each port can be integrated inside the switch chip itself—even for very high port-count devices such as a 12-port Gigabit Ethernet switch with a 10 Gbps uplink. This capability greatly reduces the number of components as well as the required board space (see Figure 3). Connecting a Gigabit Ethernet switch port to fiber-optic transceivers or to a backplane requires a SerDes component.

Figure 3. Integrated SerDes saves power and space

Power savings come mainly from eliminating I/Os on the integrated circuit. The board-space figures do not even account for the reduced number of traces on the printed circuit board, so the actual savings are greater still. Additional benefits of integrated SerDes include elimination of separate packages, fewer components to source, and no integrated-circuit vendor compatibility issues.

Dell PowerEdge 1655MC blade servers include Broadcom switch chips with integrated SerDes, building a high-performance, full-featured network switch into every blade server while keeping rack density high and power consumption and cooling requirements low.

Providing exceptional performance in less space

The Dell PowerEdge 1655MC blade server and its associated technologies, such as Broadcom Gigabit Ethernet controllers and switches, enable high-density, low-power, and relatively small form factor installations. Using blade servers, companies can efficiently and cost-effectively increase compute density in the data center, enable remote software provisioning and server management, and implement new scale-out architectures and load-balancing technologies. At the same time, such blade servers reduce the data-center floor space and associated power and cooling requirements.

Allen Light (alight@broadcom.com) is a product line manager in the High Speed Networking Line of Business at Broadcom Corporation, an Irvine, California-based provider of integrated circuits enabling broadband communications. Allen received a B.S.E.E. from the University of California, Davis, and an M.B.A. from the University of New Mexico. He has held marketing and application engineering positions at Intel and Philips Semiconductors.

Claus Stetter (cstetter@broadcom.com) is a product line manager for the Broadcom MetroSwitch Gigabit Ethernet product line. He received his M.S.E.E. in Germany and has held marketing positions for communications semiconductor products at Allayer Communications (subsequently acquired by Broadcom), Infineon Technologies, and Fujitsu Microelectronics.

FOR MORE INFORMATION

http://www.broadcom.com
http://www.dell.com
