Monday, July 14, 2008

INFINIBAND

INTRODUCTION

To meet the I/O needs of the enterprise, the Infiniband architecture (IBA) was created. Designed from the ground up as a new universal I/O standard, the IBA was developed to connect servers with remote storage, networking devices and other servers, as well as for use inside servers for interprocessor communication.
Infiniband (infinite bandwidth) is both an I/O architecture and a specification for the transmission of data between processors and I/O devices. It has emerged as a successor to the PCI bus and has been gradually replacing PCI in high-end servers and PCs. Instead of sending data in parallel, which is what happens with PCI, Infiniband sends data in serial mode and can carry multiple channels of data at the same time by multiplexing signals. The Infiniband architecture is capable of supporting tens of thousands of nodes in a single subnet.
Infiniband is a point-to-point, switched I/O fabric architecture that increases its bandwidth as switches are added. Each end point, or node, can vary from an inexpensive single-chip SCSI or Ethernet adaptor to a complex host system.

NEED FOR INFINIBAND

Over the last decade, the speed of CPUs has grown much faster than the capabilities of the I/O bus, presenting a serious performance mismatch and a bottleneck for servers. The exploding need for greater I/O performance demanded by the internet, e-commerce, symmetric multiprocessing, server clustering and remote storage has outpaced the capabilities of the PCI bus. With CPU performance surpassing 3 GHz and network bandwidth reaching 1 TB per second, there is a critical need for an I/O architecture that meets and exceeds the performance capabilities of processors and networks. The PCI bus severely diminishes a processor's ability to push and retrieve data to external devices quickly. In addition, the slow I/O bus contributes to bottlenecks inside servers.

COMPONENTS OF INFINIBAND

The five primary components that make up an Infiniband fabric are listed below:

Host Channel Adaptor (HCA):
The HCA is an interface that resides within a server and communicates directly with the server's memory and processor as well as the IBA fabric. The HCA guarantees delivery of data, performs advanced memory access and can recover from transmission errors. An HCA can be a PCI-to-Infiniband card or it can be integrated on the system's motherboard.
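
On Linux hosts the HCA is typically programmed through the Verbs API (libibverbs). The following is a minimal sketch, assuming libibverbs is installed and at least one HCA is present; it simply enumerates the adaptors visible to the host and queries the state and LID of each adaptor's first port. The file name and compile command are illustrative.

/* Sketch: list the HCAs on a host using libibverbs (the Verbs API).
   Compile with: gcc list_hcas.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)   /* port numbers start at 1 */
            printf("HCA %s: port 1 state=%d, LID=0x%04x\n",
                   ibv_get_device_name(devs[i]), port.state, port.lid);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}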

Target Channel Adaptor (TCA):

A TCA enables I/O devices, such as disks, to be located within the network, independent of a host computer. TCAs can communicate with an HCA or a switch.

Switch:

The switch allows many HCAs and TCAs to connect to it and handles network traffic. The switch offers higher availability, higher aggregate bandwidth, load balancing, data mirroring and much more. A fabric is simply a group of interconnected switches. If a server goes down, the switch continues to operate, and it also frees up servers and devices by handling network traffic.

Router:

The router forwards data packets from one local subnet to other external subnets. The router reads the global route header and forwards packets based on the IPv6 network-layer address. The router rebuilds each packet with the proper local address header as it is passed on to the new subnet.
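
In the Verbs API this cross-subnet addressing shows up when a sender marks a destination as global: the packet then carries a global route header whose source and destination addresses are 128-bit GIDs in IPv6 format. The sketch below, again assuming libibverbs, builds such an address handle; the port number, GID index and hop limit are illustrative values.

/* Sketch: create an address handle for a destination behind a router.
   Marking the handle as global makes the HCA generate a GRH. */
#include <string.h>
#include <infiniband/verbs.h>

struct ibv_ah *make_global_ah(struct ibv_pd *pd,
                              union ibv_gid remote_gid,
                              uint16_t remote_lid)
{
    struct ibv_ah_attr attr;
    memset(&attr, 0, sizeof(attr));

    attr.dlid           = remote_lid;  /* LID used within the local subnet     */
    attr.port_num       = 1;           /* local HCA port to send from          */
    attr.is_global      = 1;           /* request a global route header (GRH)  */
    attr.grh.dgid       = remote_gid;  /* destination GID (IPv6-style address) */
    attr.grh.sgid_index = 0;           /* index of the source GID on this port */
    attr.grh.hop_limit  = 64;          /* allow the packet to traverse routers */

    return ibv_create_ah(pd, &attr);   /* NULL on failure */
}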

Subnet Manager:

The subnet manager is an application responsible for configuring the local subnet and ensuring its continued operation. Configuration responsibilities include managing switch and router setup, and reconfiguring the subnet if a link goes down or a new one is added.
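
Hosts do not run the subnet manager themselves, but each active port records which subnet manager configured it. As a small sketch (assuming libibverbs and an already opened HCA context ctx), a host can query its port attributes to find the LID of that subnet manager:

/* Sketch: report the LID of the subnet manager that configured port 1. */
#include <stdio.h>
#include <infiniband/verbs.h>

void print_sm_info(struct ibv_context *ctx)
{
    struct ibv_port_attr port;

    if (ibv_query_port(ctx, 1, &port))
        return;                        /* query failed */

    printf("subnet manager LID: 0x%04x (SL %u)\n",
           port.sm_lid, port.sm_sl);
}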

ADVANTAGES OF INFINIBAND

Unified I/O Fabric:

Servers are often connected to three or four different networks redundantly, with many wires and cables spilling out. By creating a unified fabric, Infiniband takes I/O outside of the box and provides a mechanism to share I/O interconnects among many servers. Infiniband does not eliminate the need for other interconnect technologies. Instead, it creates a more efficient way to connect storage networks, communications networks and server clusters together.
Helping Data Centres:
Infiniband meets the increasing demands of the enterprise data centre. The intense development collaboration marks an unprecedented effort in the computing industry, underscoring the importance of Infiniband to server platform design. The architecture is grounded in the fundamental principles of channel-based I/O, the very I/O model favored by mainframe computers. All Infiniband connections are created with Infiniband links, using both copper wire and fibre optics for transmission. Seemingly simple, this design creates a new way of connecting servers together in the data centre. With Infiniband, new server deployment strategies become possible.

Independent Scaling of Processing and I/O:

One example of Infiniband's impact on server design is the ability to design a server with I/O removed from the server chassis. This enables independent scaling of processing and I/O capacity, creating more flexibility for data centre managers. Unlike today's servers, which contain a defined number of I/O connections per box, Infiniband servers can share I/O resources across the fabric. This method allows a data centre manager to add processing performance when required, without the need to add more I/O capacity. As data centre managers upgrade and add storage and networking connectivity to keep up with traffic demand, there is no need to open every server box to add network interface cards (NICs) or fibre channel host bus adapters. Instead, I/O connectivity can be added to the remote side of the fabric through target channel adaptors and shared among many servers. This decreases technician time for data centre upgrades and expansion, and provides a new model for managing interconnects. As networking connections become increasingly powerful, data pipes that could saturate one server can be shared among many servers to effectively balance server requirements. The result is a more efficient use of computing infrastructure and a decrease in the cost of deploying fast interconnects to servers.

Raising Server Density:

Removal of I/O from the server chassis also has a profound impact on server density (the amount of processing power delivered in a defined physical space). As servers transition into rack-mounted configurations for easy deployment and management, floor space is at a premium.
By removing I/O from the server chassis, server designers can fit more processing power into the same physical space. More importantly, compute density, the amount of processing power per U (a U is a measurement of rack height equating to 4.44 cm), will increase through the expansion of available space for processors inside a server. Additionally, new modular designs will improve serviceability and provide for faster provisioning of incremental resources such as CPU modules or I/O expansion.

Enhanced Performance:

Data centre performance is currently measured by the performance of individual servers. With Infiniband, this model will shift from individual server capability to the aggregate performance of the fabric. Infiniband will ultimately enable the clustering and management of multiple servers as one entity. Performance will scale by adding additional boxes, without many of the complexities of traditional clustering. Even though more systems can be added, they can be managed as one unit. As processing requirements increase, additional power can be added to the cluster in the form of another server or 'blade'. Today's server clusters rely on proprietary interconnects to effectively manage the complex nature of clustering traffic. With Infiniband, server clusters can be configured for the first time with an industry-standard I/O interconnect, creating an opportunity for clustered servers to become ubiquitous in data centre deployments. With the ability to effectively balance processing and I/O performance through connectivity to the Infiniband fabric, data centre managers will be able to react more quickly to fluctuations in traffic patterns, upswings in processing demand, and the need to retool to meet changing business needs.

Improved Reliability:

Infiniband increases server reliability in a multitude of ways:

a. Because Infiniband is grounded in a channel-based I/O model, connections between fabric nodes are inherently more reliable than in today's I/O paradigm.

b. The Infiniband protocol uses an efficient message-passing structure to transfer data (see the sketch after this list).


c. Infiniband fabrics are constructed with multiple levels of redundancy in mind. Nodes can be attached to a fabric through more than one link for link redundancy. If a link goes down, not only is the fault limited to that link, the additional link ensures that connectivity to the fabric continues. Creation of multiple paths through the fabric results in intra-fabric redundancy. If one path fails, traffic can be rerouted to the final endpoint destination. Infiniband also supports redundant fabrics for the ultimate in fabric reliability. With multiple redundant fabrics, an entire fabric can fail without causing data centre downtime.
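
To make the message-passing structure in point (b) concrete, here is a minimal sketch assuming libibverbs, an already connected queue pair qp, and a registered memory region mr covering buf. Work is described in a work request and handed to the HCA, which transfers the message and later reports the outcome on a completion queue; this hand-off is what lets the channel adaptors, rather than the CPUs, police delivery.

/* Sketch: post one message on a connected queue pair with libibverbs. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_message(struct ibv_qp *qp, struct ibv_mr *mr,
                 void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,   /* where the message lives            */
        .length = len,
        .lkey   = mr->lkey,         /* proves the memory was registered   */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode     = IBV_WR_SEND;        /* plain channel send              */
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;  /* ask for a completion entry      */

    return ibv_post_send(qp, &wr, &bad_wr);  /* 0 on success */
}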

CONCLUSION
More than 30 companies have introduced Infiniband-based products into the marketplace, with more products announced on a weekly basis. Infiniband implementations today are prominent in server clusters where high bandwidth and low latency are key requirements. In addition to server clusters, Infiniband is the interconnect that unifies the compute, communications and storage fabrics in the data centre. Several Infiniband blade server designs have been announced by leading server vendors, which is accelerating the proliferation of dense computing.
The Linux 2.6.11 kernel also includes driver support for the Infiniband interconnect architecture.


