The ConnectX-2 adapter card with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for enterprise data centers, high-performance computing, and embedded environments. InfiniBand and 10 Gigabit Ethernet provide high-performance interconnects for clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications, along with seamless client-server connectivity over Ethernet.
ConnectX-2 delivers low latency and high bandwidth for performance-driven server and storage clustering applications, which benefit from its reliable transport connections and advanced multicast support. Network protocol processing and data movement, including InfiniBand RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention, improving overall server efficiency. ConnectX-2 scales to clusters of tens of thousands of nodes.
Data Center Bridging
ConnectX-2 delivers similar low-latency, high-bandwidth performance over Ethernet with DCB support. Low Latency Ethernet (LLE) provides efficient RDMA transport over Layer 2 Ethernet, utilizing DCB's enhancements to IEEE 802.1 bridging. The LLE software stack maintains compatibility with existing and future bandwidth- and latency-sensitive clustering applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 Gigabit Ethernet. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, freeing processor cycles for application work.
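As an illustrative sketch of how such stateless offloads typically surface on a Linux host, the standard `ethtool` utility can inspect and toggle per-interface offload features. The interface name `eth0` and the availability of each offload are assumptions here; the exact feature set depends on the driver and adapter.

```shell
# Show which stateless offload features the adapter currently exposes
# (eth0 is an assumed interface name; substitute your ConnectX-2 port)
ethtool -k eth0

# Enable checksum, segmentation, and large-receive offloads so the NIC,
# rather than the CPU, handles per-packet work (feature availability
# varies by driver)
ethtool -K eth0 rx on tx on tso on lro on
```

Offloads enabled this way apply to ordinary sockets traffic with no application changes, which is how the CPU savings described above reach unmodified TCP/UDP/IP applications.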
ConnectX-2 support for hardware-based I/O virtualization provides dedicated adapter resources with guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and unified LAN and SAN connectivity while reducing cost, power, and complexity.
Storage Accelerated
A consolidated compute and storage network achieves significant cost/performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Fibre Channel frame encapsulation (FCoIB or FCoE) and hardware offloads enable simple connectivity to Fibre Channel SANs.
Product Highlights

- One adapter for InfiniBand, 10 Gigabit Ethernet, and Data Center Bridging fabrics
- World-class cluster performance
- High-performance networking and storage access
- Guaranteed bandwidth and low-latency services
- I/O consolidation
- Virtualization acceleration
- Scales to tens of thousands of nodes