
Mellanox Technologies Mellanox ConnectX 2 VPI MHQH19B-XTR

Mfg Part #: MHQH19B-XTR



Ordering Information
MSRP: $980.00
Our Price: $552.02
Condition: New and Factory Sealed
Out of Stock as of 2:47am 3/29/2014

The ConnectX-2 adapter card with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides a high-performing and flexible interconnect solution for enterprise data centers, high-performance computing, and embedded environments. InfiniBand and 10 Gigabit Ethernet deliver a high-performance interconnect for clustered databases, parallelized applications, transactional services, and high-performance embedded I/O, as well as seamless client-server connectivity over Ethernet.

InfiniBand
ConnectX-2 delivers low latency and high bandwidth for performance-driven server and storage clustering applications. These applications benefit from the reliable transport connections and advanced multicast support offered by ConnectX-2. Network protocol processing and data movement, such as InfiniBand RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention, improving the server's overall efficiency. ConnectX-2 scales to tens of thousands of nodes.

Data Center Bridging
ConnectX-2 delivers similar low-latency and high-bandwidth performance over Ethernet with DCB support. Low Latency Ethernet (LLE) provides efficient RDMA transport over Layer 2 Ethernet, utilizing DCB's enhancements to IEEE 802.1 bridging. The LLE software stack maintains current and future compatibility for bandwidth- and latency-sensitive clustering applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.

TCP/UDP/IP acceleration
Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 Gigabit Ethernet. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, freeing more processor cycles to work on the application.
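On Linux, these stateless offloads can usually be inspected and toggled with ethtool. A minimal sketch, assuming the adapter appears as eth0 (the interface name, and which offloads a given driver actually exposes, are assumptions and will vary):

```shell
# Show the adapter's current offload settings (interface name assumed)
ethtool -k eth0

# Turn on common stateless offloads, where the driver supports them:
# rx/tx checksum offload, TCP segmentation offload (tso),
# and generic receive offload (gro)
ethtool -K eth0 rx on tx on tso on gro on
```

Offloads the hardware does not implement will simply be reported as "off [fixed]" by the first command.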

I/O virtualization
ConnectX-2 support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and complexity.

Storage accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Fibre Channel frame encapsulation (FCoIB or FCoE) and hardware offloads enable simple connectivity to Fibre Channel SANs.


Product Highlights
  • One adapter for InfiniBand, 10 Gigabit Ethernet, and Data Center Bridging fabrics
  • World-class cluster performance
  • High-performance networking and storage access
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Scales to tens-of-thousands of nodes

In The Box
  • Mellanox ConnectX 2 VPI MHQH19B-XTR
  • Mounting kit


General Specifications

Manufacturer: Mellanox Technologies
Manufacturer Part #: MHQH19B-XTR
Cost Central Item #: 11051518
Product Description: Mellanox ConnectX 2 VPI MHQH19B-XTR - Network adapter - PCI Express 2.0 x8 - 10 GigE, InfiniBand - 4x InfiniBand (QSFP)
Device Type: Network adapter
Form Factor: Plug-in card
Interface (Bus) Type: PCI Express 2.0 x8
PCI Specification Revision: PCIe 2.0
Approximate Dimensions (DxH): 5.6 in x 2.1 in
Cabling Type: 4x InfiniBand (QSFP)
Data Link Protocol: 10 GigE, InfiniBand
Data Transfer Rate: 40 Gbps
Network / Transport Protocol: TCP/IP, UDP/IP
Compliant Standards: IBTA 1.2.1
System Requirements: SuSE Linux Enterprise Server, Microsoft Windows Server 2003, Red Hat Enterprise Linux, Red Hat Fedora Core, Microsoft Windows Compute Cluster Server 2003, Microsoft Windows Server 2008, VMware ESX Server 3.5
Microsoft Certification: Compatible with Windows 7
Manufacturer Warranty: 1 year warranty
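A note on the 40 Gbps figure: that is the QDR 4x signaling rate. Both InfiniBand QDR and PCIe 2.0 use 8b/10b line coding, so usable payload bandwidth is 80% of the raw rate, and the PCIe 2.0 x8 bus turns out to be a matched pair for the link. The arithmetic (plain encoding math, no vendor data assumed):

```python
# Why the "40 Gbps" QDR link and the PCIe 2.0 x8 bus are a matched pair:
# both use 8b/10b line coding, so payload is 80% of the signaling rate.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 data bits per 10 line bits

def effective_gbps(lanes: int, gbaud_per_lane: float) -> float:
    """Payload bandwidth of an 8b/10b-coded multi-lane link, in Gb/s."""
    return lanes * gbaud_per_lane * ENCODING_EFFICIENCY

qdr_4x = effective_gbps(lanes=4, gbaud_per_lane=10.0)    # InfiniBand QDR 4x
pcie2_x8 = effective_gbps(lanes=8, gbaud_per_lane=5.0)   # PCIe 2.0 x8, per direction

print(f"QDR 4x payload:      {qdr_4x:.0f} Gb/s")   # 32 Gb/s
print(f"PCIe 2.0 x8 payload: {pcie2_x8:.0f} Gb/s")  # 32 Gb/s
```

So the 40 Gbps link delivers 32 Gb/s of data, which is exactly what the PCIe 2.0 x8 slot can carry in each direction.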

Extended Specifications

General
Device Type: Network adapter
Form Factor: Plug-in card
Interface (Bus) Type: PCI Express 2.0 x8
PCI Specification Revision: PCIe 2.0
Networking
Connectivity Technology: Wired
Cabling Type: 4x InfiniBand (QSFP)
Data Link Protocol: 10 GigE, InfiniBand
Data Transfer Rate: 40 Gbps
Network / Transport Protocol: TCP/IP, UDP/IP
Status Indicators: Link/activity
Features: Auto-negotiation, firmware upgradable, InfiniBand QDR Link support, Virtual Protocol Interconnect (VPI), TCP/IP offloading, Quality of Service (QoS), RDMA support, CORE-Direct Technology
Compliant Standards: IBTA 1.2.1
Expansion / Connectivity
Expansion Slots: 1 x QSFP
Compatible Slots: 1 x PCI Express 2.0 x8
Miscellaneous
Mounting Kit: Included
Compatible with Windows 7: "Compatible with Windows 7" software and devices carry Microsoft's assurance that these products have passed tests for compatibility and reliability with 32-bit and 64-bit Windows 7.
Compliant Standards: ETSI, C-Tick, EN 61000-3-2, EN 61000-3-3, EN55024, EN55022 Class A, CB, EMC, ICES-003 Class A, AS/NZS 3548, IEC 60950-1, EN 60950-1, KCC, IEC 60068-2-32, RoHS, FCC CFR47 Part 15 B, cTUVus, IEC 60068-2-29, IEC 60068-2-64, VCCI V-3
Software / System Requirements
OS Required: SuSE Linux Enterprise Server, Microsoft Windows Server 2003, Red Hat Enterprise Linux, Red Hat Fedora Core, Microsoft Windows Compute Cluster Server 2003, Microsoft Windows Server 2008, VMware ESX Server 3.5
Dimensions & Weight
Approximate Depth: 5.6 in
Approximate Height: 2.1 in
Manufacturer Warranty
Service & Support: 1 year warranty
Service & Support Details: Limited warranty - 1 year
Environmental Parameters
Min Operating Temperature: 32 °F
Max Operating Temperature: 131 °F
