NVIDIA ConnectX-6 VPI MCX653106A-ECAT

NVIDIA ConnectX-6 VPI MCX653106A-ECAT - Single Pack - network adapter - PCIe 4.0 x16 - 100Gb Ethernet / 100Gb InfiniBand QSFP56 x 2


Brand: NVIDIA NBU HW
Product code: 900-9X6AF-0056-MT0
SKU: A5XVNE1HQ6AB7
EAN/GTIN: 7290108488487
  • FIPS capable
  • Advanced storage capabilities, including block-level encryption and checksum offloads
  • PCIe Gen 3.0 and Gen 4.0 support
  • RoHS Compliant
  • ODCC compatible
  • Low CPU utilization and high message rate
  • High performance and intelligent fabric for compute and storage infrastructures
  • Steady performance in virtualized networks, including Network Function Virtualization (NFV)
  • Mellanox Host Chaining technology for economical rack design
  • Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
  • Flexible programmable pipeline for network flows
  • Efficient service chaining enablement
  • Increased I/O consolidation efficiencies, reducing data center costs and complexity
€1126.14 (Inc. VAT)
Sold Out
(Delivery 2-4 working days)

Description
HPC environments

By delivering 100 Gb/s HDR100 InfiniBand and Ethernet speeds, ConnectX-6 VPI is the perfect product to lead HPC data centers toward Exascale levels of performance and scalability. ConnectX-6 supports the evolving co-design paradigm, which transforms the network into a distributed processor. With its In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further into the network, saving CPU cycles and increasing network efficiency. ConnectX-6 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-6 enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.
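
The RDMA claim above is easiest to appreciate with a small amount of code. The sketch below is a minimal, illustrative example, not NVIDIA-specific code: it uses the standard libibverbs API from rdma-core and assumes the library and an RDMA-capable device are present. It opens the first RDMA device found and registers a buffer so the adapter can read and write that memory directly, which is the mechanism behind the CPU-cycle savings described here.

    /* Minimal libibverbs sketch: open an RDMA device and register a buffer
     * so the NIC can DMA to/from it directly (zero-copy, kernel-bypass I/O).
     * Build with: cc rdma_reg.c -libverbs   (file name is illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

        size_t len = 4096;
        void *buf = malloc(len);
        /* Registration pins the memory and hands the NIC the translation,
         * so remote peers can RDMA-read/write it without CPU involvement. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        printf("registered %zu bytes on %s: lkey=0x%x rkey=0x%x\n",
               len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

The lkey/rkey pair printed at the end is what a peer would use to address this buffer over the fabric; a full application would additionally create queue pairs and exchange these keys out of band.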

Machine learning and big data environments

Data analytics has become an essential function within many enterprise data centers, clouds and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require. ConnectX-6 utilizes RDMA technology to deliver low latency and high performance, and enhances RDMA network capabilities even further with end-to-end packet-level flow control.

Security

The ConnectX-6 block-level encryption offers a critical innovation in network security. As data is stored or retrieved, it undergoes encryption and decryption. ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, reducing latency and CPU utilization. It also protects users sharing the same resources through the use of dedicated encryption keys. By performing block-storage encryption in the adapter, ConnectX-6 eliminates the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance. ConnectX-6 also includes a hardware Root-of-Trust (RoT) that uses an HMAC keyed with a device-unique key, providing both secure boot and cloning protection. Delivering reliable device and firmware protection, ConnectX-6 also provides secure debugging capabilities without the need for physical access.
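
As a point of reference for the AES-XTS claim, the sketch below performs the same block cipher in software with OpenSSL's EVP interface. It is an illustrative software equivalent under assumed parameters (a 512-byte block, a random 512-bit key, and a tweak standing in for the logical block address), not the adapter's offload interface.

    /* Software illustration of AES-256-XTS block encryption with OpenSSL.
     * ConnectX-6 performs the same IEEE AES-XTS transform in hardware, per
     * block and per tenant key, as data is written to or read from storage.
     * Build with: cc aes_xts_demo.c -lcrypto   (file name is illustrative). */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>

    int main(void)
    {
        unsigned char key[64];            /* AES-256-XTS: two 256-bit key halves */
        unsigned char tweak[16] = {0};    /* per-block tweak, e.g. the LBA */
        RAND_bytes(key, sizeof key);
        tweak[0] = 42;                    /* pretend this is logical block 42 */

        unsigned char plain[512], cipher[512];
        memset(plain, 0xAB, sizeof plain);

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int outl = 0, tmplen = 0;
        EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak);
        EVP_EncryptUpdate(ctx, cipher, &outl, plain, sizeof plain);
        EVP_EncryptFinal_ex(ctx, cipher + outl, &tmplen);
        printf("encrypted %d-byte block; first ciphertext byte: 0x%02x\n",
               outl + tmplen, cipher[0]);

        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }

Because the tweak changes per block, identical plaintext blocks at different addresses encrypt to different ciphertexts; offloading this work to the adapter removes the per-block cipher cost from the host CPU.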

Storage environments

The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at the lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, improving CPU utilization and scalability.

Cloud and Web 2.0 environments

Open vSwitch (OVS) is an example of a virtual switch that allows virtual machines to communicate among themselves and with the outside world. Software-based virtual switches, traditionally residing in the hypervisor, are CPU intensive, affecting system performance and preventing full utilization of the available CPU for compute functions. To address these performance issues, ConnectX-6 offers Mellanox ASAP2 (Accelerated Switch and Packet Processing) technology. ASAP2 offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while leaving the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance is achieved without the associated CPU load. The vSwitch/vRouter offload functions supported by ConnectX-5 and ConnectX-6 include encapsulation and de-capsulation of overlay network headers, stateless offloads of inner packets, packet header rewrite (enabling NAT functionality), hairpin, and more. In addition, ConnectX-6 offers intelligent, flexible pipeline capabilities, including a programmable flexible parser and flexible match-action tables, which enable hardware offloads for future protocols.
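
To give a concrete flavor of hardware match-action offload, the sketch below is a hypothetical DPDK example (the feature list above mentions DPDK support) that installs a flow rule through the generic rte_flow API; on ConnectX-class NICs the poll-mode driver typically programs such rules into the NIC's steering tables. This is not the ASAP2/OVS offload path itself, which normally runs through OVS and kernel TC, just a minimal application-level way to exercise hardware flow steering; the port number, queue layout, and addresses are assumptions.

    /* Hypothetical DPDK sketch: steer ingress IPv4/TCP packets destined to
     * 10.0.0.1 into RX queue 1 via rte_flow.  Assumes a DPDK environment
     * with port 0 bound to this process. */
    #include <stdio.h>
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_ethdev.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>
    #include <rte_flow.h>

    #define PORT_ID 0
    #define NB_RXQ  2

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        /* Basic port bring-up with two RX queues so the rule has a target. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        struct rte_eth_conf port_conf = {0};
        rte_eth_dev_configure(PORT_ID, NB_RXQ, 1, &port_conf);
        for (uint16_t q = 0; q < NB_RXQ; q++)
            rte_eth_rx_queue_setup(PORT_ID, q, 512, rte_socket_id(), NULL, pool);
        rte_eth_tx_queue_setup(PORT_ID, 0, 512, rte_socket_id(), NULL);
        rte_eth_dev_start(PORT_ID);

        /* Match ingress Ethernet / IPv4 (dst 10.0.0.1) / TCP packets... */
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(10, 0, 0, 1)),
        };
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = rte_cpu_to_be_32(UINT32_MAX),
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_TCP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* ...and steer them to RX queue 1. */
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        struct rte_flow_error err = {0};
        struct rte_flow *flow = rte_flow_create(PORT_ID, &attr, pattern, actions, &err);
        if (flow)
            printf("flow rule installed on port %u\n", (unsigned)PORT_ID);
        else
            printf("flow rule rejected: %s\n", err.message ? err.message : "unknown");

        rte_eth_dev_stop(PORT_ID);
        rte_eal_cleanup();
        return 0;
    }

When the driver accepts the rule into hardware, matching packets are classified and steered by the NIC itself, which is the same principle ASAP2 applies to entire vSwitch/vRouter data planes.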

General
Device type: Network adapter
Form factor: Plug-in card
Interface (Bus) Type: PCI Express 4.0 x16
PCI Specification Revision: PCIe 1.1, PCIe 2.0, PCIe 3.0, PCIe 4.0
Networking
Ports: 100Gb Ethernet / 100Gb InfiniBand QSFP56 x 2
Connectivity technology: Wired
Data Link Protocol: 10 Gigabit Ethernet, 25 Gigabit Ethernet, 40 Gigabit Ethernet, 50 Gigabit Ethernet, 100 Gigabit Ethernet, 100 Gigabit InfiniBand
Network / Transport Protocol: TCP/IP, UDP/IP, iSCSI, SMB, NFS, SRP
Features: MPLS support, InfiniBand QDR Link support, VPI, QoS, PXE support, LSO, LRO, RSS, UEFI support, SR-IOV, Energy Efficient Ethernet, InfiniBand FDR Link support, AER, checksum offload support, MSI-X, iSCSI remote boot, HDS, Ethernet remote boot, InfiniBand remote boot, TSS, RoCE, PFC, NC-SI, VXLAN, NVGRE, Link Aggregation, TPH, SDN, XRC transport, DCT, UMR, ODP, MSI, NPAR, DPDK, iSCSI Extensions for RDMA (iSER), Jumbo Frames support (up to 9600 bytes), VLAN Tagging, InfiniBand HDR100 Link support, ASAP2, MCTP over SMBus, NFV, DPC, ACS, PASID, ATS, Rendezvous protocol offload, MPI Tag Matching, burst buffer offload, In-Network Memory registration-free RDMA memory access, ETS, QCN, GENEVE support, NVMe-oF
Compliant Standards: IEEE 802.1Q, IEEE 802.1p, IEEE 802.3ad (LACP), IEEE 802.3ae, IEEE 802.3ap, IEEE 802.3az, IEEE 802.3ba, IEEE 802.1AX, IEEE 802.1Qbb, IEEE 802.1Qaz, IEEE 1149.1, IEEE 802.1Qau, IBTA 1.3, IEEE 802.1Qbg, IEEE 1588v2, IEEE 802.3bj, IEEE 802.3bm, IEEE 802.3by, IEEE 1149.6
Expansion / Connectivity
Interfaces: 2 x 100Gb Ethernet / 100Gb InfiniBand - QSFP56
Miscellaneous
Included Accessories: Tall bracket
Encryption Algorithm: AES-XTS
Compliant Standards: RoHS
Software / System Requirements
OS Required: FreeBSD, SUSE Linux Enterprise Server, Microsoft Windows, Red Hat Enterprise Linux, Ubuntu, VMware ESX
Dimensions & Weight
Depth: 16.765 cm
Height: 6.89 cm
Manufacturer Warranty
Service & Support: Limited warranty - 1 year