NVIDIA ConnectX-5 - Network adapter - PCIe 3.0 x16 - 100Gb Ethernet / 100Gb InfiniBand QSFP28 x 1
- Brand: NVIDIA NBU HW
- Product code: 900-9X5AD-0016-ST0
- SKU: TNXWW1GLA54UV
- EAN/GTIN: 7290108480313
- Industry-leading throughput, low latency, low CPU utilization, and high message rate
- Innovative rack design for storage and ML based on Host Chaining technology
- Smart interconnect for x86, Power, ARM, and GPU-based compute and storage
- Advanced storage capabilities including NVMe over Fabrics offloads
- Cutting-edge performance in virtualized networks including NFV
- Enabler for efficient service chaining capabilities
- Enhanced vSwitch offloads
- Adaptive routing on reliable transport
- NVMe over Fabrics (NVMe-oF) target offloads
- Hardware offloads for NVGRE and VXLAN encapsulated traffic
- End-to-end QoS and congestion control
Description
RoCE
NVIDIA RoCE (RDMA over Converged Ethernet) carries RDMA transport over standard Ethernet, offloading data movement from the CPU to deliver the high-bandwidth, low-latency network infrastructure that networking- and storage-intensive applications require.
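For illustration, a RoCE port appears to applications through the same standard libibverbs API as a native InfiniBand port; the link layer reported per port is what tells them apart. The minimal sketch below (assuming rdma-core is installed) enumerates local RDMA devices and flags Ethernet-link-layer ports as RoCE.

```c
/* Minimal sketch: list RDMA devices and report, per port, whether the
 * link layer is Ethernet (RoCE) or native InfiniBand. Uses only the
 * standard libibverbs API; build with: gcc rdma_ports.c -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (!ibv_query_device(ctx, &dev_attr)) {
            for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, p, &port_attr))
                    continue;
                printf("%s port %u: %s\n",
                       ibv_get_device_name(list[i]), p,
                       port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE)" : "InfiniBand");
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```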
ASAP²
The groundbreaking NVIDIA ASAP² technology delivers innovative SR-IOV and VirtIO acceleration by offloading the Open vSwitch datapath from the host's CPU to the adapter, enabling extreme performance and scalability.
SR-IOV
ConnectX adapters leverage SR-IOV to separate access to physical resources and functions in virtualized environments. This diminishes I/O overhead and allows adapters to maintain near non-virtualized performance.
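On a Linux host, the virtual functions that SR-IOV exposes are typically instantiated through the kernel's standard `sriov_numvfs` sysfs attribute; the sketch below illustrates that step. The PCI address is a placeholder to be replaced with the adapter's actual address (e.g., from `lspci`), and SR-IOV must already be enabled on the firmware side.

```c
/* Minimal sketch: request 4 SR-IOV virtual functions by writing the
 * standard sysfs attribute. Run as root; the PCI address below is a
 * placeholder for the adapter's real one. Writing "0" removes the VFs.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs"; /* placeholder */

    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "4");          /* number of VFs to create */
    if (fclose(f)) {          /* flushes and commits the write */
        perror(path);
        return 1;
    }
    return 0;
}
```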
Cloud and Web 2.0 Environments
ConnectX-5 adapter cards enable data center administrators to benefit from better server utilization and reduced costs, power usage, and cable complexity, allowing for more virtual appliances, virtual machines (VMs) and tenants to co-exist on the same hardware.
Storage Environments
NVMe storage devices are gaining popularity by offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling highly efficient NVMe storage access with no CPU intervention, improving performance and reducing latency.
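For context, the sketch below walks the standard Linux `nvmet` configfs flow that exposes a local namespace over RDMA; this is the software target path that the ConnectX-5 NVMe-oF target offload accelerates once enabled in the driver. The subsystem name, block device, and address are placeholders, and the `nvmet` and `nvmet-rdma` kernel modules must be loaded.

```c
/* Minimal sketch: create an NVMe-oF target over RDMA via the Linux
 * nvmet configfs interface. Run as root with nvmet/nvmet-rdma loaded.
 * Subsystem NQN, block device, and IP address are placeholders.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define NVMET "/sys/kernel/config/nvmet"

/* Write a single value to a configfs attribute. */
static int put(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s", val);
    return fclose(f);
}

int main(void)
{
    /* Subsystem with one namespace backed by a local block device. */
    mkdir(NVMET "/subsystems/testnqn", 0755);
    put(NVMET "/subsystems/testnqn/attr_allow_any_host", "1");
    mkdir(NVMET "/subsystems/testnqn/namespaces/1", 0755);
    put(NVMET "/subsystems/testnqn/namespaces/1/device_path", "/dev/nvme0n1");
    put(NVMET "/subsystems/testnqn/namespaces/1/enable", "1");

    /* RDMA transport port on the adapter's IP. */
    mkdir(NVMET "/ports/1", 0755);
    put(NVMET "/ports/1/addr_trtype", "rdma");
    put(NVMET "/ports/1/addr_adrfam", "ipv4");
    put(NVMET "/ports/1/addr_traddr", "192.168.1.10");
    put(NVMET "/ports/1/addr_trsvcid", "4420");

    /* Expose the subsystem on the port. */
    return symlink(NVMET "/subsystems/testnqn",
                   NVMET "/ports/1/subsystems/testnqn") ? 1 : 0;
}
```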
| General | |
|---|---|
| Device type | Network adapter |
| Form factor | Plug-in card |
| Interface (Bus) Type | PCI Express 3.0 x16 |
| PCI Specification Revision | PCIe 2.0, PCIe 3.0 |
| Networking | |
| Ports | 100Gb Ethernet / 100Gb InfiniBand QSFP28 x 1 |
| Connectivity technology | Wired |
| Data Link Protocol | 100 Gigabit Ethernet, 100 Gigabit InfiniBand |
| Data transfer rate | 100 Gbps |
| Network / Transport Protocol | TCP/IP, UDP/IP, SMB, NFS |
| Features | QoS, SR-IOV, RoCE, InfiniBand EDR Link support, VXLAN, NVGRE, ASAP², NVMf Offloads, vSwitch Acceleration |
| Compliant Standards | IEEE 802.1Q, IEEE 802.1p, IEEE 802.3ad (LACP), IEEE 802.3ae, IEEE 802.3ap, IEEE 802.3az, IEEE 802.3ba, IEEE 802.1AX, IEEE 802.1Qbb, IEEE 802.1Qaz, IEEE 802.1Qau, IEEE 802.1Qbg, IEEE 1588v2, IEEE 802.3bj, IEEE 802.3bm, IEEE 802.3by, OCP 3.0 |
| Expansion / Connectivity | |
| Interfaces | 1 x 100Gb Ethernet / 100Gb InfiniBand - QSFP28 |
| Miscellaneous | |
| Compliant Standards | RoHS |
| Software / System Requirements | |
| OS Required | FreeBSD, Microsoft Windows, Red Hat Enterprise Linux, CentOS, VMware ESX |