NVIDIA NCP-AIN - NVIDIA-Certified Professional AI Networking
NVIDIA NCP-AIN Dumps (V8.02) - Help You Pass the NCP-AIN Exam Smoothly
1. You are designing a new AI data center for a research institution that requires high-performance computing for large-scale deep learning models. The institution wants to leverage NVIDIA's reference architectures for optimal performance. Which NVIDIA reference architecture would be most suitable for this high-performance AI research environment?
A. NVIDIA Base Command Platform
B. NVIDIA DGX Cloud
C. NVIDIA LaunchPad
D. NVIDIA DGX SuperPOD
Answer: D
Explanation:
The NVIDIA DGX SuperPOD is a turnkey AI supercomputing infrastructure designed for large-scale deep learning and high-performance computing workloads. It integrates multiple DGX systems with high-speed networking and storage solutions, providing a scalable and efficient platform for AI research institutions. The architecture supports rapid deployment and is optimized for training complex models, making it the ideal choice for environments demanding top-tier AI performance.
Reference: DGX SuperPOD Architecture - NVIDIA Docs
2. What is the basic functionality of an IB Router?
A. Connecting Ethernet switches to an InfiniBand fabric
B. Connecting SAN to an InfiniBand fabric
C. Connecting NVLink domains to an InfiniBand fabric
D. Connecting two (or more) InfiniBand fabrics
Answer: D
Explanation:
An InfiniBand (IB) router connects two or more InfiniBand subnets, making it possible for nodes in different subnets to communicate over routed paths. From the official NVIDIA InfiniBand Routers documentation: "An InfiniBand router provides connectivity between two or more InfiniBand subnets, enabling communication between hosts that are not on the same subnet while preserving isolation and scalability."
Ensures fabric scalability by allowing subnet segmentation.
Uses LID routing across subnet managers (SMs).
Essential in large clusters with thousands of nodes.
Incorrect options:
A and B are incorrect: InfiniBand does not connect directly to Ethernet or SANs without a gateway.
C is unrelated: NVLink is a GPU interconnect, not tied to InfiniBand routers.
Reference: NVIDIA InfiniBand Routers Guide
3. Which of the following scenarios would the Network Traffic Map in UFM be least useful for troubleshooting?
A. When investigating reports of network congestion or latency problems.
B. After making changes to network configuration.
C. When troubleshooting a single node's hardware failure.
D. When optimizing job placement and workload distribution across the cluster.
Answer: C
Explanation:
The Network Traffic Map in NVIDIA's Unified Fabric Manager (UFM) provides a visual representation of the network topology and traffic flows, which is particularly useful for identifying congestion points, verifying network configurations, and optimizing workload distribution. However, when troubleshooting a single node's hardware failure, the Network Traffic Map is less effective, as it focuses on network-level issues rather than individual hardware components.
4. You have recently implemented NVIDIA Spectrum-X in your data center to optimize AI workloads. You need to verify the performance improvements and create a baseline for future comparisons. Which tool would be most appropriate for creating performance baseline results in this Spectrum-X environment?
A. NetQ
B. CloudAI Benchmark
C. MLNX-OS
D. Ansible
Answer: B
Explanation:
The CloudAI Benchmark is designed to evaluate and establish performance baselines in AI-optimized networking environments like NVIDIA Spectrum-X. It assesses various performance metrics, including throughput and latency, ensuring that the network meets the demands of AI workloads. This benchmarking is essential for validating the benefits of Spectrum-X and for ongoing performance monitoring.
Reference: NVIDIA Spectrum-X Validated Solution Stack
5. Which tool would you use to gather telemetry data in a Spectrum-X network?
A. NVIEW
B. UFM
C. NetQ
D. BCM
Answer: C
Explanation:
The NVIDIA Spectrum-X networking platform is an Ethernet-based solution optimized for AI workloads, combining Spectrum-4 switches, BlueField-3 SuperNICs, and advanced software to deliver high performance and low latency. Gathering telemetry data is critical for optimizing Spectrum-X networks, as it provides visibility into network performance, congestion, and potential issues.
According to NVIDIA's official documentation, NVIDIA NetQ is the primary tool for gathering telemetry data in Ethernet-based networks, including those running on Spectrum-X platforms with Cumulus Linux or SONiC. NetQ is a network operations toolset that provides real-time monitoring, telemetry collection, and analytics for network health, enabling administrators to optimize performance, troubleshoot issues, and validate configurations. It collects detailed telemetry data such as link status, packet drops, latency, and congestion metrics, which are essential for Spectrum-X optimization.
Exact extract from NVIDIA documentation: "NVIDIA NetQ is a highly scalable network operations tool that provides telemetry-based monitoring and analytics for Ethernet networks, including NVIDIA Spectrum-X platforms. NetQ collects real-time telemetry data from switches and hosts, offering insights into network performance, congestion, and connectivity. It supports Cumulus Linux and SONiC environments, making it ideal for optimizing Spectrum-X networks by providing visibility into key metrics like latency, throughput, and packet loss." (NVIDIA NetQ User Guide)
This extract confirms that option C, NetQ, is the correct tool for gathering telemetry data in a Spectrum-X network. NetQ's integration with Spectrum-X switches and its ability to collect and analyze telemetry data make it the go-to solution for network optimization tasks.
6. In a multi-tenant InfiniBand environment managed by UFM, you need to configure access controls to prevent unauthorized users from altering the fabric configuration. Which method is used within UFM to manage user access and ensure authorized modifications only?
A. Digital Certification Management (DCM)
B. Network Access Control (NAC)
C. Virtual Network Segmentation (VNS)
D. Role-Based Access Control (RBAC)
Answer: D
Explanation:
Role-Based Access Control (RBAC) is implemented within NVIDIA's Unified Fabric Manager (UFM) to manage user permissions effectively. RBAC allows administrators to assign roles to users, each with specific permissions, ensuring that only authorized individuals can make changes to the fabric configuration. This structured approach to access control enhances security by limiting the potential for unauthorized modifications and streamlines the management of user privileges across the network.
Reference: Role-Based Access Control (RBAC) - One Identity
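As a rough illustration of the RBAC idea described above, the following sketch models roles as permission sets. The role names and permissions are invented for the example and do not reflect UFM's actual role model.

```python
# Minimal RBAC sketch: each role maps to a set of allowed actions.
# Role and permission names here are hypothetical, for illustration only.
ROLES = {
    "admin": {"view_fabric", "modify_fabric", "manage_users"},
    "operator": {"view_fabric", "modify_fabric"},
    "monitor": {"view_fabric"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLES.get(role, set())

print(is_authorized("monitor", "modify_fabric"))   # False: monitors are read-only
print(is_authorized("operator", "modify_fabric"))  # True
```

The key property is that permissions attach to roles, not to individual users, so granting or revoking access is a matter of role assignment.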
7. You are configuring an InfiniBand network for an AI cluster and need to install the appropriate software stack. Which NVIDIA software package provides the necessary drivers and tools for InfiniBand configuration in Linux environments?
A. NVIDIA GPU Cloud
B. NVIDIA Container Runtime
C. CUDA Toolkit
D. MLNX_OFED
Answer: D
Explanation:
MLNX_OFED (Mellanox OpenFabrics Enterprise Distribution) is an NVIDIA-tested and packaged version of the OpenFabrics Enterprise Distribution (OFED) for Linux. It provides the necessary drivers and tools to support InfiniBand and Ethernet interconnects using the same RDMA (Remote Direct Memory Access) and kernel bypass APIs. MLNX_OFED enables high-performance networking capabilities essential for AI clusters, including support for up to 400Gb/s InfiniBand and RoCE (RDMA over Converged Ethernet).
Reference extracts from NVIDIA documentation:
"MLNX_OFED is an NVIDIA tested and packaged version of OFED that supports two interconnect types using the same RDMA (remote DMA) and kernel bypass APIs called OFED verbs: InfiniBand and Ethernet."
"Up to 400Gb/s InfiniBand and RoCE (based on the RDMA over Converged Ethernet standard) over 10/25/40/50/100/200/400GbE are supported."
8. You are automating the deployment of a Spectrum-X network using Ansible. You need to ensure that the playbooks can handle different switch models and configurations efficiently. Which feature of the NVIDIA NVUE Collection helps simplify the automation by providing pre-built roles for common network configurations?
A. Collection libraries
B. Collection modules
C. Collection roles
D. Collection plugins
Answer: C
Explanation:
The NVIDIA NVUE Collection for Ansible includes pre-built roles designed to streamline automation tasks across various switch models and configurations. These roles encapsulate common network configurations, allowing for efficient and consistent deployment. By utilizing these roles, network administrators can:
Apply standardized configurations across different devices.
Reduce the complexity of playbooks by reusing modular components.
Ensure consistency and compliance with organizational policies.
This approach aligns with Ansible best practices, promoting maintainability and scalability in network automation.
Reference: NVIDIA NVUE Collection Documentation - Ansible Roles
9. When creating a simulation in NVIDIA AIR, what syntax would you use to define a link between port 1 on spine-01 and port 41 on gpu-leaf-01?
A. "spine-01":"swp01" -- "gpu-leaf-01":"swp41"
B. "spine-01":"swp1" to "gpu-leaf-01":"swp41"
C. "spine-01":"eth1" to "gpu-leaf-01":"eth41"
D. "spine-01":"eth1" - "gpu-leaf-01":"eth41"
Answer: A
Explanation:
NVIDIA AIR (AI-Ready Infrastructure) is a cloud-based simulation platform designed to model and validate data center network deployments, including Spectrum-X Ethernet networks, using realistic topologies and configurations. When creating a custom topology in NVIDIA AIR, users define network links between devices (e.g., spine and leaf switches) using a DOT file, the format used by the Graphviz graph visualization software.
According to NVIDIA's official NVIDIA AIR documentation, the DOT file format specifies network topologies, including nodes (devices) and links (connections between ports). The syntax for defining a link uses a double dash (--) to indicate a connection between two ports, with each port specified in the format "<node>":"<port>". For Spectrum-X networks, which typically run Cumulus Linux or SONiC on NVIDIA Spectrum switches, ports are commonly labeled swpX (switch port X) rather than ethX (Ethernet interface), especially for switch-to-switch connections in a leaf-spine topology. The correct syntax for the link between port 1 on spine-01 and port 41 on gpu-leaf-01 is:
"spine-01":"swp01" -- "gpu-leaf-01":"swp41"
This syntax uses swp01 and swp41 to denote switch ports, consistent with Cumulus Linux conventions, and the double dash (--) to indicate the link, as required by the DOT file format.
Exact extract from NVIDIA documentation: "You can create custom topologies in Air using a DOT file, which is the file type used
with the open-source graph visualization software, Graphviz. DOT files define nodes, attributes, and connections for generating a topology for a network. The following is an example of a link definition in a DOT file: "leaf01":"swp31" -- "spine01":"swp1" This specifies a connection between port swp31 on leaf01 and port swp1 on spine01. Port names typically follow the switch port naming convention (e.g., swpX) for Cumulus Linux-based switches." (NVIDIA Air Custom Topology Guide)
This extract confirms that option A is the correct answer, as it uses the proper DOT file syntax with swp01 and swp41 for port names and the double dash (--) for the link, aligning with NVIDIA AIR's topology definition process for Spectrum-X simulations.
Analysis of other options:
B. "spine-01":"swp1" to "gpu-leaf-01":"swp41": This option uses the correct port naming convention (swp1 and swp41) but incorrectly uses the word "to" as the connector instead of the double dash (--). The DOT file format requires -- to define links, making this syntax invalid for NVIDIA AIR.
C. "spine-01":"eth1" to "gpu-leaf-01":"eth41": This option uses ethX port names, which are typically used for host interfaces (e.g., servers) rather than switch ports in Cumulus Linux or SONiC environments. Switch ports in Spectrum-X topologies are labeled swpX. Additionally, the use of "to" instead of -- is incorrect DOT file syntax, making this option invalid.
D. "spine-01":"eth1" - "gpu-leaf-01":"eth41": This option uses a single dash (-) instead of the required double dash (--) and incorrectly uses ethX port names instead of swpX. The ethX naming is not standard for switch ports in Spectrum-X, and the single dash is not valid DOT file syntax, making this option incorrect.
Why "spine-01":"swp01" -- "gpu-leaf-01":"swp41" is the Correct Answer Option A correctly adheres to the DOT file syntax used in NVIDIA AIR for defining network links: Node and Port Naming: The nodes spine-01 and gpu-leaf-01 are specified with their respective ports swp01 and swp41, following the swpX convention for switch ports in Cumulus Linux-based Spectrum-X switches. Link Syntax: The double dash (--) is the standard connector in DOT files to indicate a link between two ports, as required by Graphviz and NVIDIA AIR. Spectrum-X Context: In a Spectrum-X leaf-spine topology, connections between spine and leaf switches (e.g., Spectrum-4 switches) use switch ports labeled swpX, making swp01 and swp41 appropriate for this simulation. This syntax ensures that the NVIDIA AIR simulation accurately models the physical connection between spine-01 port 1 and gpu-leaf-01 port 41, enabling validation of the Spectrum-X network topology. The DOT file can be uploaded to NVIDIA AIR to generate the topology, as described in the documentation.
10. What is the purpose of configuring NVUE to ignore Linux files?
A. Enable pushing of configuration through Ansible template files.
B. Enable the persistent manipulation of specific settings using both NVUE and flat-file approaches.
C. Reduce NVUE memory utilization to optimize performance.
D. Improve Cumulus security by reducing the attack surface.
Answer: B
Explanation:
Configuring NVUE to ignore certain underlying Linux files allows administrators to manage specific settings manually or through automation tools like Ansible without NVUE overwriting these configurations. This approach enables the persistent manipulation of settings using both NVUE and flat-file methods, providing flexibility in network management.
11. You are optimizing an InfiniBand network for AI workloads that require low-latency and high-throughput data transfers. Which feature of InfiniBand networks minimizes CPU overhead during data transfers?
A. TCP/IP Offloading
B. SHARP
C. Direct Memory Access (DMA)
D. PKey
Answer: C
Explanation:
Direct Memory Access (DMA) in InfiniBand networks allows data to be transferred directly between the memory of two devices without involving the CPU. This capability significantly reduces CPU overhead, lowers latency, and increases throughput, making it ideal for AI workloads that demand efficient data transfers.
12. You are planning to deploy a large-scale Spectrum-X network for AI workloads. Before physical implementation, you want to validate the network design and configuration using a digital twin approach. Which NVIDIA tool would be most appropriate for creating and simulating a digital twin of your Spectrum-X network?
A. NVIDIA Base Command Manager
B. NVIDIA Omniverse
C. NVIDIA NetQ
D. NVIDIA Air
Answer: D
Explanation:
NVIDIA Air is a cloud-based network simulation tool designed to create digital twins of data center infrastructure, including Spectrum-X networks. It allows users to model switches, SuperNICs, and storage components, enabling the simulation, validation,
and automation of network configurations before physical deployment. This facilitates Day 0, 1, and 2 operations, ensuring that network designs are tested and optimized for AI workloads.
Reference extracts from NVIDIA documentation:
"NVIDIA Air enables cloud-scale efficiency by creating identical replicas of real-world data center infrastructure deployments."
"NVIDIA Air allows users to model data center deployments with full software functionality, creating a digital twin. Transform and accelerate time to AI by simulating, validating, and automating changes and updates."
"NVIDIA Air supports simulation of NVIDIA Spectrum Ethernet (Cumulus Linux and SONiC) switches and NVIDIA BlueField DPUs and SuperNICs as well as the NetQ network operations toolset."
13. How is congestion evaluated in an NVIDIA Spectrum-X system?
A. By assessing the physical distance between network devices.
B. By monitoring the CPU and power usage of network devices.
C. By measuring the number of connected devices in the network.
D. By analyzing the egress queue loads, ensuring all ports are well-balanced.
Answer: D
Explanation:
In NVIDIA Spectrum-X, congestion is evaluated based on egress queue loads. Spectrum-4 switches assess the load on each egress queue and select the port with the minimal load for packet transmission. This approach ensures that all ports are well-balanced, optimizing network performance and minimizing congestion.
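The egress-queue selection described above can be sketched as a toy model. The port names and queue depths below are hypothetical; a real Spectrum-4 switch performs this selection in hardware, per packet, not in software.

```python
# Toy model of load-based egress port selection: pick the port whose
# egress queue is least loaded. Queue depths are hypothetical values.
egress_queue_load = {"swp1": 120, "swp2": 35, "swp3": 80, "swp4": 35}

def pick_egress_port(loads: dict) -> str:
    """Return the port with the minimal egress queue load
    (ties broken by insertion order)."""
    return min(loads, key=loads.get)

print(pick_egress_port(egress_queue_load))  # swp2
```

Repeating this selection for every packet spreads traffic toward less-loaded ports, which is the balancing effect the explanation describes.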
14. A leading AI research center is upgrading its infrastructure to support large language model projects. The team is debating whether to implement a dedicated storage fabric for their AI workloads. Which of the following best explains why a dedicated storage fabric is crucial for this AI network architecture? (Pick the 2 correct responses below.)
A. To enable parallel data access and improve storage performance for distributed AI workloads.
B. To ensure data security and isolation from other network traffic.
C. To provide high-bandwidth, low-latency data access that prevents I/O bottlenecks during AI model training.
D. To reduce the overall cost of the storage infrastructure.
Answer: A, C
Explanation:
Modern AI training (especially with LLMs) requires extremely high-speed, parallel
access to large datasets. A dedicated storage fabric separates data I/O traffic from the training compute path and avoids contention. From NVIDIA DGX infrastructure reference architectures:
"Dedicated storage networks eliminate I/O bottlenecks by providing low-latency, high-bandwidth access to distributed storage for large-scale training jobs."
"Parallel access to datasets is key for performance, especially in multi-node, multi-GPU AI clusters."
Security (B) is important, but not the core reason for a storage fabric. Cost (D) is typically increased, not reduced, with dedicated fabrics.
Reference: NVIDIA BasePOD/AI Infrastructure Deployment Guidelines - Storage Section
15. What command sequence is used to identify the exact name of the server that runs as the master SM in a multi-node fabric?
A. sminfo smpquery ND
B. sminfo smpquery Nl
C. ibstat sminfo
D. ibis ibsim
Answer: A
Explanation:
To identify the active Subnet Manager (SM) node in an InfiniBand fabric, the correct command sequence is:
sminfo: displays general information about the active SM in the fabric, including its LID.
smpquery ND: resolves the Node Description (ND) at the given LID, revealing the exact hostname or label of the SM server.
From the InfiniBand Tools Guide: "The sminfo utility provides the LID of the master SM. Use smpquery ND to resolve the node name hosting the SM."
This two-step approach is standard for locating and validating the SM identity in fabric diagnostics.
Incorrect options:
B (Nl) is an invalid query type.
C and D do not identify SMs.
Reference: InfiniBand SM Tools - sminfo & smpquery Usage
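The two-step sequence above can be scripted. The sketch below extracts the master SM's LID from a sample line of sminfo output (the sample values are illustrative, not captured from a real fabric) and prints the follow-up smpquery command you would run.

```python
import re

# Sketch: parse the master SM's LID out of sminfo output, then show the
# smpquery command that resolves the node description at that LID.
# The sample output line is illustrative only.
sample = ("sminfo: sm lid 5 sm guid 0x248a0703009c0b10, "
          "activity count 3902 priority 14 state 3 SMINFO_MASTER")

match = re.search(r"sm lid (\d+)", sample)
sm_lid = int(match.group(1))

print(sm_lid)                   # 5
print(f"smpquery ND {sm_lid}")  # next command: resolve the node description
```

On a real node you would capture the output of `sminfo` via a subprocess and parse it the same way.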
16. You suspect there might be connectivity issues in your InfiniBand fabric and need to perform a comprehensive check. Which tool should you use to run a full fabric diagnostic and generate a report?
A. ibnetdiscover
B. perfquery
C. ibdiagnet
D. taping
Answer: C
Explanation:
The ibdiagnet utility is a fundamental tool for InfiniBand fabric discovery, error detection, and diagnostics. It provides comprehensive reports on the fabric's health, including error reporting, switch and Host Channel Adapter (HCA) configuration dumps, various counters reported by the switches and HCAs, and parameters of devices such as switch fans, power supply units, cables, and PCI lanes. Additionally, ibdiagnet performs validation for Unicast Routing, Adaptive Routing, and Multicast Routing to ensure correctness and a credit-loop-free routing environment.
Reference extracts from NVIDIA documentation:
"The ibdiagnet utility is one of the basic tools for InfiniBand fabric discovery, error detection and diagnostic. The output files of the ibdiagnet include error reporting, switch and HCA configuration dumps, various counters reported by the switches and the HCAs."
"ibdiagnet also performs Unicast Routing, Adaptive Routing and Multicast Routing validation for correctness and credit-loop free routing."
17. When utilizing the ib_write_bw tool for performance testing, what does the -S flag define?
A. The burst size
B. The number of QPs
C. The maximum rate of sent packages
D. Which service level to use
Answer: D
Explanation:
From the NVIDIA Performance Tuning Guide (ib_write_bw tool usage): "-S: Specifies the Service Level (SL) to use for the InfiniBand traffic. SL is used for setting priority and mapping to virtual lanes (VLs) on the IB fabric."
This flag is useful when testing QoS-aware setups or validating SL/VL mappings.
Incorrect options:
A: there is no such flag for burst size.
B: -q defines the number of QPs.
C: --rate (or -R) is used for rate-limiting.
Reference: NVIDIA InfiniBand Performance Guide - ib_write_bw Options Section
18. When upgrading DOCA on a BlueField DPU, what command should first be run on the host?
A. sudo apt-get autoremove
B. /usr/sbin/ofed_uninstall.sh -force
C. sudo apt-get upgrade doca
D. sudo apt-get install doca
Answer: B
Explanation:
Before upgrading the DOCA SDK on a BlueField DPU, it is mandatory to uninstall the existing OFED drivers to prevent compatibility conflicts. From the NVIDIA DOCA Installation Guide: "Before upgrading DOCA or BlueField-related software, you must remove existing OFED packages using: /usr/sbin/ofed_uninstall.sh -force."
This ensures:
A clean driver state.
No residual kernel modules or user-space libraries.
Proper registration of new DOCA/OFED versions.
Incorrect options:
A and C may not resolve conflicts.
D installs but does not remove conflicting packages.
Reference: DOCA SDK Installation - Uninstall OFED Requirement
19. You are troubleshooting InfiniBand connectivity issues in a cluster managed by the NVIDIA Network Operator. You need to verify the status of the InfiniBand interfaces. Which command should you use to check the state and link layer of InfiniBand interfaces on a node?
A. rdma show devices
B. ibstat -d mlx5_X
C. ifconfig ib0
D. ip link show dev ib0
Answer: B
Explanation:
To check the status and link layer of InfiniBand interfaces, the ibstat command is used. For example:
ibstat -d mlx5_0
This command provides detailed information about the InfiniBand device, including its state (e.g., Active), physical state (e.g., LinkUp), and link layer (e.g., InfiniBand).
Reference: NVIDIA DGX BasePOD Deployment Guide - Network Operator Section
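For post-processing, the relevant fields can be parsed out of the command's output. The sample text below is illustrative of the ibstat output shape; the field names (State, Physical state, Link layer) match those mentioned in the explanation.

```python
# Sketch: extract State, Physical state, and Link layer from ibstat-style
# output. The sample text is illustrative; on a real node you would run
# `ibstat -d mlx5_0` and parse its stdout the same way.
sample = """\
CA 'mlx5_0'
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 400
                Link layer: InfiniBand
"""

fields = {}
for line in sample.splitlines():
    if ":" in line:
        key, _, value = line.strip().partition(":")
        fields[key.strip()] = value.strip()

print(fields["State"], fields["Physical state"], fields["Link layer"])
```

A healthy InfiniBand interface would show State Active, Physical state LinkUp, and Link layer InfiniBand, as in the sample.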
20. You are concerned about potential security threats and unexpected downtime in your InfiniBand data center. Which UFM platform uses analytics to detect security threats, operational issues, and predict network failures in InfiniBand data centers?
A. Host Agent
B. Enterprise Platform
C. Cyber-AI Platform
D. Telemetry Platform
Answer: C
Explanation:
The NVIDIA UFM Cyber-AI Platform is specifically designed to enhance security and operational efficiency in InfiniBand data centers. It leverages AI-powered analytics to detect security threats, operational anomalies, and predict potential network failures. By analyzing real-time telemetry data, it identifies abnormal behaviors and performance degradation, enabling proactive maintenance and threat mitigation. This platform integrates with existing UFM Enterprise and Telemetry services to provide a comprehensive view of the network's health and security posture. It utilizes machine learning algorithms to establish baselines for normal operations and detect deviations that may indicate security breaches or hardware issues.
Reference: NVIDIA UFM Cyber-AI Documentation v2.9.1
21. You are tasked with configuring multi-tenancy using partition keys (PKeys) for a high-performance storage fabric running on InfiniBand. Each tenant's GPU server is allowed to access the shared storage system but cannot communicate with another tenant's GPU server. Which of the following partition key membership configurations would you implement to set up multi-tenancy in this environment?
A. Assign full membership to both GPU servers and storage system.
B. Assign limited membership to both GPU servers and storage system.
C. Assign limited membership PKey to the shared storage system and full membership PKey to each tenant's GPU servers.
D. Assign full membership PKey to the shared storage system and limited membership PKey to each tenant's GPU servers.
Answer: D
Explanation:
To enforce strict multi-tenancy, where Tenant A's GPU cannot talk to Tenant B's GPU but both can access shared storage, the correct configuration is:
Storage system: full PKey membership
Each tenant's GPU: limited PKey membership
From the NVIDIA InfiniBand P_Key Partitioning Guide: "A port with limited membership can only communicate with full members of the same PKey. It cannot communicate with other limited members, even within the same partition."
This isolates tenants from each other, while allowing shared access to storage.
Incorrect options:
A permits tenant-to-tenant communication.
B isolates everything, including access to storage.
C prevents GPU access to storage.
Reference: NVIDIA InfiniBand - Multi-Tenant PKey Partitioning Design
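The membership semantics above can be modeled in a few lines. In the InfiniBand P_Key format, the most significant bit of the 16-bit key distinguishes full (1) from limited (0) membership; the partition number 0x0002 used below is an arbitrary example value.

```python
# Sketch of P_Key membership rules: two ports share a partition if their
# low 15 bits match, and they may communicate only if at least one of
# them is a full member (top bit set). Partition 0x0002 is illustrative.
FULL_BIT = 0x8000

def can_communicate(pkey_a: int, pkey_b: int) -> bool:
    same_partition = (pkey_a & 0x7FFF) == (pkey_b & 0x7FFF)
    one_full = bool((pkey_a | pkey_b) & FULL_BIT)
    return same_partition and one_full

storage = 0x8002   # full member of partition 0x0002
tenant_a = 0x0002  # limited member of the same partition
tenant_b = 0x0002  # limited member of the same partition

print(can_communicate(tenant_a, storage))   # True: limited <-> full
print(can_communicate(tenant_a, tenant_b))  # False: limited <-> limited
```

This reproduces the answer's design: every tenant reaches the full-member storage system, while limited-member tenants remain isolated from each other.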