PowerEdge: Useful "NVIDIA-SMI" queries for troubleshooting

Summary: This article shows useful "NVIDIA-SMI" queries for NVIDIA GPU card troubleshooting.

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

VBIOS Version

Query the VBIOS version of each device:

$ nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv

name, pci.bus_id, vbios_version
GRID K2, 0000:87:00.0, 80.04.D4.00.07
GRID K2, 0000:88:00.0, 80.04.D4.00.08
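
To limit the query to a single GPU, you can add the -i <index> option (a minimal sketch; GPU index 0 below is only an example):

$ nvidia-smi -i 0 --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv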

 

Query           Description
timestamp       The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec".
gpu_name        The official product name of the GPU. This is an alphanumeric string. For all products.
gpu_bus_id      PCI bus ID as "domain:bus:device.function", in hex.
vbios_version   The VBIOS version of the GPU board.

Query GPU metrics for host-side logging

This query is useful for monitoring hypervisor-side GPU metrics and works on both ESXi and XenServer:

$ nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 5
When adding additional parameters to a query, ensure that no spaces are added between the query options (the comma-separated fields).
Query                  Description
timestamp              The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec".
name                   The official product name of the GPU. This is an alphanumeric string. For all products.
pci.bus_id             PCI bus ID as "domain:bus:device.function", in hex.
driver_version         The version of the installed NVIDIA display driver. This is an alphanumeric string.
pstate                 The current performance state of the GPU. States range from P0 (maximum performance) to P12 (minimum performance).
pcie.link.gen.max      The maximum PCIe link generation possible with this GPU and system configuration. For example, if the GPU supports a higher PCIe generation than the system supports, this reports the system PCIe generation.
pcie.link.gen.current  The current PCIe link generation. This may be reduced when the GPU is not in use.
temperature.gpu        Core GPU temperature, in degrees C.
utilization.gpu        Percent of time over the past sample period during which one or more kernels were executing on the GPU. The sample period may be between 1 second and 1/6 second, depending on the product.
utilization.memory     Percent of time over the past sample period during which global (device) memory was being read or written. The sample period may be between 1 second and 1/6 second, depending on the product.
memory.total           Total installed GPU memory.
memory.free            Total free GPU memory.
memory.used            Total GPU memory allocated by active contexts.

You can get a complete list of the query arguments by issuing: nvidia-smi --help-query-gpu


nvidia-smi Usage for logging

Short-term logging

Add the option "-f <filename>" to redirect the output to a file.

Prepend "timeout <seconds>" (or "timeout -t <seconds>", depending on which timeout utility your shell provides) to run the query for <seconds> and then stop logging.

Ensure that your query granularity is appropriately sized for the use required; a combined example follows the table:

 

Purpose                   nvidia-smi "-l" value   Interval    timeout value (seconds)   Duration
Fine-grain GPU behavior   5                       5 seconds   600                       10 minutes
General GPU behavior      60                      1 minute    3600                      1 hour
Broad GPU behavior        3600                    1 hour      86400                     24 hours
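
For example, the following command (a sketch of the fine-grain row above; the filename gpu-log.csv is only an illustration) samples every 5 seconds, stops after 10 minutes, and writes the output to a file:

$ timeout 600 nvidia-smi --query-gpu=timestamp,name,utilization.gpu,utilization.memory,memory.used --format=csv -l 5 -f gpu-log.csv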

 

Long-term logging

Create a shell script that adds timestamp data to the log filename and runs the query with the required parameters.

Add a custom cron job to /var/spool/cron/crontabs to call the script at the required interval.
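
A minimal sketch of such a script and cron entry is shown below; the script path, log directory, filename pattern, and one-hour sampling values are only examples:

#!/bin/bash
# Example script (for instance /usr/local/bin/gpu-log.sh): log GPU metrics to a time-stamped CSV file.
LOGDIR=/var/log/gpu                                   # example log directory; create it beforehand
LOGFILE="$LOGDIR/gpu-$(date +%Y%m%d-%H%M%S).csv"      # timestamped filename
# Sample once per hour for 24 hours (see the granularity table above).
timeout 86400 nvidia-smi \
  --query-gpu=timestamp,name,pci.bus_id,temperature.gpu,utilization.gpu,utilization.memory,memory.used \
  --format=csv -l 3600 -f "$LOGFILE"

Example crontab entry that starts the script once a day at midnight:

0 0 * * * /usr/local/bin/gpu-log.sh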


Additional low-level commands used for clocks and power

Enable "Persistence" mode.

Any settings below for clocks and power get reset between program runs unless you enable persistence mode (PM) for the driver.

The nvidia-smi command also responds faster when persistence mode is enabled.

nvidia-smi -pm 1 - Make clock, power, and other settings persist across program runs and driver invocations.
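
For example (a minimal sketch, assuming root privileges on a Linux host), enable persistence mode on all GPUs and confirm the setting:

$ sudo nvidia-smi -pm 1
$ nvidia-smi --query-gpu=index,persistence_mode --format=csv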


Clocks

Command                                         Detail
nvidia-smi -q -d SUPPORTED_CLOCKS               View the supported clocks
nvidia-smi -ac <MEM clock,Graphics clock>       Set one of the supported clock pairs
nvidia-smi -q -d CLOCK                          View the current clocks
nvidia-smi --auto-boost-default=ENABLED -i 0    Enable boosting of GPU clocks (K80 and later)
nvidia-smi -rac                                 Reset the application clocks back to the base clocks
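
For example (a sketch only; the 2505,875 MHz memory/graphics pair below is a placeholder and must be one of the pairs reported for your GPU by the SUPPORTED_CLOCKS query):

$ nvidia-smi -q -d SUPPORTED_CLOCKS -i 0
$ nvidia-smi -ac 2505,875 -i 0
$ nvidia-smi -rac -i 0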

Power

Command                                           Description
nvidia-smi -pl N                                  Set the power cap (the maximum wattage the GPU will use)
nvidia-smi -pm 1                                  Enable persistence mode
nvidia-smi stats -i <device#> -d pwrDraw          Continuously monitor detailed statistics such as power draw
nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv -l 1    Continuously provide time-stamped power and clock information
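
For example (a minimal sketch; the 150 W value is a placeholder and must fall within the limits reported by "nvidia-smi -q -d POWER"):

$ sudo nvidia-smi -i 0 -pl 150
$ nvidia-smi --query-gpu=index,timestamp,power.draw,power.limit --format=csv -l 1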

Other useful commands 

Command                                          Description
nvidia-smi -q                                    Queries all the GPUs seen by the driver and displays all readable attributes for each GPU.
nvidia-smi                                       Displays the current GPU status, driver information, and a host of other statistics.
nvidia-smi -l                                    Scrolls the output of nvidia-smi continuously until stopped.
nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv    Provides time-stamped power and clock information in CSV format.
nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv    Queries the VBIOS version of each GPU in a system.
lspci -n | grep 10de                             Lists the NVIDIA PCI devices (vendor ID 10de); the PCI class code indicates whether a GPU is in compute mode (3D controller) or graphics mode (VGA controller).
nvidia-smi nvlink -s -i <device#>                Displays the NVLink state for a specific GPU.
gpumodeswitch --listgpumodes                     Lists the current mode of GRID 2.0 cards that can switch between compute and graphics. The gpumodeswitch package is not part of the normal CUDA or NVIDIA driver installation.
nvidia-smi -h                                    Displays the nvidia-smi commands and syntax.
nvidia-bug-report.sh                             Generates a bug report that is sent to a Level 3 support technician or NVIDIA.
nvidia-smi --query-retired-pages=gpu_uuid,retired_pages.address,retired_pages.cause --format=csv    Lists retired memory pages with the GPU UUID, the retired page address, and the cause of retirement (single- or double-bit ECC errors).
nvidia-smi stats                                 Displays device statistics.
nvcc --version                                   Shows the installed CUDA toolkit version.
nvidia-smi pmon                                  Displays process statistics in scrolling format.
nvidia-smi nvlink -c -i <device#>                Displays NVLink capabilities for a specific GPU.
gpumodeswitch --gpumode graphics                 Changes the personality of the GPU from compute to graphics (M6 and M60 GPUs).
gpumodeswitch --gpumode compute                  Changes the personality of the GPU from graphics to compute (M6 and M60 GPUs).
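
As a quick health-check sketch combining two of the commands above (GPU index 0 is only an example; the output is empty or "N/A" on GPUs without ECC or NVLink):

$ nvidia-smi --query-retired-pages=gpu_uuid,retired_pages.address,retired_pages.cause --format=csv
$ nvidia-smi nvlink -s -i 0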

Affected Products

PowerEdge XR2, OEMR R640, OEMR R650, OEMR R650xs, OEMR R6525, OEMR R660, OEMR XL R660, OEMR R660xs, OEMR R6625, OEMR R740, OEMR XL R740, OEMR R740xd, OEMR XL R740xd, OEMR R740xd2, OEMR R7425, OEMR R750, OEMR R750xa, OEMR R750xs, OEMR R7525, OEMR R760 , OEMR R760xa, OEMR R760XD2, OEMR XL R760, OEMR R760xs, OEMR R7625, OEMR R840, OEMR R860, OEMR R940, OEMR R940xa, OEMR R960, OEMR T440, OEMR T550, OEMR T560, OEMR T640, OEMR XL R660xs, OEMR XL R6625, OEMR XL R6725, OEMR XL R760xs, OEMR XL R7625, OEMR XL R7725, OEMR XR11, OEMR XR12, OEMR XR5610, OEMR XR7620, PowerEdge HS5610, PowerEdge HS5620, PowerEdge R640, PowerEdge R6415, PowerEdge R650, PowerEdge R650xs, PowerEdge R6525, PowerEdge R660, PowerEdge R660xs, PowerEdge R6625, PowerEdge R670, PowerEdge R740, PowerEdge R740XD, PowerEdge R740XD2, PowerEdge R7425, PowerEdge R750, PowerEdge R750XA, PowerEdge R750xs, PowerEdge R7525, PowerEdge R760, PowerEdge R760XA, PowerEdge R760xd2, PowerEdge R760xs, PowerEdge R7625, PowerEdge R770, PowerEdge R7725, PowerEdge R840, PowerEdge R860, PowerEdge R940, PowerEdge R940xa, PowerEdge R960, PowerEdge T440, PowerEdge T550, PowerEdge T560, PowerEdge T640, PowerEdge XR11, PowerEdge XR12, PowerEdge XR5610, PowerEdge XR7620 ...
Article Properties
Article Number: 000190243
Article Type: How To
Last Modified: 22 Jul 2025
Version:  3