PowerEdge: Useful "NVIDIA-SMI" queries for troubleshooting
Summary: This article shows useful "NVIDIA-SMI" queries for NVIDIA GPU card troubleshooting.
Instructions
VBIOS Version
Query the VBIOS version of each device:
$ nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
name, pci.bus_id, vbios_version
GRID K2, 0000:87:00.0, 80.04.D4.00.07
GRID K2, 0000:88:00.0, 80.04.D4.00.08
| Query | Description |
|---|---|
| timestamp | The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec". |
| gpu_name | The official product name of the GPU. This is an alphanumeric string. For all products. |
| gpu_bus_id | PCI bus ID as "domain:bus:device.function", in hex. |
| vbios_version | The BIOS of the GPU board. |
Query GPU metrics for host-side logging
This query is good for monitoring the hypervisor-side GPU metrics.
This query works for both ESXi and XenServer:
$ nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 5
When adding additional parameters to a query, ensure that no spaces are added between the query options.
| Query | Description |
|---|---|
| timestamp | The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec". |
| name | The official product name of the GPU. This is an alphanumeric string. For all products. |
| pci.bus_id | PCI bus ID as "domain:bus:device.function", in hex. |
| driver_version | The version of the installed NVIDIA display driver. This is an alphanumeric string. |
| pstate | The current performance state for the GPU. States range from P0 (maximum performance) to P12 (minimum performance). |
| pcie.link.gen.max | The maximum PCI-E link generation possible with this GPU and system configuration. For example, if the GPU supports a higher PCI-E generation than the system supports, this reports the system PCI-E generation. |
| pcie.link.gen.current | The current PCI-E link generation. This may be reduced when the GPU is not in use. |
| temperature.gpu | Core GPU temperature, in degrees C. |
| utilization.gpu | Percent of time over the past sample period during which one or more kernels was executing on the GPU. |
| utilization.memory | Percent of time over the past sample period during which global (device) memory was being read or written. |
| memory.total | Total installed GPU memory. |
| memory.free | Total free memory. |
| memory.used | Total memory allocated by active contexts. |
You can get a complete list of the query arguments by issuing: nvidia-smi --help-query-gpu
nvidia-smi Usage for logging
Short-term logging
Add the option "-f <filename>" to redirect the output to a file.
Prepend "timeout -t <seconds>" to run the query for <seconds> and stop logging.
Ensure that your query granularity is appropriately sized for the use required:
| Purpose | nvidia-smi "-l" value | Interval | timeout value (seconds) | Duration |
|---|---|---|---|---|
| Fine-grain GPU behavior | 5 | 5 seconds | 600 | 10 minutes |
| General GPU behavior | 60 | 1 minute | 3600 | 1 hour |
| Broad GPU behavior | 3600 | 1 hour | 86400 | 24 hours |
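Combining the options above, a short fine-grain capture could look like the following; the trimmed field list and the filename gpu_short_term.csv are examples only:
$ timeout 600 nvidia-smi --query-gpu=timestamp,name,pci.bus_id,temperature.gpu,utilization.gpu,utilization.memory,memory.used --format=csv -l 5 -f gpu_short_term.csv
This samples every 5 seconds for 10 minutes and writes the CSV output to gpu_short_term.csv.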
Long-term logging
Create a shell script to automate the creation of the log file, with a timestamp added to the filename and the required query parameters, as in the sketch below.
Add a custom cron job to /var/spool/cron/crontabs to call the script at the intervals required.
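A minimal sketch of such a script and cron entry is shown below; the paths /usr/local/bin/gpu_log.sh and /var/log/gpu and the one-hour logging window are assumptions to adapt to your environment:
#!/bin/bash
# /usr/local/bin/gpu_log.sh (example path): logs general GPU behavior for one hour per run
LOGDIR=/var/log/gpu   # example log directory
mkdir -p "$LOGDIR"
LOGFILE="$LOGDIR/gpu_stats_$(date +%Y%m%d_%H%M%S).csv"
# 60-second sampling interval, stopped by timeout after 3600 seconds (1 hour)
timeout 3600 nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 60 -f "$LOGFILE"
An example crontab entry that starts a new log file at the top of every hour:
0 * * * * /usr/local/bin/gpu_log.sh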
Additional low-level commands used for clocks and power
Enable "Persistence" mode.
Any settings below for clocks and power get reset between program runs unless you enable persistence mode (PM) for the driver.
Also the nvidia-smi command runs faster if PM mode is enabled.
nvidia-smi -pm 1 - Make clock, power, and other settings persist across program runs and driver invocations.
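As a quick check, persistence mode can be enabled and then verified with a query; persistence_mode is a standard --query-gpu field:
$ nvidia-smi -pm 1
$ nvidia-smi --query-gpu=index,persistence_mode --format=csv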
Clocks
| Command | Detail |
|---|---|
| nvidia-smi -q -d SUPPORTED_CLOCKS | View the clocks supported |
| nvidia-smi -ac <MEM clock, Graphics clock> | Set one of the supported clock pairs |
| nvidia-smi -q -d CLOCK | View the current clocks |
| nvidia-smi --auto-boost-default=ENABLED -i 0 | Enable boosting GPU clocks (K80 and later) |
| nvidia-smi --rac | Reset clocks back to base |
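As a sketch of how these fit together, the sequence below pins GPU 0 to one of its supported clock pairs; the values 2505,875 (memory clock, graphics clock in MHz) are example values for a Tesla K80 and must be taken from the SUPPORTED_CLOCKS output for your card:
$ nvidia-smi -q -d SUPPORTED_CLOCKS -i 0
$ nvidia-smi -ac 2505,875 -i 0
$ nvidia-smi -q -d CLOCK -i 0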
Power
| Command | Detail |
|---|---|
| nvidia-smi -pl N | Set the power cap (maximum wattage the GPU will use) |
| nvidia-smi -pm 1 | Enable persistence mode |
| nvidia-smi stats -i <device#> -d pwrDraw | Continuously monitor detailed statistics such as power draw |
| nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv -l 1 | Continuously provide time-stamped power and clock information |
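For example, to cap GPU 0 at a hypothetical 150 W and watch the effect on power draw and clocks (the cap must fall within the minimum and maximum power limits reported by nvidia-smi -q -d POWER):
$ nvidia-smi -pm 1
$ nvidia-smi -i 0 -pl 150
$ nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv -l 1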
Other useful commands
| Command | Description |
|---|---|
| nvidia-smi -q | Queries all the GPUs seen by the driver and displays all readable attributes for each GPU. |
| nvidia-smi | Displays current GPU status, driver information, and a host of other statistics. |
| nvidia-smi -l | Scrolls the output of nvidia-smi continuously until stopped. |
| nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv | Provides time-stamped power and clock information; add "-l <seconds>" to repeat continuously. |
| nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv | Queries the VBIOS version of each GPU in a system. |
| lspci -n \| grep 10de | Lists NVIDIA devices by PCI vendor ID (10de); used to determine whether the GPU is in compute mode or graphics mode. |
| nvidia-smi nvlink -s -i <device#> | Displays the NVLink state for a specific GPU. |
| gpumodeswitch --listgpumodes | Lists the current mode (compute or graphics) of GRID 2.0-capable cards (M6 and M60). The gpumodeswitch package is not part of the normal CUDA or NVIDIA driver packages. |
| nvidia-smi -h | Displays the nvidia-smi commands and syntax. |
| nvidia-bug-report.sh | Generates a bug report that can be sent to a Level 3 support technician or NVIDIA. |
| nvidia-smi --query-retired-pages=gpu_uuid,retired_pages.address,retired_pages.cause --format=csv | Lists retired pages: the GPU UUID, the retired page address, and the cause of retirement. |
| nvidia-smi stats | Displays device statistics. |
| nvcc --version | Shows the installed CUDA toolkit version. |
| nvidia-smi pmon | Displays per-process statistics in scrolling format. |
| nvidia-smi nvlink -c -i <device#> | Displays NVLink capabilities for a specific GPU. |
| gpumodeswitch --gpumode graphics | Changes the personality of the GPU from compute to graphics (M6 and M60 GPUs). |
| gpumodeswitch --gpumode compute | Changes the personality of the GPU from graphics to compute (M6 and M60 GPUs). |