PowerEdge: Useful "NVIDIA-SMI" Queries for Troubleshooting
Summary: This article describes useful "NVIDIA-SMI" queries for troubleshooting NVIDIA GPU cards.
Instructions
VBIOS Version
Query the VBIOS version of each device:
$ nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv
name, pci.bus_id, vbios_version
GRID K2, 0000:87:00.0, 80.04.D4.00.07
GRID K2, 0000:88:00.0, 80.04.D4.00.08
| Query | Description |
|---|---|
| timestamp | The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec". |
| gpu_name | The official product name of the GPU. This is an alphanumeric string. For all products. |
| gpu_bus_id | PCI bus ID as "domain:bus:device.function", in hex. |
| vbios_version | The version of the VBIOS of the GPU board. |
Query GPU Metrics for Host-Side Logging
This query is suitable for monitoring hypervisor-side GPU metrics.
This query works on both ESXi and XenServer:
$ nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 5
When adding additional parameters to a query, make sure that no spaces are added between query options.
| Query | Description |
|---|---|
| timestamp | The timestamp of when the query was made, in the format "YYYY/MM/DD HH:MM:SS.msec". |
| name | The official product name of the GPU. This is an alphanumeric string. For all products. |
| pci.bus_id | PCI bus ID as "domain:bus:device.function", in hex. |
| driver_version | The version of the installed NVIDIA display driver. This is an alphanumeric string. |
| pstate | The current performance state for the GPU. States range from P0 (maximum performance) to P12 (minimum performance). |
| pcie.link.gen.max | The maximum PCIe link generation possible with this GPU and system configuration. For example, if the GPU supports a higher PCIe generation than the system supports, then this reports the system PCIe generation. |
| pcie.link.gen.current | The current PCIe link generation. This may be reduced when the GPU is not in use. |
| temperature.gpu | Core GPU temperature, in degrees C. |
| utilization.gpu | Percent of time over the past sample period during which one or more kernels was executing on the GPU. |
| utilization.memory | Percent of time over the past sample period during which global (device) memory was being read or written. |
| memory.total | Total installed GPU memory. |
| memory.free | Total free memory. |
| memory.used | Total memory allocated by active contexts. |
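As a sketch of how a CSV log captured with this query might be post-processed afterwards, the snippet below extracts the peak GPU temperature with awk. The sample log lines and the /tmp path are invented for illustration; only the column layout follows the query above.

```shell
# Hypothetical three-sample CSV log in the format produced by the query above
cat > /tmp/gpu_sample.csv <<'EOF'
timestamp, name, temperature.gpu, utilization.gpu [%]
2024/01/01 10:00:00.000, GRID K2, 54, 37 %
2024/01/01 10:00:05.000, GRID K2, 61, 82 %
2024/01/01 10:00:10.000, GRID K2, 58, 49 %
EOF
# Skip the header row, track the maximum of column 3 (temperature.gpu)
awk -F', ' 'NR > 1 && $3 > max { max = $3 } END { print "peak temperature: " max " C" }' /tmp/gpu_sample.csv
```

The same pattern works for any column; the field separator ", " matches the comma-and-space spacing that nvidia-smi's CSV format emits.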
You can obtain a complete list of query parameters by issuing: nvidia-smi --help-query-gpu
Usage of nvidia-smi Logging
Short-Term Logging
Add the option "-f <filename>" to redirect the output to a file.
Prepend "timeout -t <seconds>" to run the query for <seconds> and then stop logging.
Make sure the granularity of your query is suited to the intended purpose:
| Purpose | nvidia-smi "-l" value | Interval | timeout "-t" value | Duration |
|---|---|---|---|---|
| Fine-grain GPU behavior | 5 | 5 seconds | 600 | 10 minutes |
| General GPU behavior | 60 | 1 minute | 3600 | 1 hour |
| Broad GPU behavior | 3600 | 1 hour | 86400 | 24 hours |
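Putting the pieces together, a fine-grain capture per the first table row would look like the commented command below. It needs a GPU host, so the runnable part only demonstrates the timeout wrapper itself; note that "-t" is BusyBox timeout syntax, while GNU coreutils timeout takes the duration directly.

```shell
# On a GPU host, a 10-minute fine-grain capture might look like this
# (log path is illustrative; use "timeout -t 600" on BusyBox systems):
#   timeout 600 nvidia-smi --query-gpu=timestamp,name,utilization.gpu,temperature.gpu \
#       --format=csv -l 5 -f /tmp/gpu_fine.log
# Stand-in showing the wrapper semantics: the inner command finishes first,
# so timeout passes its exit status through.
timeout 2 sleep 1 && echo "inner command completed before the timeout"
```

If the duration elapses first, GNU timeout kills the command and exits with status 124, which is how the logging run is stopped.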
Long-Term Logging
Create a shell script to automate the creation of a log file, with timestamp data added to the filename and the query parameters.
Add a custom cron job to /var/spool/cron/crontabs to call the script at the desired interval.
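The long-term approach can be sketched as a small wrapper script. The log directory, filename pattern, and five-minute cron interval below are illustrative choices, and the crontab line assumes the script is installed as /usr/local/bin/gpu_log.sh:

```shell
#!/bin/sh
# Sketch: append one timestamped sample per invocation; cron supplies the interval.
LOGDIR=/tmp/gpu_logs                        # illustrative log directory
mkdir -p "$LOGDIR"
LOGFILE="$LOGDIR/gpu_$(date +%Y%m%d).csv"   # one file per day
# On a GPU host this appends a CSV row (the redirection creates the file either way).
nvidia-smi --query-gpu=timestamp,name,temperature.gpu,utilization.gpu,memory.used \
    --format=csv,noheader >> "$LOGFILE" 2>/dev/null
# Example crontab entry, calling the script every 5 minutes:
# */5 * * * * /usr/local/bin/gpu_log.sh
```

Using --format=csv,noheader keeps repeated invocations from inserting a header line before every sample.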
Additional Low-Level Commands for Clocks and Power
Enable "Persistence" Mode
Unless persistence mode (PM) is enabled for the driver, any of the clock and power settings below reset between program runs.
In addition, nvidia-smi commands run faster when PM is enabled.
nvidia-smi -pm 1 - Makes clocks, power, and other settings persist across program runs and driver invocations.
Clocks

| Command | Detail |
|---|---|
| nvidia-smi -q -d SUPPORTED_CLOCKS | View supported clocks |
| nvidia-smi -ac <MEM clock, Graphics clock> | Set one of the supported clock pairs |
| nvidia-smi -q -d CLOCK | View current clocks |
| nvidia-smi --auto-boost-default=ENABLED -i 0 | Enable boosting GPU clocks (K80 and later) |
| nvidia-smi --rac | Reset clocks back to base |
Power

| Command | Detail |
|---|---|
| nvidia-smi -pl N | Set power cap (maximum wattage the GPU will use) |
| nvidia-smi -pm 1 | Enable persistence mode |
| nvidia-smi stats -i <device#> -d pwrDraw | Continuously monitor detailed statistics such as power draw |
| nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv -l 1 | Continuously provide timestamped power and clock information |
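As an illustration of summarizing the continuous power query, this sketch averages power.draw over a few invented sample lines; awk's numeric coercion drops the trailing "W" unit automatically.

```shell
# Hypothetical output lines from the power/clock query above
cat > /tmp/power_sample.csv <<'EOF'
index, timestamp, power.draw [W], clocks.current.sm [MHz]
0, 2024/01/01 10:00:00.000, 120.50 W, 1380 MHz
0, 2024/01/01 10:00:01.000, 135.25 W, 1380 MHz
0, 2024/01/01 10:00:02.000, 110.25 W, 1240 MHz
EOF
# Skip the header, coerce "120.50 W" to a number, and average column 3
awk -F', ' 'NR > 1 { sum += $3; n++ } END { printf "average power: %.2f W\n", sum / n }' /tmp/power_sample.csv
```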
Other Useful Commands

| Command | Description |
|---|---|
| nvidia-smi -q | Query all the GPUs seen by the driver and display all readable attributes for each GPU. |
| nvidia-smi | Display current GPU status, driver information, and a host of other statistics. |
| nvidia-smi -l | Scroll the output of nvidia-smi continuously until stopped. |
| nvidia-smi --query-gpu=index,timestamp,power.draw,clocks.sm,clocks.mem,clocks.gr --format=csv | Continuously provide timestamped power and clock information. |
| nvidia-smi --query-gpu=gpu_name,gpu_bus_id,vbios_version --format=csv | Query the VBIOS version of each GPU in the system. |
| lspci -n \| grep 10de | Determine whether the GPU is in compute mode or graphics mode. |
| nvidia-smi nvlink -s -i <device#> | Display the NVLink state for a specific GPU. |
| gpuswitchmode --listgpumodes | Display the capability of GRID 2.0 cards to switch between compute and graphics modes. The package is not part of the normal CUDA or NVIDIA driver. |
| nvidia-smi -h | Display the nvidia-smi commands and syntax. |
| nvidia-bug-report.sh | Generate a bug report to send to a Level 3 support technician or NVIDIA. |
| nvidia-smi --query-retired-pages=gpu_uuid,retired_pages.address,retired_pages.cause --format=csv | Report retired pages: the GPU UUID, the retired page address, and the cause of retirement. |
| nvidia-smi stats | Display device statistics. |
| nvcc --version | Show the installed CUDA version. |
| nvidia-smi pmon | Display process statistics in scrolling format. |
| nvidia-smi nvlink -c -i <device#> | Display NVLink capabilities for a specific GPU. |
| gpuswitchmode --gpumode graphics | Change the personality of the GPU from compute to graphics (M6 and M60 GPUs). |
| gpuswitchmode --gpumode compute | Change the personality of the GPU from graphics to compute (M6 and M60 GPUs). |