How-to-guide: Using nvidia-smi on host to monitor GPU behavior with vGPU and outputting to csv format

Answer ID 4137
Published 05/24/2016 10:24 AM
Updated 06/27/2016 11:29 AM

Monitoring: How-to

Run a command on the host to collect data, for example:

$ timeout -t 2700 nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1 > results-file.csv

This runs nvidia-smi, querying the GPU every 1 second, logs the output in CSV format, and stops after 2,700 seconds (45 minutes).
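For reference, the same query can also be driven from a script instead of the shell. The sketch below is not part of the original guide; it assumes Python 3 is available on the host or a management machine and simply mirrors the duration, interval, query fields, and output file name used in the command above.

    # Minimal sketch: run the same nvidia-smi query from Python, stopping after
    # 2,700 seconds, with output written to results-file.csv as in the shell example.
    import subprocess

    QUERY = ("timestamp,name,pci.bus_id,driver_version,pstate,"
             "pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,"
             "utilization.gpu,utilization.memory,"
             "memory.total,memory.free,memory.used")

    with open("results-file.csv", "w") as out:
        try:
            # -l 1 makes nvidia-smi repeat the query every second; the timeout
            # argument stops collection after 2,700 seconds.
            subprocess.run(
                ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv", "-l", "1"],
                stdout=out,
                timeout=2700,
            )
        except subprocess.TimeoutExpired:
            pass  # expected once the collection window has elapsed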

The resulting .csv file can then be sorted or filtered to focus on the GPU data of most interest. The file can also be visualized and plotted in Excel or a similar application.
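As a rough illustration (not part of the original guide), the log can also be loaded and plotted programmatically. The sketch below assumes Python with pandas and matplotlib on the analysis machine, the file name results-file.csv from the command above, and that the column names and unit suffixes match those produced by the nvidia-smi version in use.

    # Minimal sketch: load the nvidia-smi CSV log and plot GPU utilization over time.
    import pandas as pd
    import matplotlib.pyplot as plt

    # nvidia-smi pads CSV fields with spaces, so strip them while reading.
    df = pd.read_csv("results-file.csv", skipinitialspace=True)
    df["timestamp"] = pd.to_datetime(df["timestamp"])

    # With --format=csv the utilization column carries a unit suffix,
    # e.g. "utilization.gpu [%]", and values such as "12 %".
    util_col = [c for c in df.columns if c.startswith("utilization.gpu")][0]
    df[util_col] = df[util_col].str.rstrip(" %").astype(float)

    # Plot one line per physical GPU, identified by its PCI bus ID.
    for bus_id, group in df.groupby("pci.bus_id"):
        plt.plot(group["timestamp"], group[util_col], label=bus_id)
    plt.xlabel("time")
    plt.ylabel("GPU utilization (%)")
    plt.legend()
    plt.show()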

Caveats

At the time of writing, it is not possible to measure an individual VM's usage of a vGPU or of the physical GPU, either from the host or from within a VM. nvidia-smi returns information about the host's GPU usage aggregated across all VMs.

Relevant Products

NVIDIA GRID GPUs, including the K1, K2, M6, M60, and M10

NVIDIA GRID used on hypervisors (e.g. VMware ESXi/vSphere, Citrix XenServer) and in conjunction with products such as XenDesktop/XenApp and Horizon View
