Use a command on the host to collect data, for example:
· $ timeout -t 2700 nvidia-smi --query-gpu=timestamp,name,pci.bus_id,driver_version,pstate,pcie.link.gen.max,pcie.link.gen.current,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1 > results-file.csv
This runs nvidia-smi once per second (the -l 1 option), logs the output in CSV format to results-file.csv, and stops after 2,700 seconds (45 minutes).
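Note that the -t flag belongs to older BusyBox timeout applets such as the one found in some hypervisor shells; with a GNU coreutils timeout (e.g. on a Linux-based host) the duration is passed directly. A minimal sketch of the equivalent GNU form, with the field list shortened here for readability:
· $ timeout 2700 nvidia-smi --query-gpu=timestamp,name,pci.bus_id,utilization.gpu,utilization.memory,memory.used --format=csv -l 1 > results-file.csv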
The user can then sort and filter the resulting CSV file to extract the GPU data of most interest (a filtering sketch follows below). The .csv file can then be plotted and visualized in Excel or a similar application.
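As a minimal sketch of such filtering, standard shell utilities can keep the CSV header plus only the rows for one GPU before importing into Excel; the PCI bus ID 0000:06:00.0 below is a hypothetical placeholder for a value taken from your own output:
· $ head -1 results-file.csv > gpu0.csv    # copy the CSV header row
· $ grep "0000:06:00.0" results-file.csv >> gpu0.csv    # append only that GPU's samples
The smaller gpu0.csv can then be opened in Excel directly.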
At the time of writing it is not possible to measure an individual VM's usage of a vGPU or of the GPU, either from the host or from within a VM; nvidia-smi reports the host's GPU usage aggregated across all VMs.
NVIDIA GRID GPUs, including K1, K2, M6, M60, and M10
NVIDIA GRID used on hypervisors (e.g. VMware ESXi/vSphere, Citrix XenServer) and in conjunction with products such as XenDesktop/XenApp and Horizon View