Use the disk charts to monitor average disk loads and to determine trends in disk usage. For example, you might notice a performance degradation with applications that frequently read from and write to the hard disk. If you see a spike in the number of disk read/write requests, check if any such applications were running at that time.

Any of the following conditions indicates a potential disk performance problem.

The value for the kernelLatency data counter is greater than 4ms.

The value for the deviceLatency data counter is greater than 15ms, which indicates probable problems with the storage array.

The value for the queueLatency data counter is greater than zero.

Spikes in latency.

Unusual increases in read/write requests.
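Taken together, the latency thresholds above amount to a simple classification. The sketch below expresses them in Python; the function name and the assumption that counter values are sampled in milliseconds are illustrative, not part of the product:

```python
# Hypothetical sketch: classify sampled disk latency counters against the
# thresholds described in the text (kernelLatency > 4 ms, deviceLatency > 15 ms,
# queueLatency > 0). Values are assumed to be in milliseconds.

def disk_latency_problems(kernel_latency_ms, device_latency_ms, queue_latency_ms):
    """Return a list of the latency thresholds that the samples exceed."""
    problems = []
    if kernel_latency_ms > 4:
        problems.append("kernelLatency above 4 ms")
    if device_latency_ms > 15:
        problems.append("deviceLatency above 15 ms")
    if queue_latency_ms > 0:
        problems.append("queueLatency above zero")
    return problems

print(disk_latency_problems(6.2, 3.0, 0.0))
```

An empty result does not rule out problems; spikes and sustained increases in read/write requests still warrant investigation.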

These problems typically have one of the following causes.

The virtual machines on the host are trying to send more throughput to the storage system than the configuration supports.

The storage array probably is experiencing internal problems.

The workload is too high and the array cannot process the data fast enough.

To resolve these problems, take one or more of the following actions.

If the virtual machines on the host are trying to send more throughput to the storage system than the configuration supports, check the CPU usage and increase the queue depth.

Move the active VMDK to a volume with more spindles or add disks to the LUN.

Increase the virtual machine memory. This allows for more operating system caching, which can reduce I/O activity. Note that this might require you to also increase the host memory. Increased memory can also reduce disk I/O because databases can use system memory to cache data and avoid disk access.

Check swap statistics in the guest operating system to verify that virtual machines have adequate memory. Increase the guest memory, but not to an extent that leads to excessive host memory swapping. Install VMware Tools so that memory ballooning can occur.

Defragment the file systems on all guests.

Disable antivirus on-demand scans on the VMDK and VMEM files.

Use the vendor's array tools to determine the array performance statistics. When too many servers simultaneously access common elements on an array, the disks might have trouble keeping up. Consider array-side improvements to increase throughput.

Use Storage vMotion to migrate the virtual disks of I/O-intensive virtual machines across multiple datastores.

Balance the disk load across all physical resources available. Spread heavily used storage across LUNs that are accessed by different adapters. Use separate queues for each adapter to improve disk efficiency.

Configure the HBAs and RAID controllers for optimal use. Verify that the queue depths and cache settings on the RAID controllers are adequate. If not, increase the number of outstanding disk requests for the virtual machine by adjusting the Disk.SchedNumReqOutstanding parameter.
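On recent ESXi versions the outstanding-requests limit is set per storage device rather than as a single host-wide value. A sketch with esxcli, assuming ESXi 5.5 or later; the device identifier and the value 64 are placeholders:

```shell
# Inspect the current settings for a device (device ID is a placeholder):
esxcli storage core device list -d naa.600000000000000000000001

# Raise the number of outstanding disk requests for that device:
esxcli storage core device set -d naa.600000000000000000000001 \
    --sched-num-req-outstanding 64
```

Keep the value at or below the queue depth of the underlying adapter, or the extra requests simply queue in the kernel.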

For resource-intensive virtual machines, separate the virtual machine's physical disk drive from the drive with the system page file. This alleviates disk spindle contention during periods of high use.

On systems with sizable RAM, disable memory trimming by adding the line MemTrimRate=0 to the virtual machine's .vmx file.
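In the .vmx file the setting is written as a quoted key-value pair, for example:

```
# .vmx fragment: disable memory trimming for this virtual machine
MemTrimRate = "0"
```

Edit the file only while the virtual machine is powered off, and keep a backup of the original.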

If the combined disk I/O is higher than a single HBA capacity, use multipathing or multiple links.

For ESXi hosts, create virtual disks as preallocated. When you create a virtual disk for a guest operating system, select Allocate all disk space now. The performance degradation associated with reassigning additional disk space does not occur, and the disk is less likely to become fragmented.
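A preallocated disk can also be created from the ESXi command line with vmkfstools. A sketch; the size, datastore path, and file name are placeholders:

```shell
# Create a 20 GB eager-zeroed thick (fully preallocated) virtual disk:
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk
```

The eagerzeroedthick format zeroes all blocks at creation time, so creation is slower but first writes to the disk avoid the allocation penalty.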

Use the most current hypervisor software.