Slow Performance with VMware Workstation 17.0.x – Windows 10 / Windows 11

I noticed horrible performance using VMware Workstation 17 on my system. I was running Hyper-V side-by-side, so I decided to nuke Hyper-V and its related subsystems.

  1. Removed Hyper-V, Virtual Machine Platform, and Windows Hypervisor Platform from Windows Features.
  2. Confirmed that Memory Integrity (Core Isolation) was disabled (Start > Core Isolation); it already was.
  3. Disabled power throttling for the VMware process (verification commands for this and step 4 follow the list):
powercfg /powerthrottling disable /path "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"
  4. Turned off ULM/Hyper-V mode:
bcdedit /set hypervisorlaunchtype off
  5. Disabled Accelerated 3D Graphics in the VM settings in VMware Workstation 17.0.2.
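If you want to verify steps 3 and 4 before rebooting (the bcdedit change only takes effect after a reboot), these read-only commands show the current power-throttling overrides and the hypervisorlaunchtype value:

powercfg /powerthrottling list
bcdedit /enum {current}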

Step 5 was the winner for me.

I had noticed before that my GPU (Intel UHD 630 Graphics) was pegged at 80%+ when working with a VMware Workstation 17.0.x virtual machine, but I had never put the two together. You can add the configuration value mks.enable3d = "FALSE" to your .vmx file, or you can edit the VM and uncheck Accelerate 3D Graphics in the Display portion of the VM configuration in VMware Workstation.
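For the .vmx route, shut the VM down first, then add (or change) this line and power the VM back on; the setting name is exactly as above:

mks.enable3d = "FALSE"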

Reclaiming space from thin provisioned disks in VMware ESXi

This is the procedure that has worked for me. There are various methods across the internet dating as far back as 2011, but this step-by-step procedure works in my environment for both Windows and Linux virtual machines.

Step 1 – Prepare the Guest Operating Systems

Windows

  • Download sdelete from Sysinternals (https://live.sysinternals.com/sdelete.exe)
  • In an elevated command prompt, run the following against the volume (in this example, the C: drive; a multi-volume example follows this list)
    • sdelete -z c:
  • Go to Step 2
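If the VM has more than one volume, repeat the command per drive letter; the -accepteula switch just suppresses the first-run license prompt, and the drive letters here are only examples:

sdelete -accepteula -z c:
sdelete -accepteula -z d: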

Linux

  • For each volume, run the following (a scripted multi-volume sketch follows this list)
    • dd if=/dev/zero of=/volume/zero.fill bs=1024k
  • After that completes, remove the zero.fill file from each volume.
    • rm /volume/zero.fill
  • Go to Step 2
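To cover several mount points in one pass, a loop like this works; the mount points listed are assumptions, so substitute your own. dd exiting with "No space left on device" is expected, since the point is to fill all free space with zeroes:

for mnt in / /home; do
    dd if=/dev/zero of="$mnt/zero.fill" bs=1024k
    sync
    rm -f "$mnt/zero.fill"
done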

Step 2 – Punch some Holes

  • Shut down the virtual machine and power it off in vCenter or on the ESXi host
  • Log into the ESXi host via SSH
  • Navigate to the volume where the VM is stored
    • cd /vmfs/volumes/datastore/VirtualMachine
  • Punch the holes
    • vmkfstools -K VirtualMachine.vmdk
  • When that completes, go back to vCenter or the ESXi host web UI and remove the VM from inventory (do not delete it from disk). Alternatively, run the following as root on the ESXi host.
    • Unregister the VM
      • vim-cmd vmsvc/unregistervm vmid
        • You can get the vmid using vim-cmd vmsvc/getallvms
  • Add the VM back into inventory (browse the datastore to where the VM lives, select the .vmx, and click Register VM). Alternatively, run the following as root on the ESXi host (a full end-to-end example follows this list).
    • Register the VM
      • vim-cmd solo/registervm /vmfs/volumes/datastore_name/VM_directory/VM_name.vmx
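For reference, the whole Step 2 sequence looks like this end to end; datastore1, MyVM, and the Vmid 42 are placeholders for whatever getallvms reports in your environment:

cd /vmfs/volumes/datastore1/MyVM
vmkfstools -K MyVM.vmdk
vim-cmd vmsvc/getallvms        # note the Vmid in the first column
vim-cmd vmsvc/unregistervm 42
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx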

Backup VMware ESXi Host

Backup using vim-cmd

To ensure that the configuration of the target ESXi host is synchronized with persistent storage, run the following command:

vim-cmd hostsvc/firmware/sync_config

To back up ESXi configuration, run this command:

vim-cmd hostsvc/firmware/backup_config

The command will produce a link for downloading the configBundle.tgz archive.

Note that you have to replace the asterisk in the provided link with the host's IP or FQDN (a download example follows the output below). Alternatively, access the backup file in the /scratch/downloads directory, where it is stored as configBundle-HostFQDN.tgz.

Example output

~ # vim-cmd hostsvc/firmware/backup_config
Bundle can be downloaded at : http://*/downloads/52a0a904-27aa-02ad-a6d9-ad629a51b012/configBundle-ESX2.CORP.LOCAL.tgz
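From a workstation, you can then pull the bundle down once the asterisk is substituted; curl is shown here with the host name from the example output, and wget works the same way:

curl -o configBundle-ESX2.CORP.LOCAL.tgz "http://ESX2.CORP.LOCAL/downloads/52a0a904-27aa-02ad-a6d9-ad629a51b012/configBundle-ESX2.CORP.LOCAL.tgz"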

Restore using vim-cmd

Before taking the first step, ensure that the ESXi version, build number, and UUID of the target host match the version, build number, and UUID of the ESXi configuration that needs to be recovered.

Then, connect to the target ESXi host via SSH and put the host into maintenance mode:

esxcli system maintenanceMode set --enable true

or

vim-cmd hostsvc/maintenance_mode_enter

Use an SCP client to copy the archive with the ESXi configuration (configBundle-xxxx.tgz) to a directory on the target ESXi host, such as /tmp as used below.
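For example, from the machine holding the backup (the host name is carried over from the backup example above):

scp configBundle-ESX2.CORP.LOCAL.tgz root@ESX2.CORP.LOCAL:/tmp/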

Rename the configBundle-xxxx.tgz file to configBundle.tgz:

mv /tmp/configBundle-ESX2.CORP.LOCAL.tgz /tmp/configBundle.tgz

Recover the ESXi configuration:

vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

The ESXi host will restart automatically.

Exit maintenance mode:

esxcli system maintenanceMode set --enable false

or

vim-cmd hostsvc/maintenance_mode_exit

Configuration mismatch: The virtual machine cannot be restored because the snapshot was taken with VHV enabled.

While migrating a VM from one ESXi host to another, I received the following message.

Configuration mismatch: The virtual machine cannot be restored because the snapshot was taken with VHV enabled. To restore, set vhv.enable to true

Check and confirm on both hosts that /etc/vmware/config has vhv.enable = "TRUE" in the configuration.

This modification on the mismatched host requires a reboot.
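A quick way to check and fix this from an SSH session on the mismatched host (the grep is read-only; the echo appends the line only if your check shows it is missing):

grep -i vhv /etc/vmware/config
echo 'vhv.enable = "TRUE"' >> /etc/vmware/config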

A Case of Dead Paths on ESXi

I had eight paths go to a dead state on an ESXi host. The paths used the MRU (Most Recently Used) policy over Fibre Channel to a storage array. One path still worked, and it was configured with the RR (Round Robin) policy.

I knew this wasn’t a physical issue; it had to be a software/configuration issue on my host, because:

  • There were no storage array errors
  • Additional hosts in the cluster had no problems
  • One path still worked from the HBA

Looking at the log (/var/log/vmkernel.log), I searched for one of the LUN identifiers, in my case “:L30”, which was one of the dead paths. This turned up an NMP plugin error indicating an invalid command.
The next step was to verify the NMP details and compare them against a working host.

esxcli storage nmp device list | grep "Path Selection Policy:" | sort | uniq -c

I saw nothing out of the ordinary there.
Apparently the storage array did not like the use of Round Robin, so I removed the SATP claim rule:

esxcli storage nmp satp rule remove -V IBM -M "^1746*" -P VMW_PSP_RR -s VMW_SATP_ALUA
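To confirm the rule is gone afterward (and to compare against a healthy host), you can list the SATP claim rules and filter for the array vendor:

esxcli storage nmp satp rule list | grep -i IBM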

Storage paths are happy now.