
During datastore creation in vSphere using the Nimble vCenter plugin, I get the following error:

A general system error occurred: esxi1: CHAP setting not compatible. hba=vmhba33

In the vSphere Client I went to my ESXi host, then Configuration > Storage Adapters > iSCSI Software Adapter.  Looking at the vmhba33 Properties and then CHAP…, I saw that a CHAP setting was still configured there for an old Drobo system I had connected at one point.

Changing the option to Do not use CHAP resolved my issue.  I’m not using CHAP on my lab setup.
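
If you prefer the command line, the same CHAP setting can be checked and cleared with esxcli directly on the host. This is a minimal sketch, assuming the software iSCSI adapter is vmhba33 and that the esxcli iscsi auth namespace is available on your ESXi build:

# Show the current CHAP configuration on the software iSCSI adapter
esxcli iscsi adapter auth chap get --adapter=vmhba33

# Disable CHAP at the adapter level ("prohibited" means do not use CHAP)
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=prohibited

# Rescan the adapter so the change is picked up
esxcli storage core adapter rescan --adapter=vmhba33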

Here’s an ESXi console script to loop through each Nimble eui.* device and set the Round Robin path switching to IOPS=0 and bytes=0 (per Nimble recommendations).

# Find each Nimble device claimed by NMP and tune its Round Robin path selection settings
for x in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  echo $x
  # Set bytes first, then IOPS; the last -t value becomes the active policy type
  esxcli storage nmp psp roundrobin deviceconfig set -d $x -t bytes -B 0
  esxcli storage nmp psp roundrobin deviceconfig set -d $x -t iops -I 0
  # Show the resulting configuration for the device
  esxcli storage nmp psp roundrobin deviceconfig get -d $x
done

Note: the last set -t command determines the active policy type, so if you reverse the order above and set bytes after iops, the policy will be based on bytes and not IOPS.

To reset defaults, use the following script on the ESXi host console:

# Restore the default Round Robin settings (IOPS=1000, bytes=10485760, policy type=default)
for x in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  echo $x
  esxcli storage nmp psp roundrobin deviceconfig set -d $x -t bytes -B 10485760
  esxcli storage nmp psp roundrobin deviceconfig set -d $x -t iops -I 1000
  esxcli storage nmp psp roundrobin deviceconfig set -d $x -t default
  esxcli storage nmp psp roundrobin deviceconfig get -d $x
done

To make sure this survives a reboot (and is applied automatically to newly discovered Nimble volumes), you can add a SATP claim rule:

esxcli storage nmp satp rule add --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble --psp-option="policy=iops;iops=0"

Note that if you previously configured a user-defined SATP rule for Nimble volumes that simply used the Round Robin PSP (per the Nimble VMware best practices guide), you will first need to remove that simpler rule before you can add the rule above; otherwise you will get an error that a duplicate user-defined rule exists. The command to remove the simpler rule is:

esxcli storage nmp satp rule remove --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble
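
To double-check which user-defined SATP rules are in place before adding or removing one, you can list them on the host. A quick sketch (the grep is just to filter the output down to the Nimble entries):

# List SATP claim rules and show only the Nimble vendor entries
esxcli storage nmp satp rule list | grep -i Nimble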

In order to work around the issue, processor power management has to be disabled both in the system UEFI and in the vSphere Client.

To change power policies using server UEFI settings:

  1. Turn on the server.
    Note: If necessary, connect a keyboard, monitor, and mouse to the console breakout cable and connect the console breakout cable to the compute node.
  2. When the prompt ‘Press <F1> Setup’ is displayed, press F1 to enter UEFI Setup and follow the instructions on the screen.
  3. Select System Settings > Operating Modes, set the operating mode to ‘Custom Mode’ (as shown in the ‘Custom Mode’ figure), then set the UEFI settings as follows:
    Choose Operating Mode <Custom>
    Memory Speed <Max Performance>
    Memory Power Management <Disabled>
    Proc Performance States <Disabled>
    C1 Enhanced Mode <Disabled>
    QPI Link Frequency <Max Performance>
    QPI Link Disable <Enable All Links>
    Turbo Mode <Enable>
    CPU C-States <Disable>
    Power/Performance Bias <Platform Controlled>
    Platform Controlled Type <Maximum Performance>
    Uncore Frequency Scaling <Disable>

  4. Press the Esc key three times, then choose Save Settings.
  5. Exit Setup and restart the server so that UEFI changes take effect.

Next, change power policies using the vSphere Client:

  1. Select the host from the inventory, click the Manage tab, and then the Settings tab, as shown in the ‘Power Management view from the vSphere Web Client’ figure.
  2. In the left pane under Hardware, select Power Management.
  3. Click Edit on the right side of the screen.
  4. The Edit Power Policy Settings dialog box appears, as shown in the ‘Power policy settings’ figure.
  5. Select the ‘High performance’ radio button and confirm the selection by clicking OK.
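
If you’d rather script the host-side change, the High Performance policy can also be set from the ESXi shell through the Power.CpuPolicy advanced setting. A minimal sketch, assuming that option is present on your ESXi build (check with the list command first):

# Show the current CPU power policy
esxcli system settings advanced list --option=/Power/CpuPolicy

# Switch the host to the High Performance power policy
esxcli system settings advanced set --option=/Power/CpuPolicy --string-value="High Performance"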

I’ve spent some time exploring and studying the use and configuration of VMware Flash Read Cache (vFRC) and its benefits.  These are my notes.


On a guest virtual machine, vFRC is configured in the virtual disk configuration area.  The virtual machine needs to be on hardware version 10, and vSphere needs to be at least version 5.5.
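
The per-disk cache size itself is set there (Edit Settings, expand the virtual disk, and enter a Virtual Flash Read Cache size). On the host side, you can inspect the SSDs and the configured virtual flash resource from the ESXi shell; this is a sketch assuming the esxcli storage vflash namespace that shipped with 5.5:

# List SSD devices backing (or eligible for) the virtual flash resource
esxcli storage vflash device list

# Show the vFRC caches currently configured on this host
esxcli storage vflash cache list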

Benchmarks

I took a baseline benchmark of a simple Windows Server 2016 virtual machine that had a thin provisioned 20GB disk, using DiskSpd (the successor to SQLIO).  The virtual machine disk is connected to an IBM DS3400 LUN with 4 x 300GB 15k RPM disks in RAID-10.

Baseline Virtual Machine

  • OS: Windows Server 2016
  • vCPU: 1
  • vRAM: 4GB
  • SCSI Controller: LSI Logic SAS
  • Virtual Disk:  40GB thin provisioned
  • Virtual machine hardware:  vmx-10
  • Virtual Flash Read Cache: 0

Some notes before running a test: the testing here is geared toward SQL workloads and the I/O patterns that the different SQL workload types generate.

DiskSpd test

diskspd.exe -c30G -d300 -r -w0 -t8 -o8 -b8K -h -L E:\testfile.dat

Results of the test:

Command Line: diskspd.exe -c30G -d300 -r -w0 -t8 -o8 -b8K -h -L E:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 300s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'E:\testfile.dat'
                think time: 0ms
                burst size: 0
                software cache disabled
                hardware write cache disabled, writethrough on
                performing read test
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 8
                thread stride size: 0
                threads per file: 8
                using I/O Completion Ports
                IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:       301.18s
thread count:           8
proc count:             1

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  99.23%|   7.05%|   92.18%|   0.77%
-------------------------------------------
avg.|  99.23%|   7.05%|   92.18%|   0.77%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      7940784128 |       969334 |      25.14 |    3218.43 |    2.471 |    22.884 | E:\testfile.dat (30GB)
     1 |      8152604672 |       995191 |      25.81 |    3304.28 |    2.401 |    22.211 | E:\testfile.dat (30GB)
     2 |      8116256768 |       990754 |      25.70 |    3289.55 |    2.408 |    22.080 | E:\testfile.dat (30GB)
     3 |      8180006912 |       998536 |      25.90 |    3315.38 |    2.394 |    22.936 | E:\testfile.dat (30GB)
     4 |      8192147456 |      1000018 |      25.94 |    3320.30 |    2.395 |    22.569 | E:\testfile.dat (30GB)
     5 |      8283185152 |      1011131 |      26.23 |    3357.20 |    2.375 |    21.607 | E:\testfile.dat (30GB)
     6 |      7820320768 |       954629 |      24.76 |    3169.60 |    2.508 |    21.745 | E:\testfile.dat (30GB)
     7 |      7896784896 |       963963 |      25.00 |    3200.59 |    2.479 |    21.981 | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      7940784128 |       969334 |      25.14 |    3218.43 |    2.471 |    22.884 | E:\testfile.dat (30GB)
     1 |      8152604672 |       995191 |      25.81 |    3304.28 |    2.401 |    22.211 | E:\testfile.dat (30GB)
     2 |      8116256768 |       990754 |      25.70 |    3289.55 |    2.408 |    22.080 | E:\testfile.dat (30GB)
     3 |      8180006912 |       998536 |      25.90 |    3315.38 |    2.394 |    22.936 | E:\testfile.dat (30GB)
     4 |      8192147456 |      1000018 |      25.94 |    3320.30 |    2.395 |    22.569 | E:\testfile.dat (30GB)
     5 |      8283185152 |      1011131 |      26.23 |    3357.20 |    2.375 |    21.607 | E:\testfile.dat (30GB)
     6 |      7820320768 |       954629 |      24.76 |    3169.60 |    2.508 |    21.745 | E:\testfile.dat (30GB)
     7 |      7896784896 |       963963 |      25.00 |    3200.59 |    2.479 |    21.981 | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     1 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     2 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     3 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     4 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     5 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     6 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     7 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.068 |        N/A |      0.068
   25th |      0.261 |        N/A |      0.261
   50th |      0.274 |        N/A |      0.274
   75th |      0.305 |        N/A |      0.305
   90th |      0.413 |        N/A |      0.413
   95th |      3.097 |        N/A |      3.097
   99th |     57.644 |        N/A |     57.644
3-nines |    198.563 |        N/A |    198.563
4-nines |    995.725 |        N/A |    995.725
5-nines |   1896.496 |        N/A |   1896.496
6-nines |   1954.282 |        N/A |   1954.282
7-nines |   1954.318 |        N/A |   1954.318
8-nines |   1954.318 |        N/A |   1954.318
9-nines |   1954.318 |        N/A |   1954.318
    max |   1954.318 |        N/A |   1954.318

The important part shows that at roughly 204 MB/s of throughput and about 26k IOPS, the average latency was around 2.4 ms.

thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev
----------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Here is a view from my monitoring software, essentially validating the latency.

A good starting point for SQL workload testing would be something like:

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G D:\iotest.dat > DiskSpeedResults.txt

At this point, I just need to get the SSD installed on the host and test VMware Flash Read Cache.

To be continued…

Run the esxcli storage vmfs extent list command to generate a list of extents for each volume and a mapping from device name to UUID.

You see output similar to:

Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
esxi-local   4e0d86e1-0db6f826-6991-d8d3855ff8d6  0              mpx.vmhba2:C0:T0:L0                   3
datastore1   4d4ac840-c1386fa0-9f6d-0050569300a7  0              naa.6006016094602800364ce22e3825e011  1
vmfs5        4dad8f16-911648ca-d660-d8d38563e658  0              naa.600601609460280052eb8621b73ae011  1
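
Once you have a device name from the extent list, you can pull the full details for that device (display name, size, multipathing plugin, and so on) from the core storage namespace. A quick sketch using one of the NAA identifiers from the output above:

# Show detailed information for a single device by its identifier
esxcli storage core device list --device=naa.6006016094602800364ce22e3825e011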

I’m working on converting a physical Windows Server 2008 R2 server to a virtual machine for ESX 4.1.  I installed vCenter Converter Standalone 4.3 on the machine and ran through the wizard.  When I got to the disk configuration step, nothing was listed.


Some research pointed me to a VMware KB article indicating that I should check the logs, which are located in C:\ProgramData\VMware\VMware vCenter Converter Standalone\logs.


Upon reviewing the logs, I see the following error:

[#1] [2015-01-27 11:31:16.775 04316 warning 'App'] Failed to get info for \\.\PhysicalDrive0: error Read \\.\PhysicalDrive0 disk layout: Incorrect function (1)

Researching this error, I arrived at another VMware KB article which indicates that GPT partition support was not available in vCenter Converter versions lower than 5.1 (I had installed 4.3).
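
As a quick sanity check on the source server, you can confirm whether the system disk really is GPT before deciding which Converter version to use. On Windows Server 2008 R2 this can be done with diskpart; a disk marked with an asterisk in the Gpt column is GPT (piping the command in just avoids the interactive prompt):

rem List disks; an asterisk in the Gpt column indicates a GPT disk
echo list disk | diskpart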
