My Notes and Benchmarks on VMware Flash Read Cache

I’ve spent some time exploring and studying the use, configuration, and benefits of VMware Flash Read Cache (vFRC). These are my notes.

On a guest virtual machine, vFRC is configured in the disk section of the virtual machine’s settings. The virtual machine must be at hardware version 10 or later, and the host must be running vSphere 5.5 at a minimum.
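On the host side, the virtual flash resource and any active caches can be inspected from the ESXi shell. A minimal sketch of the 5.5-era esxcli vflash namespace (from memory, so verify the exact options with --help on your build):

esxcli storage vflash module list   # loaded virtual flash modules
esxcli storage vflash device list   # SSDs backing the virtual flash resource
esxcli storage vflash cache list    # active vFRC caches, one per cached VMDK
esxcli storage vflash cache stats get -c <cache-name>   # hit rate and other stats for one cache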

Benchmarks

I took a baseline benchmark of a simple Windows Server 2016 virtual machine with a thin-provisioned 40GB disk using DiskSpd (Microsoft’s replacement for SQLIO). The virtual machine’s disk is on an IBM DS3400 LUN backed by 4 x 300GB 15k RPM disks in RAID-10.

Baseline Virtual Machine

  • OS: Windows Server 2016
  • vCPU: 1
  • vRAM: 4GB
  • SCSI Controller: LSI Logic SAS
  • Virtual Disk:  40GB thin provisioned
  • Virtual machine hardware:  vmx-10
  • Virtual Flash Read Cache: 0

Some notes before running a test: this benchmark is geared toward SQL Server, and the parameters below reflect the type of I/O a SQL Server workload generates (random 8KB reads for OLTP-style work).

DiskSpd test

diskspd.exe -c30G -d300 -r -w0 -t8 -o8 -b8K -h -L E:\testfile.dat

Results of testing.

Command Line: diskspd.exe -c30G -d300 -r -w0 -t8 -o8 -b8K -h -L E:\testfile.dat

Input parameters:

        timespan:   1
        -------------
        duration: 300s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'E:\testfile.dat'
                think time: 0ms
                burst size: 0
                software cache disabled
                hardware write cache disabled, writethrough on
                performing read test
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 8
                thread stride size: 0
                threads per file: 8
                using I/O Completion Ports
                IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:       301.18s
thread count:           8
proc count:             1

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  99.23%|   7.05%|   92.18%|   0.77%
-------------------------------------------
avg.|  99.23%|   7.05%|   92.18%|   0.77%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      7940784128 |       969334 |      25.14 |    3218.43 |    2.471 |    22.884 | E:\testfile.dat (30GB)
     1 |      8152604672 |       995191 |      25.81 |    3304.28 |    2.401 |    22.211 | E:\testfile.dat (30GB)
     2 |      8116256768 |       990754 |      25.70 |    3289.55 |    2.408 |    22.080 | E:\testfile.dat (30GB)
     3 |      8180006912 |       998536 |      25.90 |    3315.38 |    2.394 |    22.936 | E:\testfile.dat (30GB)
     4 |      8192147456 |      1000018 |      25.94 |    3320.30 |    2.395 |    22.569 | E:\testfile.dat (30GB)
     5 |      8283185152 |      1011131 |      26.23 |    3357.20 |    2.375 |    21.607 | E:\testfile.dat (30GB)
     6 |      7820320768 |       954629 |      24.76 |    3169.60 |    2.508 |    21.745 | E:\testfile.dat (30GB)
     7 |      7896784896 |       963963 |      25.00 |    3200.59 |    2.479 |    21.981 | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      7940784128 |       969334 |      25.14 |    3218.43 |    2.471 |    22.884 | E:\testfile.dat (30GB)
     1 |      8152604672 |       995191 |      25.81 |    3304.28 |    2.401 |    22.211 | E:\testfile.dat (30GB)
     2 |      8116256768 |       990754 |      25.70 |    3289.55 |    2.408 |    22.080 | E:\testfile.dat (30GB)
     3 |      8180006912 |       998536 |      25.90 |    3315.38 |    2.394 |    22.936 | E:\testfile.dat (30GB)
     4 |      8192147456 |      1000018 |      25.94 |    3320.30 |    2.395 |    22.569 | E:\testfile.dat (30GB)
     5 |      8283185152 |      1011131 |      26.23 |    3357.20 |    2.375 |    21.607 | E:\testfile.dat (30GB)
     6 |      7820320768 |       954629 |      24.76 |    3169.60 |    2.508 |    21.745 | E:\testfile.dat (30GB)
     7 |      7896784896 |       963963 |      25.00 |    3200.59 |    2.479 |    21.981 | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     1 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     2 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     3 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     4 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     5 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     6 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
     7 |               0 |            0 |       0.00 |       0.00 |    0.000 |       N/A | E:\testfile.dat (30GB)
-----------------------------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00 |    0.000 |       N/A


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.068 |        N/A |      0.068
   25th |      0.261 |        N/A |      0.261
   50th |      0.274 |        N/A |      0.274
   75th |      0.305 |        N/A |      0.305
   90th |      0.413 |        N/A |      0.413
   95th |      3.097 |        N/A |      3.097
   99th |     57.644 |        N/A |     57.644
3-nines |    198.563 |        N/A |    198.563
4-nines |    995.725 |        N/A |    995.725
5-nines |   1896.496 |        N/A |   1896.496
6-nines |   1954.282 |        N/A |   1954.282
7-nines |   1954.318 |        N/A |   1954.318
8-nines |   1954.318 |        N/A |   1954.318
9-nines |   1954.318 |        N/A |   1954.318
    max |   1954.318 |        N/A |   1954.318

The important part: at 204MB/s throughput and roughly 26k IOPS, average read latency was about 2.4ms. Those figures are consistent with each other, since 26,175 IOPS x 8KB per I/O works out to about 204MB/s.

thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev
----------------------------------------------------------------------------------------
total:       64582090752 |      7883556 |     204.49 |   26175.34 |    2.428 |    22.258

Here is a view from my monitoring software, essentially validating the latency.

A good starting point for SQL workload testing would be something like:

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G D:\iotest.dat > DiskSpeedResults.txt
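For reference, here’s what each switch in that line does:

-b8K    8KB blocks, matching the SQL Server page size
-d30    run for 30 seconds
-o4     keep 4 outstanding I/Os per thread
-t8     use 8 worker threads
-h      disable software caching and hardware write caching
-r      random I/O
-w25    25% writes / 75% reads, a rough OLTP mix
-L      capture latency statistics
-Z1G    use a 1GB buffer of random data as the write source
-c20G   create a 20GB test file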

At this point, I just need to get the SSD installed on the host and test VMware Flash Read Cache.

To be continued…

List VMFS extents in ESXi

Run the esxcli storage vmfs extent list command to generate a list of extents for each volume, along with the mapping from device name to UUID.

You see output similar to:

Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
esxi-local   4e0d86e1-0db6f826-6991-d8d3855ff8d6              0  mpx.vmhba2:C0:T0:L0                           3
datastore1   4d4ac840-c1386fa0-9f6d-0050569300a7              0  naa.6006016094602800364ce22e3825e011          1
vmfs5        4dad8f16-911648ca-d660-d8d38563e658              0  naa.600601609460280052eb8621b73ae011          1
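If you also want the mount points, capacity, and free space for those same UUIDs, the companion command is:

esxcli storage filesystem list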

vCenter Converter unable to see disks on source

I’m working on converting a physical Windows Server 2008 R2 server to a virtual machine for ESX 4.1.  I installed vCenter Converter Standalone 4.3 on the machine and ran through the wizard, but when I got to the disk configuration step, nothing was listed.

Some research pointed me to this VMware KB article, which says to check the logs located in C:\ProgramData\VMware\VMware vCenter Converter Standalone\logs.

Upon reviewing the logs, I see the following error:

[#1] [2015-01-27 11:31:16.775 04316 warning 'App'] Failed to get info for \\.\PhysicalDrive0: error Read \\.\PhysicalDrive0 disk layout: Incorrect function (1)

Researching this error, I arrived at another VMware KB article, which indicates that GPT partition support was not available in vCenter Converter versions below 5.1 (I had installed 4.3).
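A quick way to confirm up front whether a source disk is GPT is diskpart; disks marked with an asterisk in the Gpt column use a GPT partition table:

diskpart
DISKPART> list disk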


vCenter Converter fails to import machine at 1%

I got kicked in the face by this again, and I even had the resolution documented internally.  Lost 45 minutes looking through logs before I finally searched the VMware KB for it.  Argv!

Unexpected exception: converter.fault.clonefault
(converter.fault.CloneFault) {
dynamicType = ,
faultCause = (vmodl.MethodFault) null,
description = "Unknown exception",
msg = "",

This is caused by the source computer being unable to resolve the ESX server’s hostname in DNS. I simply added an entry for the ESX host to C:\Windows\System32\drivers\etc\hosts on the source machine and I was good to go.
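For example, a hypothetical hosts entry (substitute your ESX host’s actual IP and hostname):

# C:\Windows\System32\drivers\etc\hosts
192.168.1.50    esx01.mydomain.local    esx01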

VMware’s KB on this:  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1034292