Tag Archives: iops

Here’s an ESXi console script that loops through each Nimble eui.* device and sets IOPS=0 and BYTES=0 on the Round Robin path policy (per Nimble recommendations).

for x in $(esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'); do
  echo "$x"
  # The last "set" wins: configuring iops after bytes makes IOPS the active policy type.
  esxcli storage nmp psp roundrobin deviceconfig set -d "$x" -t bytes -B 0
  esxcli storage nmp psp roundrobin deviceconfig set -d "$x" -t iops -I 0
  # Show the resulting configuration for the device.
  esxcli storage nmp psp roundrobin deviceconfig get -d "$x"
done

Note: the order matters. If you reverse the two set commands and configure bytes after iops, the active policy will be based on bytes rather than IOPS.
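
If you want to spot-check a single device before or after running the loop, you can query it directly (the eui identifier below is a placeholder; substitute one from your own device list):

# eui.xxxxxxxxxxxxxxxx is a placeholder device ID — substitute your own
esxcli storage nmp psp roundrobin deviceconfig get -d eui.xxxxxxxxxxxxxxxx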

To reset defaults, use the following script on the ESXi host console:

for x in $(esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'); do
  echo "$x"
  # Restore the ESXi defaults: 10 MB per path for bytes, 1000 I/Os for iops.
  esxcli storage nmp psp roundrobin deviceconfig set -d "$x" -t bytes -B 10485760
  esxcli storage nmp psp roundrobin deviceconfig set -d "$x" -t iops -I 1000
  # Switch the policy type back to the default.
  esxcli storage nmp psp roundrobin deviceconfig set -d "$x" -t default
  esxcli storage nmp psp roundrobin deviceconfig get -d "$x"
done

To make sure this survives a reboot, you can add a SATP claim rule:

esxcli storage nmp satp rule add --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble --psp-option="policy=iops;iops=0"

Note that if you previously configured a user-defined SATP rule for Nimble volumes that simply uses the Round Robin PSP (per the Nimble VMware best practices guide), you will first need to remove that simpler rule before you can add the rule above; otherwise you will get an error that a duplicate user-defined rule exists. The command to remove the simpler rule is:

esxcli storage nmp satp rule remove --psp=VMW_PSP_RR --satp=VMW_SATP_ALUA --vendor=Nimble

–Bill
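
A quick way to check which user-defined Nimble rule is currently in place is to list the SATP rules and filter for the vendor (standard esxcli commands, nothing specific to this post):

# List all SATP claim rules and show only the Nimble entries
esxcli storage nmp satp rule list | grep -i nimble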

I made this five years ago in Excel and it was pretty popular. I think there are some errors in the math formulas that were pointed out for the “RAW” calculations of backend IOPS. Use it with caution, but it should still give a good enough ballpark for whatever you might be doing.

If you make modifications, please drop me a line in the comments or get in touch with me. I’ll update my post to include fixes.

RAID-and-IOPS Cheat Sheet Download

This is a great KB article from VMware worth reposting. I’m going to start analyzing storage I/O more today on all the arrays and LUNs I have. Article source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1019687

Setting the Congestion Threshold Value for Storage I/O Control

Details

The congestion threshold value for a datastore is the upper limit of latency that is allowed for a datastore before Storage I/O Control begins to assign importance to the virtual machine workloads according to their shares. You do not need to adjust the congestion threshold setting in most environments.
CAUTION: Storage I/O Control will not function correctly unless all datastores that share the same spindles on the array have the same congestion threshold.

Solution

If you do change the threshold setting, set the value based on the following considerations.
  • A higher value typically results in higher aggregate throughput and weaker isolation. Throttling will not occur unless the overall average latency is higher than the threshold.
  • If throughput is more critical than latency, do not set the value too low. For example, for Fibre Channel disks, a value below 20 ms could lower peak disk throughput. On the other hand, a very high value (above 50 ms) might allow very high latency without any significant gain in overall throughput.
  • A lower value will result in lower device latency and stronger virtual machine I/O performance isolation. Stronger isolation means that the shares controls are enforced more often. Lower device latency translates into lower I/O latency for the virtual machines with the highest shares, at the cost of higher I/O latency experienced by the virtual machines with fewer shares.
  • If latency is more important, a very low value (lower than 20 ms) will result in lower device latency and better isolation among IOs at the cost of a decrease in aggregate datastore throughput.

Procedure

  1. Select a datastore in the vSphere Client inventory and click the Configuration tab.
  2. Click Properties.
  3. Under Storage I/O Control, select the Enabled check box.
  4. Click Advanced to edit the congestion threshold value for the datastore. The value must be between 10 and 100 milliseconds. You can click Reset to restore the congestion threshold setting to the default value (30 ms).
  5. Click Close.
For more information, see the section “Managing Storage I/O Resources” in the vSphere Resource Management Guide (PDF): http://www.vmware.com/support/pubs/

This is just a note to myself, since I often find myself analyzing an application in order to performance-tune a server or system.

Using Windows Performance Monitor, I use the following metrics when analyzing an application.

| Metric | Example Data | Description |
|---|---|---|
| Logical Disk: Avg. Disk Bytes/Read | 0.00 | Read IO size (data block size) |
| Logical Disk: Avg. Disk Bytes/Transfer | 8192.00 | IO size (all transfers) |
| Logical Disk: Avg. Disk Bytes/Write | 8192.00 | Write IO size (data block size) |
| Logical Disk: Disk Read Bytes/sec | 0.00 | Total read bytes per second |
| Logical Disk: Disk Write Bytes/sec | 125000.00 | Total write bytes per second |
| Logical Disk: Disk Transfers/sec | 15.258 | Total IOPS |
| Logical Disk: Disk Reads/sec | 0.00 | Read IOPS |
| Logical Disk: Disk Writes/sec | 15.258 | Write IOPS |

To calculate Write IOPS for the application, take “Disk Write Bytes/sec” and divide by “Avg. Disk Bytes/Write”:

125000.00 / 8192.00 ≈ 15.258 IOPS

Notice that Disk Reads/sec and Disk Writes/sec already correspond to Read IOPS and Write IOPS.
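
Here’s the same arithmetic as a quick awk one-liner, using the example values from the table above:

# Write IOPS = Disk Write Bytes/sec divided by Avg. Disk Bytes/Write
awk 'BEGIN { printf "%.4f write IOPS\n", 125000.00 / 8192.00 }'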


Here are some of the links I have for IOPS calculators on my site:

https://techish.net/hardware/raid-iops-mbps-and-more-excel-cheat-sheet/
https://techish.net/hardware/iops-calculator-and-raid-calculators-estimators/

Here’s some useful information on the graphing I did with SQLIO measurements:

https://techish.net/windows/sqlio-scripts-and-graphs/
https://techish.net/windows/visualizing-sqlio-disk-benchmark-results-using-a-pivotchart/

Testing Disk IO in Linux

https://techish.net/linux/testing-disk-in-linux-using-fio/
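
For quick reference, here’s the general shape of an fio random-write test like the kind that article covers (the job parameters here are illustrative, not the exact ones from the post):

# Illustrative fio job: 60-second 8k random-write test against a 1 GB file
fio --name=randwrite-test --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=8k --size=1G --numjobs=1 --runtime=60 --time_based --group_reporting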

I’m working on an Excel spreadsheet (cheat sheet) that takes user input and calculates some of the following:

  • Calculate IOPS, Usable Space, MB/s based on Number of Disks, Spindle Speed, RAID type and Read/Write Percentages.
  • Calculate Number of Disks required for IOPS
  • Calculate/Convert MB/s to IOPS
  • Calculate/Convert IOPS to MB/s

It also contains some basic information and formulas:

  • Formula for Total Raw IOPS
  • Formula for Functional IOPS
  • Formula for MB/s from IOPS
  • Formula for IOPS from MB/s
  • Formula to determine number of disks required for IOPS based on RAID type and spindle speed.

Here’s a screenshot:

RAID/IOPS Calculator Cheat Sheet

Here are some of the formulas used (a worked example follows the list):

  • Total Raw IOPS = Disk Speed IOPS * Number of Disks
  • Functional IOPS = ((Total Raw IOPS * Write%) / RAID Penalty) + (Total Raw IOPS * Read%)
  • MB/s = (IOPS * KB per IO) / 1024
  • IOPS = (MB/s Throughput / KB per IO) * 1024
  • Formula to determine disks required for a target IOPS: (total required IOPS * Read%) + (total required IOPS * Write% * RAID Penalty) = total backend IOPS; divide that by the IOPS provided by the disk type (15k = 175, 10k = 125, etc.)
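
To make the formulas concrete, here’s a small awk sketch that runs them end to end. The inputs (eight 15k disks, a RAID-5 penalty of 4, a 70/30 read/write mix, 8 KB IOs, and a 2000 IOPS target) are made-up example values, not numbers from the spreadsheet:

awk 'BEGIN {
  # Example inputs (hypothetical, for illustration only)
  disks = 8; disk_iops = 175            # eight 15k spindles at ~175 IOPS each
  read_pct = 0.70; write_pct = 0.30     # 70/30 read/write mix
  penalty = 4                           # RAID-5 write penalty
  kb_per_io = 8                         # 8 KB IO size
  required_iops = 2000                  # target frontend IOPS

  # Total Raw IOPS = Disk Speed IOPS * Number of Disks
  raw = disk_iops * disks
  # Functional IOPS = ((Total Raw IOPS * Write%) / RAID Penalty) + (Total Raw IOPS * Read%)
  functional = (raw * write_pct) / penalty + raw * read_pct
  # MB/s = (IOPS * KB per IO) / 1024
  mbps = functional * kb_per_io / 1024
  # Disks required: backend IOPS for the target, divided by per-disk IOPS
  backend = required_iops * read_pct + required_iops * write_pct * penalty
  disks_needed = backend / disk_iops

  printf "Total Raw IOPS:    %d\n", raw
  printf "Functional IOPS:   %.0f\n", functional
  printf "Throughput:        %.2f MB/s\n", mbps
  printf "Disks for %d IOPS: %.1f (round up)\n", required_iops, disks_needed
}'

Notice the asymmetry: the same eight spindles deliver about 1085 functional IOPS at a 70/30 mix, but serving a 2000 IOPS target would take roughly 22 disks, because every write costs four backend operations at RAID-5.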

Download the Spreadsheet (XLSX): Download