View Multiple esxtop Outputs Side-by-Side

The Problem

I often use esxtop to review real-time performance metrics across various aspects of my virtual infrastructure.  One annoying limitation is that I cannot view multiple ESX hosts in a single esxtop session.  That would be such a nice feature to have.

The Solution

My solution is to use the Linux screen command to accomplish side-by-side viewing of multiple esxtop output windows.
Screen does not ship with ESX(i), but I have access to the ESX hosts from one of my management servers, which runs a Linux distribution and has screen installed.  So I use PuTTY to access the Linux management server, then launch screen from there and create two windows from which to SSH to my ESX servers.

How I Did It

SSH to the Linux server and start screen, then:

  1. Created two windows (Ctrl-a c)
  2. Named each window (Ctrl-a A) ESX#
  3. Split the display vertically (Ctrl-a |)
  4. In the first region, SSH'd to the first ESX box and ran esxtop
  5. Pressed Ctrl-a Tab to move to the second region of the vertical split
  6. Issued Ctrl-a 1 to switch to screen window #2
  7. SSH'd to the second ESX server and ran the esxtop command
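The steps above can be scripted so the layout comes up in one shot. This is a minimal sketch, assuming hypothetical host names esx1/esx2 and root SSH access (adjust for your environment); it writes a dedicated screenrc that opens a window per host and splits the display vertically. Note that vertical splits (split -v) require GNU screen 4.1 or a build patched with vertical-split support.

```shell
# Write a dedicated screenrc for the side-by-side esxtop layout.
# esx1/esx2 are hypothetical hosts -- replace with your ESX servers.
cat > ~/.screenrc-esxtop <<'EOF'
screen -t ESX1 ssh root@esx1 esxtop   # window 0: esxtop on first host
screen -t ESX2 ssh root@esx2 esxtop   # window 1: esxtop on second host
split -v                              # split the display vertically
focus                                 # move to the second region
select 1                              # show window 1 (ESX2) there
EOF

# Then launch the session interactively:
#   screen -c ~/.screenrc-esxtop
```

This saves repeating the keystrokes every time you want the side-by-side view.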
Handy screen key bindings:

  Ctrl-a c          new window
  Ctrl-a n          next window
  Ctrl-a p          previous window
  Ctrl-a S          split terminal horizontally
  Ctrl-a |          split terminal vertically
  Ctrl-a :resize    resize current region
  Ctrl-a :fit       fit screen size to new terminal size
  Ctrl-a :remove    remove region
  Ctrl-a Tab        move to next region
  Ctrl-a A          set window title
  Ctrl-a "          select window from list
[Screenshot: esxtop in screen with a vertical window split, for side-by-side viewing of two ESX servers' esxtop output]

Setting the Congestion Threshold Value for Storage I/O Control

This is a great KB article from VMware that is worth reposting.  I'm going to start analyzing storage I/O more closely today on all the arrays and LUNs I have.  Article source:  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1019687


Details

The congestion threshold value for a datastore is the upper limit of latency that is allowed for a datastore before Storage I/O Control begins to assign importance to the virtual machine workloads according to their shares. You do not need to adjust the congestion threshold setting in most environments.
CAUTION: Storage I/O Control will not function correctly unless all datastores that share the same spindles on the array have the same congestion threshold.

Solution

 If you do change the threshold setting, set the value based on the following considerations.
  • A higher value typically results in higher aggregate throughput and weaker isolation. Throttling will not occur unless the overall average latency is higher than the threshold.
  • If throughput is more critical than latency, do not set the value too low. For example, for Fibre Channel disks, a value below 20 ms could lower peak disk throughput. On the other hand, a very high value (above 50 ms) might allow very high latency without any significant gain in overall throughput.
  • A lower value will result in lower device latency and stronger virtual machine I/O performance isolation. Stronger isolation means that the shares controls are enforced more often. Lower device latency translates into lower I/O latency for the virtual machines with the highest shares, at the cost of higher I/O latency experienced by the virtual machines with fewer shares.
  • If latency is more important, a very low value (lower than 20 ms) will result in lower device latency and better isolation among IOs at the cost of a decrease in aggregate datastore throughput.

Procedure

  1. Select a datastore in the vSphere Client inventory and click the Configuration tab.
  2. Click Properties.
  3. Under Storage I/O Control, select the Enabled check box.
  4. Click Advanced to edit the congestion threshold value for the datastore. The value must be between 10 and 100. You can click Reset to restore the congestion threshold setting to the default value (30 ms).
  5. Click Close.
For more information, see the section “Managing Storage I/O Resources” in the “vSphere Resource Management Guide” (PDF) at http://www.vmware.com/support/pubs/.

End of Availability of VMware ESX 4.x

  Dear Valued Customer,
VMware is announcing an End of Availability (“EoA”) date for the VMware vSphere® ESX hypervisor 4.x and for VMware Management Assistant (“vMA”) versions 1 and 4. The end of availability date is August 15, 2013. This is a follow-on communication to the general announcement made in July 2011 in connection with the launch of vSphere 5.0.
This notification has NO IMPACT on existing vSphere ESXi 4.x environments, and customers are NOT required to take any action. However, it is recommended that customers make a backup or keep an archived copy of these binaries and generate any necessary license keys in order to maintain or expand a vSphere ESX hypervisor version 4.x or vMA versions 1 and 4 environment. These steps should be completed prior to August 15, 2013. VMware will not provide any binaries or license keys for vSphere ESX hypervisor 4.x or vMA versions 1 and 4 after August 15, 2013.
Additional information can be found at:
www.vmware.com/go/esx-end-of-availability
Please note:

  • vSphere ESX hypervisor 4.x and vMA support lifecycle
    The end of support life (“EOSL”) date remains May 21, 2014. VMware’s support lifecycle page can be found at: www.vmware.com/support/policies/lifecycle/enterprise-infrastructure/eos.html
  • Customers’ ability to use the binaries of vSphere ESX hypervisor 4.x or vMA versions 1 and 4 past August 15, 2013
    Customers retain the ability to use licensed binaries past the EoA or EOSL dates. However, they will not be able to download binaries or generate new license keys after the EoA date, or obtain technical support and subscription after the EOSL date.
  • vSphere ESXi 4.x availability and support – There is NO impact
  • vMA 4.1, 5, or 5.1 availability and support for all versions – There is NO impact

Can't Bind Windows 2008 R2 VM to Interface for DHCP

UPDATE: I had originally thought I resolved this by removing and re-adding the adapter. It turns out the issue reappeared this morning. After more research, I found that it was caused by WDS! Since I no longer need WDS for testing, I disabled the service, then went into the bindings for DHCP and checked the interface with the static IP, and it kept the setting. NOW DHCP should be working permanently. =)

Ran into a weird issue this morning when moving DHCP from an ASA to a Windows 2008 R2 virtual machine on VMware ESX 4.1. The 2008 R2 machine had a standard network adapter with a static IP. The DHCP server role installed fine, but I could not bind the DHCP server to the interface! I restarted the VM a few times while troubleshooting, then decided to yank the currently installed adapter (uninstalled it from 2008 R2 first, then removed it from the VM via the vSphere client).

I added a VMXNET3 adapter, re-assigned the IP address the old adapter previously had, then restarted the DHCP server. I immediately saw the DHCP server bind to the interface.

Just a strange issue.