Performance Tuning OwnCloud 6.0.2

I am using OwnCloud for personal file storage and contact synchronization on my Linux server. The web interface is horribly slow with a default install. Here are some of the things I did to improve performance and make it a bit faster.
PHP Specific

  • Increased memory_limit to 512 MB
  • Installed php-apc
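These two changes can be made from the shell; the php.ini path below is a typical Debian 7 FastCGI location and is an assumption that may differ on your system.

```shell
# Raise the PHP memory limit in php.ini (path assumes Debian 7 with FastCGI;
# adjust for your distribution and SAPI).
sed -i 's/^memory_limit = .*/memory_limit = 512M/' /etc/php5/cgi/php.ini

# Install the APC opcode cache, then restart the web server / FastCGI processes
# so PHP picks up both changes.
apt-get install php-apc
service apache2 restart
```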

OwnCloud Specific

  • Installed using MySQL instead of SQLite3
  • Disabled addons that I did not need
  • Changed from AJAX Cron to Cron
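Switching from AJAX cron to system cron means adding a crontab entry for the web server user; the install path below is a placeholder for your actual OwnCloud directory.

```shell
# Run OwnCloud background jobs every 15 minutes as the web server user.
# /var/www/owncloud is a placeholder; use your actual install path.
# Open the crontab with: crontab -u www-data -e
# and add this line:
# */15 * * * * php -f /var/www/owncloud/cron.php
crontab -u www-data -l   # verify the entry afterwards
```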

Linux Server Specific

  • Nothing

I still see request times of 1 second or more on scan.php, but overall performance is much improved.
My System Setup

  • OwnCloud 6.0.2
  • MySQL 5.5.35
  • PHP FastCGI
  • Debian Linux 7.4
  • Memory – 4GB
  • CPU – 2x2GHz


MySQL Check For Fragmentation

I was working with a mail-archive MySQL database today and found myself twiddling my thumbs waiting for simple queries to complete.  The database has about 12 million rows and runs on a 2x2GHz, 2GB x64 Linux server.  I wanted to optimize the database and found this little gem in a SoftLayer blog post by Lee.

SELECT
    TABLE_SCHEMA,
    TABLE_NAME,
    CONCAT(ROUND(DATA_LENGTH / (1024 * 1024), 2), 'MB') AS DATA,
    CONCAT(ROUND(DATA_FREE / (1024 * 1024), 2), 'MB') AS FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND DATA_FREE > 0;

After you run this SQL against your database, you can run OPTIMIZE TABLE table_name on each fragmented table it reports.
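If many tables show free space, the OPTIMIZE TABLE statements can be generated and executed in one pass. This is a sketch that assumes a local server with credentials already configured (e.g. in ~/.my.cnf) and plain table names without special characters.

```shell
# Generate an OPTIMIZE TABLE statement for every fragmented table and
# pipe the statements back into mysql. Assumes credentials are configured
# (e.g. ~/.my.cnf); add -u/-p options as needed. Note that OPTIMIZE TABLE
# locks each table while it runs, so do this during a quiet period.
mysql -N -e "SELECT CONCAT('OPTIMIZE TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ';')
             FROM information_schema.TABLES
             WHERE DATA_FREE > 0
               AND TABLE_SCHEMA NOT IN ('information_schema','mysql')" | mysql
```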

My performance increased enough to be noticeable. =)

Monitor SQL Performance

Here is a list of Windows performance counters to use when monitoring the performance of a SQL Server.
Create performance collection rules targeted at a SQL Server computer group for the following counters:

  • LogicalDisk(*)\Avg. Disk sec/Read
    Should be under 20 ms; beyond 50 ms is very bad.
  • LogicalDisk(*)\Avg. Disk sec/Write
    Should be under 20 ms; beyond 50 ms is very bad.
  • LogicalDisk(*)\Disk Read Bytes/sec
  • LogicalDisk(*)\Disk Reads/sec
    The Reads and Read Bytes/sec counters can be used in conjunction with the Writes and Write Bytes/sec counters to see the ratio of reads to writes your database performs. This helps determine the optimal RAID configuration: optimized for reads, for writes, or for a balance of both.
  • LogicalDisk(*)\Disk Write Bytes/sec
  • LogicalDisk(*)\Disk Writes/sec
  • MSSQL:Buffer Manager\Buffer cache hit ratio
    This should be as close to 100% as possible. A ratio below 97-98% indicates that SQL Server needs more physical memory.
  • MSSQL:Buffer Manager\Page lookups/sec
  • MSSQL:Buffer Manager\Page reads/sec
  • MSSQL:Buffer Manager\Page writes/sec
    You can use this to see how many writes you are performing to disk. Each page is 8 KB.
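These counters can also be sampled ad hoc from the Windows command line with typeperf before committing to collection rules. The counter paths below assume a default SQL Server instance, where the counter object is SQLServer:Buffer Manager; named instances use MSSQL$INSTANCENAME: as the prefix.

```
:: Sample disk latency and buffer cache counters every 15 seconds,
:: 20 samples, written to CSV (Windows cmd).
typeperf "\LogicalDisk(*)\Avg. Disk sec/Read" ^
         "\LogicalDisk(*)\Avg. Disk sec/Write" ^
         "\SQLServer:Buffer Manager\Buffer cache hit ratio" ^
         -si 15 -sc 20 -f CSV -o sqlperf.csv
```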

View Multiple esxtop Outputs Side-by-Side

The Problem

I often use esxtop to review real-time performance metrics across various aspects of my virtual infrastructure.  One annoying thing is that I cannot view multiple ESX hosts in a single esxtop session; that would be such a nice feature to have.

The Solution

My solution is to use the Linux screen command to view multiple esxtop output windows side by side.
Screen does not ship with ESX(i), but I have access to the ESX hosts from one of my management servers, which runs a Linux distribution and has screen installed.  So I use PuTTY to reach the Linux management server, launch screen there, and create two windows from which to SSH to my ESX servers.

How I Did It

SSH to the Linux server and start screen, then:

  1. Create two windows (Ctrl-a c)
  2. Name each window (Ctrl-a A) ESX#
  3. Split the display vertically (Ctrl-a |)
  4. In the first region, SSH to the first ESX host
  5. Press Ctrl-a Tab to move to the second region of the vertical split
  6. Press Ctrl-a 1 to show the second window in that region
  7. SSH to the second ESX host and run esxtop
Ctrl-a c        new window
Ctrl-a n        next window
Ctrl-a p        previous window
Ctrl-a S        split display horizontally
Ctrl-a |        split display vertically
Ctrl-a :resize  resize current region
Ctrl-a :fit     fit display to new terminal size
Ctrl-a :remove  remove current region
Ctrl-a Tab      move to next region
Ctrl-a A        set window title
Ctrl-a "        select window from list
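The setup steps above can also be partly scripted so the session comes up with both SSH windows already open. This is a sketch; the host names esx1 and esx2 are placeholders, and the vertical split (Ctrl-a |) requires screen 4.1 or later.

```shell
# Start a detached screen session with one window per ESX host.
# esx1/esx2 are placeholder host names; substitute your own.
screen -dmS esxtop-view -t ESX1 ssh root@esx1
screen -S esxtop-view -X screen -t ESX2 ssh root@esx2

# Attach to the session. Once attached: Ctrl-a | to split vertically,
# Ctrl-a Tab to move to the new region, Ctrl-a 1 to show window ESX2
# there, then run esxtop in each window.
screen -r esxtop-view
```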
[Screenshot: esxtop running in screen with a vertical split, showing two ESX servers' esxtop output side by side.]

Setting the Congestion Threshold Value for Storage I/O Control

This is a great KB article from VMware that is worth reposting.  I'm going to start analyzing storage I/O on all of my arrays and LUNs today.  Article source:  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1019687


Details

The congestion threshold value for a datastore is the upper limit of latency that is allowed for a datastore before Storage I/O Control begins to assign importance to the virtual machine workloads according to their shares. You do not need to adjust the congestion threshold setting in most environments.
CAUTION: Storage I/O Control will not function correctly unless all datastores that share the same spindles on the array have the same congestion threshold.

Solution

If you do change the threshold setting, set the value based on the following considerations:
  • A higher value typically results in higher aggregate throughput and weaker isolation. Throttling does not occur unless the overall average latency is higher than the threshold.
  • If throughput is more critical than latency, do not set the value too low. For example, for Fibre Channel disks, a value below 20 ms could lower peak disk throughput. On the other hand, a very high value (above 50 ms) might allow very high latency without any significant gain in overall throughput.
  • A lower value will result in lower device latency and stronger virtual machine I/O performance isolation. Stronger isolation means that the shares controls are enforced more often. Lower device latency translates into lower I/O latency for the virtual machines with the highest shares, at the cost of higher I/O latency experienced by the virtual machines with fewer shares.
  • If latency is more important, a very low value (lower than 20 ms) will result in lower device latency and better isolation among IOs at the cost of a decrease in aggregate datastore throughput.

Procedure

  1. Select a datastore in the vSphere Client inventory and click the Configuration tab.
  2. Click Properties.
  3. Under Storage I/O Control, select the Enabled check box.
  4. Click Advanced to edit the congestion threshold value for the datastore. The value must be between 10 and 100. You can click Reset to restore the congestion threshold setting to the default value (30 ms).
  5. Click Close.
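The same procedure can be scripted from PowerCLI instead of clicking through the vSphere Client. This is a sketch; the cmdlet parameters shown are from PowerCLI 5.x and should be verified against your version, and the datastore name is a placeholder.

```powershell
# Enable Storage I/O Control and set the congestion threshold to the
# 30 ms default on a datastore. "Datastore01" is a placeholder name.
Get-Datastore -Name "Datastore01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30
```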
For more information, see the section "Managing Storage I/O Resources" in the "vSphere Resource Management Guide" (PDF) at http://www.vmware.com/support/pubs/.