Matt Pson Just another dead sysadmin blog


Moving to Zimbra 8 (and the 24 hour clock)

I recently made some decisions that gave me a much-needed kick in the behind to upgrade/replace things that have been in my personal infrastructure for ages. Among those things was my mailserver, which even in early 2013 was running pretty much the same Qmail installation I made back in 2005 (which in turn was based upon the 2002/2003 one I did at work).

Three fundamental things have changed since 2005:

  1. I'm 100% more comfortable (read: lazier) today and really into spending less work on the simple things, so I can direct my attention to things that are fun and require creative thinking - i.e. not fiddling with compiling my own mailserver.
  2. I'm 1000% more mobile in my use of mail and the Internet. In 2005 I probably read all my mail sitting at a computer, using some kind of mail client (Alpine or Thunderbird). Today it's 99% on my mobile phone, my tablet or in a web browser. I'm also a frequent user of a calendar - which I blame on my bad memory. I think the people around me appreciate that I can almost remember half an appointment these days.
  3. There is 10000% (figure not statistically proven, but it feels like it) more spam hitting my mailbox that needs dealing with, and that doesn't even make my top 500000 list of fun things to do.

Putting my experiences from a recent VMware Zimbra project at work to use, I decided that anything beyond the Open Source Edition was probably overkill for me, yet I wanted the standard features (works on all my devices, uses SSL, low cost since it's for personal use) plus a nice administration panel and the (really) excellent webmail client.

So, off I went to one of my favourite VPS providers: signed up for a new 2GB RAM server, downloaded Zimbra, purchased a proper SSL certificate (got a nice deal on a 5-year one, no need to renew until 2018) and spent about two hours installing everything. Compared to poking around with my previous installation this was probably about half the time spent. Instant success!

Thanks to VMware for improving the installation experience in Zimbra 8 compared to Zimbra 6 or 7, which were a bit dodgy at times, especially when installing an SSL certificate. I can really recommend Zimbra 8 if you are a little experienced, know your way around a normal Linux system and don't want to spend time getting mailserver, antispam, webmail, calendar and some kind of control panel to play nicely together. Just be aware that it needs more than 1GB of RAM to run smoothly these days, even in a minimal installation.

The only thing that kept bugging me in the webmail was that I couldn't find any setting to change the, for us Europeans, confusing AM/PM clock. Like the metric system, the 24-hour clock is used in most of the world except a handful of countries, yet so much software assumes everybody wants AM/PM by default. To change it in Zimbra you have to change the language from the default "English (United States)" to "English (United Kingdom)". Doh! Why not a simple choice that lets you pick either "12 hours" or "24 hours"? There is also a choice "English (Australia)" but who knows what time format you get then (I didn't dare to try).
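For the record, the same locale switch can also be done from the command line with zmprov instead of clicking through Preferences - a sketch, where the account name and the COS name "default" are placeholders for your own setup (run as the zimbra user on the server):

```shell
# flip a single account from US to UK English to get the 24-hour clock
zmprov modifyAccount user@example.com zimbraPrefLocale en_GB

# or change the default class of service so new accounts get it too
zmprov modifyCos default zimbraPrefLocale en_GB
```

The webmail reads the time format from the locale, so en_GB gives you 24-hour times without touching anything else.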

(this last thing was a post in the use-the-blog-as-an-external-memory category)


VMware ESXi 5? We just got rid of 3.5

After weeks (months?) of active preparations and step-by-step migrations, I made the final transfer of data from our old VMware ESXi 3.5 environment. No way I will start planning the move to version 5 this week/month (fact is, there wasn't any burning "wowah! gotta have it now!" item in the feature list when I took a quick glance).


Proving a point – building a SAN

A few weeks back I noticed that patching a bunch of virtual machines running Windows felt slower than it used to. Not that patching Windows servers ever felt quick, and there was a big list of patches going onto these systems, but something was not quite right that afternoon. All these VMs had in common that they ran off a Sun Storage 7210 SAN that we got a few years back and that has served us well. After some detective work using the awesome DTrace analytics of this box, combined with the ESXi statistics, it was quite obvious that write latency was suffering when patching several VMs in parallel.

As this SAN uses the ZFS filesystem, I knew there was an option when we bought it to add a "Logzilla", an SSD device that absorbs and speeds up write requests, which - in combination with VMware ESXi always doing sync writes over NFS - should solve the write latency problems (well, there wasn't a problem just yet unless we did heavy random writes in multiple VMs at the same time, but it was a first indication that we would sooner or later have one). VMware provides a troubleshooting guide recommending that the average write latency stay below 20ms under load, and let's just say that we saw numbers way higher than that when we patched.
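A crude way to feel that sync-write latency from inside a Linux VM is to time O_DSYNC writes with dd, which mimics the every-write-hits-stable-storage pattern ESXi forces over NFS. A sketch - the target path is a placeholder, point it at a file on the datastore you want to probe:

```shell
# time 256 x 4 KiB synchronous writes; dd prints the elapsed time and
# throughput on stderr when it finishes. On a latency-bound datastore
# this number collapses while plain buffered writes still look fine.
TARGET=/tmp/syncprobe.$$   # placeholder - use a file on the NFS datastore
dd if=/dev/zero of="$TARGET" bs=4k count=256 oflag=dsync
rm -f "$TARGET"
```

It is no substitute for the DTrace analytics on the 7210, but it is a quick sanity check you can run anywhere.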

Ok, so I dropped a mail to our Oracle dealer asking for a price on those SSD devices. They phoned back a bit later, sounding really hesitant to give me a quote, telling me that the price obviously had gone up a bit since Oracle bought Sun. Uh-huh, so what was the price then? Over €10000 for an 18GB SSD disk?! Wait? What? So, um, we were expected to pay big for something that, probably, could improve write latency on this SAN.

What to do? Our first decision was to figure out whether that "Logzilla" would actually improve the situation if we were to add it (a doubtful purchase given the price, but anyway). But how? Why not build a SAN? To the batcave...

Using mostly spare equipment, I managed to get hold of a Cisco UCS C200 M2 server with 4GB RAM and 3 Samsung 1TB consumer-grade desktop SATA disks. We picked up a €150 standard 60GB SSD from the local computer shop (a cheap one with decent write speed according to the datasheet, probably around 220MB/s). Adding the free version of NexentaStor (SAN appliance software based on Solaris, good stuff) on top of that turned our equipment into something best described as a ghetto-SAN.

I did a next-next-next install using one of the SATA disks as a single system disk, with the other two as a mirror and the SSD as a log device - all on the onboard SATA controller in the server. We hooked the SAN up to the storage network using a single gigabit port and used Storage vMotion to move a couple of non-critical but live production VMs to our testing SAN. They have been running there for almost two weeks now in order to gather some real numbers, and the statistics tell us that the average write latency stays well below the 20ms level defined by VMware, mostly around 1/10th of that, 2ms. Throughput more or less maxed out the gigabit connection; we saw speeds around 80-100MB/s.
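For the curious, the pool layout above boils down to a couple of zpool commands - a sketch with made-up device names (c1t1d0 and friends are placeholders for the two data disks and the SSD; the NexentaStor wizard does roughly this under the hood):

```shell
# two SATA disks as a ZFS mirror, the cheap SSD as a separate intent log
# (the "slog" - this is what the Logzilla is on the 7210)
zpool create tank mirror c1t1d0 c1t2d0 log c1t3d0

# verify the layout, then watch per-vdev throughput while the VMs run
zpool status tank
zpool iostat -v tank 5
```

With the log device in place, ZFS acknowledges sync writes as soon as they land on the SSD instead of waiting for the spinning disks, which is exactly where the latency win comes from.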

Hm, yeah, our ghetto-SAN that cost us less than €1000 to put together outperformed our €30000+ SAN from two years back, thanks to a log device for the ZFS filesystem. Sure, the workload maybe wasn't comparable, but we still regard this as proof that ZFS performance improves greatly if you give it a "Logzilla". Still, €10000 for an 18GB SSD feels a bit steep as it doesn't add anything else to the SAN. So the next step for us is to build a "real" SAN to use in production and see what performance we can get with 'proper' hardware.

I'll write more when I have put together what I can get for my limited budget of €4-5000.


VMware Tools on Debian Squeeze (a short howto)

(this post is more of an extended memory, now I know where I have written this down 🙂 )

So, installing VMware Tools on a virtual machine running Debian Squeeze (6.0.1 in this case) is really a walk in the park. First, prepare by installing the packages required to build the kernel modules:

# apt-get install make gcc linux-headers-$(uname -r)

(this will install a whole bunch of packages needed depending on what you already installed before)

Next, select your virtual machine in the vSphere Client, right-click and select Guest->Install/Upgrade VMware Tools. This will put a virtual CD into your virtual machine's CD-ROM drive (hopefully you didn't remove that, did you?). Mount it and extract the VMware Tools package to /tmp (or any other location of your choice):

# mount /media/cdrom
# cd /tmp
# tar xfz /media/cdrom/VMwareTools*.tar.gz

Then it's time to build the tools and install them:

# cd vmware-tools-distrib
# ./vmware-install.pl

This will trigger a bunch of questions, and while it is safe to accept the defaults on all of them, I usually like to keep my non-Debian stuff in /usr/local rather than in /usr to avoid any future conflicts, so I change that (one of the first questions). When the installation is finished, VMware ESX/ESXi can communicate with the virtual machine, which is really handy, and there are some other useful perks like the balloon driver and some custom VMware drivers.
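If you do this on many VMs, the question round can be skipped entirely - the installer accepts a couple of flags for that (a sketch; check ./vmware-install.pl --help on your version first, as the options have varied between releases):

```shell
# accept the default answer to every question, non-interactively
./vmware-install.pl --default

# same, but keep the non-Debian bits under /usr/local as described above
./vmware-install.pl --default --prefix=/usr/local
```

Handy for scripting the install across a whole batch of Squeeze guests.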

After the installation I clean up after me:

# cd /tmp
# rm -rf vmware-tools-distrib
# umount /media/cdrom

All done. A quick check in the Summary tab in the vSphere Client tells me that my virtual machine now has VMware Tools installed.


That darn VCP-410 test

So today was the day I should have become a VCP, if I had anything to say about it. But it seems I'm not. I failed... by 2% (the passing score is 300 out of a max of 500, and I scored 290). Time to write off some post-exam frustration.