Matt Pson
Just another dead sysadmin blog


Getting that VMware home lab

So for quite some time I have wanted a small home lab in order to try out some tricks that I read about on the internet that are, for lack of a better word, inappropriate to try at work (no, nothing naughty!). I also have 4-5 USB hard drives scattered around with various stuff on them (media, backups etc.).

My solution arrived a couple of days ago: the HP MicroServer. It's a small, not massively powerful server that, after some research, seemed perfect to have at home.

It came with an AMD Turion II dual-core CPU (1.5GHz), 2GB RAM and a 250GB disk. It also has some kind of simple RAID card from what I could determine, not that it mattered to me. My box did not have the DVD drive shown in the picture to the right. Best of all? This thing runs VMware ESXi 5.0 without a hitch.

But the initial configuration was a little lacklustre for my needs, so I took the 250GB disk and put it where the DVD drive would normally sit (it took a power adapter, a SATA cable and some cable ties to secure the disk), upgraded the RAM to 8GB (2x4GB sticks) and finally installed 4 x 2TB disks to replace all those random external disks I had. To top it all off, I installed VMware ESXi 5.0 on a 4GB nano USB stick, using the 250GB disk as datastore. Now the server boots straight into ESXi and can work as my home lab and my media server at the same time.

Now this is where it turns nerdy 🙂

I made a VM on the 250GB datastore which uses the four 2TB disks via RDM (Raw Device Mapping). I installed my favourite Linux distribution, set up a RAID5 using mdadm, formatted the array as a 5.5TB disk and installed Samba. I shared the disk on my home network and suddenly I had something to copy all my data to. In retrospect it might have been more fun/useful to use the "Sun ZFS Storage" appliance, as that is a system I find rather solid and an awesome product when the hardware or the company selling it (read: Oracle) isn't handicapping it.
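The storage part of that setup can be sketched as below. It's a dry run: the `run` function only prints each command, and the device names, mount point and share name are assumptions, not what I actually used.

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it.
run() { echo "+ $*"; }

# Build a RAID5 array from the four 2TB RDM disks (device names assumed):
run mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format the ~5.5TB array and mount it (mount point assumed):
run mkfs.ext4 /dev/md0
run mount /dev/md0 /srv/storage

# Then add a share like this to /etc/samba/smb.conf and restart Samba:
#   [storage]
#   path = /srv/storage
#   read only = no
run /etc/init.d/samba restart
```

Drop the `run` wrapper to execute for real, and expect the initial RAID5 sync of 4 x 2TB to take quite a while.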

Anyway, I have a home lab again! /happy


Getting Cacti working with Zend’s PHP packages

In short: you don't.

Backstory: we have a server running some web applications using the Zend PHP packages for Debian, installed via their repository (from the file /etc/apt/sources.list):

# zend server community edition via zend's repository
deb server non-free

...and now we wanted to move our existing Cacti installation to this server in order to put an old server to sleep. It should be an easy task, we thought, after doing some Google searches, and we made up a small checklist (borrowed from [HOWTO] Migrating Cacti From One Server to Another):

  1. Install the official Debian Cacti packages on the new server and make sure it works
  2. Turn off Cacti at the old server in order to have a known state of the database
  3. Migrate the database to the new server
  4. Copy the RRD files as XML
  5. Reconvert the XML back to RRD files
  6. Activate the new Cacti
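Steps 3-5 boil down to a database dump plus an rrdtool dump/restore cycle (the XML detour makes the RRD files portable between architectures). A dry-run sketch, where `run` only prints the commands and the paths are assumptions (adjust to where your rra directory actually lives):

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it.
run() { echo "+ $*"; }

# 3. On the old server, dump the database (then load it on the new one):
run mysqldump -r cacti.sql cacti

# 4. Dump every RRD to XML on the old server; newer rrdtool can write the
#    output file directly, older versions need a '>' redirect instead:
for f in /var/lib/cacti/rra/*.rrd; do
  run rrdtool dump "$f" "${f%.rrd}.xml"
done

# 5. After copying the XML files over, restore them on the new server:
for f in /var/lib/cacti/rra/*.xml; do
  run rrdtool restore "$f" "${f%.xml}.rrd"
done
```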

As it would turn out, the first item on the checklist was the one that gave us major trouble.

It soon became apparent that Zend's version of PHP did not include any support for SNMP, and any attempt to install PHP-related packages via Debian's own repositories threatened to uninstall the Zend-specific packages, thus breaking the existing applications.

Some further investigation also showed that Cacti uses the PHP Data Objects (PDO) interface to connect to MySQL, which also wasn't available in the Zend version.

Both these things were used only by the data collector poller.php and not by the web interface part of Cacti, which worked straight out of the box.

Our solution: download a recent PHP source (5.3.8) and compile it with the needed options, like:

./configure --with-snmp --disable-cgi --with-zlib --with-bz2 --with-curl --with-mysql --with-pdo-mysql --enable-sockets

install it in /usr/local and point Cacti to it for all scripts (the option is at Console -> Settings -> Paths; a bit down the page you'll find "PHP Binary Path"). Then it works: data started to flow in from the thousands of data sources we have.
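Put together, the build is the classic configure/make dance. A dry-run sketch (`run` only prints the commands; the download URL is the standard php.net distribution path, shown as an example):

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it.
run() { echo "+ $*"; }

run wget http://www.php.net/distributions/php-5.3.8.tar.gz
run tar xzf php-5.3.8.tar.gz
run cd php-5.3.8
# /usr/local is configure's default prefix, so the binary
# ends up as /usr/local/bin/php:
run ./configure --with-snmp --disable-cgi --with-zlib --with-bz2 \
    --with-curl --with-mysql --with-pdo-mysql --enable-sockets
run make
run make install
```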

Note that when we got to point 3 on our list, "Migrate the database to the new server", we had to reconfigure this again, as the settings from the old server overwrote the ones on the new server. That one took us more than 10 minutes to figure out 😉
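If you'd rather fix it in the database than re-click through the GUI: Cacti keeps these paths in its settings table, so something along these lines should restore the value (table and column names as in Cacti 0.8.x; verify against your version). Shown as a dry run that only prints the command:

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it.
run() { echo "+ $*"; }

run mysql cacti -e \
  "UPDATE settings SET value='/usr/local/bin/php' WHERE name='path_php_binary'"
```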


NFS and Debian Squeeze followup

I previously wrote about some troubles with using a server running Debian Squeeze (6.0) with NFS. I'm now inclined to think that either the problem has been fixed (a clean install of 6.0.2 does not have any of those problems) or that it was related to using an old server that was probably installed back in the Woody days and just 'apt-get dist-upgrade'd from there. The fact that the server worked well in general when upgraded from Woody to Squeeze, via Sarge, Etch and Lenny, is a great testament to how well Debian works imho.


VMware Tools on Debian Squeeze (a short howto)

(this post is more of an extended memory; now I know where I have written this down 🙂 )

So, installing VMware Tools on a virtual machine running Debian Squeeze (6.0.1 in this case) is really comparable to a walk in the park. First, prepare your installation by installing the packages required to build the kernel modules:

# apt-get install make gcc linux-headers-$(uname -r)

(this will pull in a whole bunch of additional packages, depending on what you already have installed)

Next, select your virtual machine in your vSphere Client, right-click and select Guest->Install/Upgrade VMware Tools. This will put a virtual CD into your virtual machine's CD-ROM drive (hopefully you didn't remove that, did you?). Next, mount it and extract the VMware Tools package to /tmp (or any other location of your choice):

# mount /media/cdrom
# cd /tmp
# tar xfz  /media/cdrom/VMwareTools*.tar.gz

Then it's time to build the tools and install them:

# cd vmware-tools-distrib
# ./vmware-install.pl

This will trigger a bunch of questions. While it is safe to accept the defaults on all of them, I like to keep my non-Debian stuff in /usr/local rather than in /usr to avoid any future conflicts, so I change that (one of the first questions). When the installation is finished, VMware ESX/ESXi can communicate with the virtual machine, which is really handy, and there are some other useful perks like the balloon driver and some custom VMware drivers.

After the installation I clean up after myself:

# cd /tmp
# rm -rf vmware-tools-distrib
# umount /media/cdrom

All done. A quick check of the Summary tab in my vSphere Client tells me that my virtual machine now has VMware Tools installed.


Using a Debian server as NFS storage for VMware ESXi

When installing a new VMware environment at work I had the idea of not using expensive SAN disks to provide the various ISO files used for installing virtual machines or software. In my environment I have access to several physical Linux servers running Debian (Squeeze), so why not use one of those for a (read-only) NFS share of ISOs, especially since we have way too much unused storage in most of those servers anyway?

After picking a suitable server and scp'ing some ISO files to a directory, it was time to install an NFS server. Really easy, just:

# apt-get install nfs-kernel-server

and after it pulled in some additional packages, there it was, ready to use. A quick edit to the /etc/exports file to share the directory /export/iso read-only to the hosts on my backend ESXi network:


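An /etc/exports entry matching that description looks roughly like this (the subnet is a placeholder; substitute your backend ESXi network):

```
# /etc/exports - read-only ISO share for the ESXi hosts
/export/iso 10.0.0.0/24(ro,no_subtree_check,async)
```

(`exportfs -ra` re-reads the file without a full restart of the NFS server.)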
and reloading the NFS server to enable the new config. Brilliant, that should be all (I thought).

Trying to mount the NFS share on one of my ESXi hosts failed with a quite cryptic error message ("Unable to connect to NAS volume mynfs: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details").

Ok, a quick look at the ESXi logs produced another message: "MOUNT RPC failed with RPC status 9 (RPC program version mismatch)". That's pretty odd. I know ESXi wants to use NFS version 3 over TCP, but I also know that Linux has been NFSv3-capable for a long time now. Time to see what the Linux server thinks about this. Glancing at the logfiles on the Linux side gave nothing useful. An rpcinfo maybe?

$ rpcinfo -p  | grep nfs

100003    2   tcp   2049  nfs
100003    3   tcp   2049  nfs
100003    4   tcp   2049  nfs

So I clearly see NFS version 3 over TCP up there. Something else must be wrong, so I checked all the configuration again to make sure there wasn't any weird stuff anywhere. Mounting the share from another Linux server worked (of course), so the functionality was there. Insert some headscratching here and a new cup of tea.

After looking at the output of "nfsstat" I realized that the NFS mount by the other Linux server generated statistics under "Server nfs v2" and not under "Server nfs v3" (all zeroes there). So the Linux-to-Linux mount used version 2, not 3. Why? A "ps auxwww" later, I noticed that the mountd process looked quite odd:

/usr/sbin/rpc.mountd --manage-gids --no-nfs-version 3

So mountd has version 3 disabled? No configuration I saw included that "--no-nfs-version 3" option, so it had to come from somewhere else. A quick read of the /etc/init.d/nfs-kernel-server file gave the answer:

$PREFIX/bin/rpcinfo -u localhost nfs 3 >/dev/null 2>&1 || \
    RPCMOUNTDOPTS="$RPCMOUNTDOPTS --no-nfs-version 3"

The startup script looks for NFS version 3 over UDP (the -u flag) and, if it doesn't find it, adds the "--no-nfs-version 3" flag. The "rpcinfo" output above did not list any NFS over UDP, so the flag was always added.
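The decision can be reproduced against the rpcinfo table from above. A small sketch that emulates what the init script checks (the sample output is hard-coded here):

```shell
#!/bin/sh
# The 'rpcinfo -p' output from above: NFS v2/v3/v4 registered over TCP only.
rpcinfo_out='100003    2   tcp   2049  nfs
100003    3   tcp   2049  nfs
100003    4   tcp   2049  nfs'

# The init script probes for NFS version 3 over UDP; emulate that test
# against the table (column 2 = version, column 3 = protocol):
if echo "$rpcinfo_out" | awk '$2 == 3 && $3 == "udp"' | grep -q nfs; then
  msg="v3 over udp found: mountd keeps NFSv3 enabled"
else
  msg="no v3 over udp: mountd gets --no-nfs-version 3"
fi
echo "$msg"
```

With no udp line in the table, the check fails and mountd ends up with "--no-nfs-version 3", exactly what ps showed.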

A quick edit of the /etc/init.d/nfs-kernel-server file to change the '-u' to '-t', so it looks for TCP instead of UDP, and a restart of the NFS service via:

# /etc/init.d/nfs-kernel-server restart

and voila, it works! The ESXi host mounted the directory without a problem. Problem solved.
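The edit itself can be scripted, with the caveat that a package upgrade may overwrite /etc/init.d/nfs-kernel-server and undo it. A dry-run sketch (`run` only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch: print each command instead of executing it.
run() { echo "+ $*"; }

# Make the probe use TCP (-t) instead of UDP (-u), then restart:
run sed -i 's/rpcinfo -u localhost nfs 3/rpcinfo -t localhost nfs 3/' \
    /etc/init.d/nfs-kernel-server
run /etc/init.d/nfs-kernel-server restart
```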

So in short: Debian Squeeze provides an NFS version 3 service over TCP but always disables it in the start-up script when there is no such service over UDP. I guess there is a reason for that somewhere, but a more flexible start-up script would have been nice; it would certainly have saved me some time.

Edit: when googling the above problem I came across posts mentioning DNS problems and system clocks that were out of sync, so I guess those things can cause trouble too.