Matt Pson Just another dead sysadmin blog


Install an SSL certificate on Zimbra 8

The quickest guide ever; follow it at your own risk (your mileage may vary, etc.). This is how I did it (IIRC), as root:

# openssl req -nodes -newkey rsa:2048 -keyout server.key -out server.csr


(then off to buy an SSL certificate from some trustworthy provider ...or, as in my case, one that gives you a great deal. What I got back was two files: a certificate (.crt) and a bundle (ca-bundle) to provide a certificate chain for authentication)
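Before copying anything into place it can be worth sanity-checking that the private key and the certificate you got back actually belong together: their RSA modulus must be identical, otherwise Zimbra will reject the pair at deploy time. A minimal sketch (using a throwaway self-signed pair here instead of a real purchased certificate):

```shell
# Generate a throwaway self-signed key/cert pair just for demonstration -
# with a real purchase you'd already have server.key and server.crt.
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=mail.example.com" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null

# Hash the modulus of the key and of the certificate; the two hashes
# must match for the pair to be usable together.
key_mod=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
crt_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)

[ "$key_mod" = "$crt_mod" ] && echo "key and certificate match"
```

Substitute your real `server.key` and `server.crt` for the demo files; if the hashes differ, you have mixed up files from different CSRs.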


# cp server.key /opt/zimbra/ssl/zimbra/commercial/commercial.key
# cp server.crt /opt/zimbra/ssl/zimbra/commercial/commercial.crt
# cp ca-bundle /opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt
# cd /opt/zimbra/ssl/zimbra/commercial
# /opt/zimbra/openssl/bin/openssl verify -CAfile commercial_ca.crt commercial.crt

(if the last step fails with an error message you probably have an incomplete bundle. Download a more complete one from your SSL provider - you may have to merge the files yourself)
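If you do have to merge the files yourself, the order matters: intermediates first, root last, so the chain can be followed from your certificate up to the root. A sketch with made-up file names (your provider will name them differently, and the contents here are placeholders standing in for PEM blocks):

```shell
# Placeholder files standing in for the PEM certificates your
# provider sent - substitute the real file names.
printf 'INTERMEDIATE CERT\n' > intermediate.crt
printf 'ROOT CERT\n' > root.crt

# Intermediate(s) first, root last, concatenated into one bundle.
cat intermediate.crt root.crt > ca-bundle

cat ca-bundle
```

With real PEM files the same `cat` works unchanged; re-run the `openssl verify` step afterwards to confirm the bundle is now complete.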

# /opt/zimbra/bin/zmcertmgr deploycrt comm /opt/zimbra/ssl/zimbra/commercial/commercial.crt /opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt

(this deploys the certificate into Zimbra. Now just restart Zimbra to activate it all across the board)

# su - zimbra
$ zmcontrol stop
$ zmcontrol start


There, done.


NFS and Debian Squeeze followup

I previously wrote about some trouble with using a server running Debian Squeeze (6.0) with NFS. I'm now inclined to think that either the problem was fixed (a clean install of 6.0.2 does not have any of those problems) or that it was related to using an old server that was probably installed back in the Woody days and 'apt-get dist-upgrade'd from there. The fact that the server worked well in general when upgraded from Woody to Squeeze, via Sarge, Etch and Lenny, is a great testament to how well Debian works, imho.


HP 2810 Max number of VLANs?

Since it's not my day today, I ran into an odd little problem when changing around some VLANs on an HP ProCurve 2810-24G switch. I was using the web interface (because I'm lazy, combined with the fact that it actually works on the 2810, unlike the old 2650 switches where it doesn't most of the time) and encountered this:

"Yeah right, I doubt we got a switch ever with a limit of 8 VLANs". A quick Google for the datasheet  reports that this switch support up to 256 VLANs simultaneously. It's just that you have to configure the maximum number of VLANs manually and HP in their infinite wisdom decided that 8 was a good default. Fine so I just pop into the cli and reconfigure it then, as it was not available though the webconfiguration. Well, yes, the (re)configuration to a maximum of 32 VLANs was easy enough but to make the configuration 'bite' I'll have to reboot the switch, something that is always easier said than done in a production environment. /facepalm

Edit: it seems that HP took this decision to save RAM in the switch (I think I read that in the manual). A sound decision until you look at the memory currently used by the switch:

Our switch has 24MB RAM and, of those, 17MB are unused. 🙂
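For reference, the CLI part really is just a couple of commands. This is from memory, so double-check the exact syntax against the configuration guide for your firmware version:

```
ProCurve# configure
ProCurve(config)# max-vlans 32
ProCurve(config)# write memory
ProCurve(config)# reload
```

The `reload` at the end is the annoying part mentioned above - the new maximum does not take effect until the switch has rebooted.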


Clustering Juniper SRX240 gateways

So, unboxing a pair of Juniper SRX240's at work today (actually not really: they were the re-branded Dell versions, but as far as I'm told only the color of the chassis and the logo differ). These units will hopefully replace various old Linux firewall systems that we still have around - old habits and old hardware that have served us wonderfully for several years now.

While still waiting for some literature to arrive, I thought that I could cluster them and take them down to the datacenter where they will live before we start to work on the final security configuration.

Juniper provides a handy guide named "SRX Getting Started - Configure Chassis Cluster (High Availability) on a SRX240 device" (link) and it looked easy enough. It was not, largely because the configuration steps in that document were obviously not written for the current factory-installed configuration in 10.3R2.11 (the version our units shipped with).

Let's just say that if you follow it, and you aren't very familiar with clustering SRX devices, you will end up with several errors, and one in particular that says "ge-0/0/0 - HA management interface cannot be configured" (or something like that), and your configuration can't be committed. Deadlock. Interestingly enough, you cannot restore the factory configuration (using the button at the front of the unit) either once you have issued the "set chassis cluster" command, since it'll complain about the configuration. You first need a working configuration, then remove the clustering, then restore the default configuration, then do it the right way. Oh well...

Never mind my first attempt; I'll describe how to do it in a way that works. Quick and easy:

1. Boot the unit as you would the first time after unboxing it (or as in my case after carefully restoring it to factory default with some troubles...).

2. Login as root and enter configuration mode then:

delete system autoinstallation
delete system services dhcp
delete system services web-management http interface vlan.0
delete system services web-management https interface vlan.0
delete interfaces interface-range interfaces-trust
delete interfaces ge-0/0/0
delete interfaces vlan
delete security zones security-zone trust
delete security nat
delete security zones security-zone untrust
delete security policies
delete vlans

(actually, instead of 'delete interfaces ge-0/0/0' I managed to do 'delete interfaces', removing all ge-0/0/* interfaces, but it worked anyway)

'delete interfaces interface-range interfaces-trust' gave me an error, but I've seen it mentioned in several documents so I included it here; I guess it applies to a different version of the factory configuration than the one I had.

3. If you haven't set up a password for root, do so now:

set system root-authentication plain-text-password

4. Do 'commit check'; if there are still any errors, use 'delete' to remove those sections of the configuration, then do a 'commit'.

5. Do the same thing on the other unit that will be part of your cluster.

6. Now you are probably ready to start configuring your cluster using the Juniper document I linked at the start. Don't forget that you should connect ge-0/0/1 on both units (the control link) as well as ge-0/0/2 (for the fab links). I did this while they were rebooting after the 'set chassis cluster' command.
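For completeness, the clustering step itself boils down to one operational-mode command per unit plus the fabric interface definitions. This is a sketch from my notes (cluster-id and the fab member interfaces are choices, not requirements - follow the linked Juniper guide for the details); note that after the reboot, node 1's interfaces are renumbered as ge-5/0/*:

```
{on the first unit}
root> set chassis cluster cluster-id 1 node 0 reboot

{on the second unit}
root> set chassis cluster cluster-id 1 node 1 reboot

{then, in configuration mode on the primary, define the fab links}
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-5/0/2
```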

After doing it this way, configuration was smooth and the cluster worked as expected. Tomorrow we'll throw some security configuration at it and see it in action - I have no doubt it'll do fine for what we'll use it for.