Matt Pson - Just another dead sysadmin blog

26Sep/11

The HP / 3Com confusion…

Just gotta love the confusion when a customer orders an HP ProCurve switch and unboxes a 3Com switch with a completely different model number.

Ordering an "HP ProCurve v1905" will get you a "3Com Baseline Switch 2250-SFP Plus". Really?

Thankfully most customers are understanding when they get the explanation that HP bought 3Com a while back.

Wonder when we'll get accused of fraud? "Hey, I ordered an HP, not a crappy 3Com!" I'm just waiting for it.

28Aug/11

VMware ESXi 5? We just got rid of 3.5

After weeks (months?) of active preparation and step-by-step migrations I made the final transfer of data out of our old VMware ESXi 3.5 environment. There is no way I will start planning the move to version 5 this week/month (fact is, there wasn't any burning "wowah! gotta have now!" item in the feature list when I took a quick glance).

27Aug/11

NFS and Debian Squeeze followup

I previously wrote about some troubles with using a server running Debian Squeeze (6.0) with NFS. I'm now inclined to think that either the problem was fixed (a clean install of 6.0.2 does not have any of those problems) or that it was related to using an old server that was probably installed back in the Woody days and simply 'apt-get dist-upgrade'd from there. The fact that the server worked well in general when upgraded all the way from Woody to Squeeze, via Sarge, Etch and Lenny, is a great testament to how well Debian works imho.
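
For reference, each of those release-to-release hops is basically the same routine. A rough sketch (the Debian release notes for each version have the authoritative steps, and the release names below are just one example hop):

    # point APT at the next release, e.g. lenny -> squeeze
    sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
    apt-get update
    # minimal upgrade first, then the full distribution upgrade
    apt-get upgrade
    apt-get dist-upgrade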

18Aug/11

On using Cisco’s UCS servers as normal servers

"So you are using the whole Cisco UCS system ...and how?"

I have gotten the above question in a few different flavors quite a few times lately. And the answer is: no, we do not; we use them as normal, standalone rack servers, just as you would with a rack server from any other vendor.

The Cisco C200 M2 comes with decent CPU options (it has 2 CPU sockets), a good number of RAM slots (up to 192GB), 3 PCI Express slots (one low-profile), and room for up to 4 standard 3.5" SATA disks of your choice (no more being stuck with what other vendors can supply), plus a very decent out-of-band management card as standard (comparable to Dell's DRAC or HP's iLO). All that at quite a bit lower price than a comparable system from, say, Dell. Sure, the standard included warranty and support is far from what Dell or HP offers, but I'm sure Cisco will happily supply that for an additional fee if you want it. We recently saw an increase in the base price of this system, but it is still 25-50% below the prices that Dell keeps flooding our mailboxes with (which I know are not the prices we would actually pay after phoning our Dell representative).

The Cisco C210 M2 is much the same basic system, but still quite a bit different: it has 16 2.5" disk slots in a 2U chassis, where you have to get the disks from Cisco (which can take quite some time if you are unlucky), and 5 PCI Express slots. Apart from those two things the C200 and C210 seem to share all other properties.

I have previously written about a newer system, the C260 monster (64 DIMM slots and 16 drives in a 2U chassis with the new Intel Xeon family of processors!), but I have yet to see it in real life (and, more importantly, in a price list).

But yes, these are usable as normal rack servers without any modifications and provide good value in the low-end range of rack servers. I would recommend them to anyone just looking for a decent rack server any day.

18Jul/11

Proving a point – building a SAN (part 4 – the neverending story?)

Finally, after weeks and weeks of waiting, all the parts for my SAN project had arrived. I unpacked the last cables, installed them, and they sure did work just fine. All 16 disks jumped online and agreed to my configuration. Victory!

(Well, one of the Seagate Constellation 500GB 2.5" disks failed quite early when I started to transfer some data onto the filesystem. There was no way I was going to wait another couple of weeks for Cisco to ship me a new one, so I went to the local PC dealer and picked up a Seagate Momentus 500GB, which worked perfectly.)

I decided to spread the disks evenly across the two physical controllers and their channels, so I ended up with:

Card 1, channel 1: 1 SAS disk (system) and 3 SATA (storage pool)
Card 1, channel 2: 1 SSD disk (zil) and 3 SATA (storage pool)
Card 2, channel 1: 1 SAS disk (system) and 3 SATA (storage pool)
Card 2, channel 2: 1 SSD disk (zil) and 3 SATA (storage pool)

Brilliant, now to install NexentaStor CE and make some zpools. I was happy to see that, while I was waiting for the hardware to arrive, a new version (3.0.5) had been released, which hopefully addressed some evil bugs I had read about.

I set up my zpool with two mirrored log devices and two raidz2 vdevs of 6 disks each, to stay safe even if more than one disk should fail later on. The end result looks like this:

config:

        NAME         STATE     READ WRITE CKSUM
        stor         ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c0t1d0   ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
          raidz2-1   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c1t7d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
        logs
          mirror-2   ONLINE       0     0     0
            c0t0d0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0

errors: No known data errors
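
For anyone wanting to build the same layout from the command line: NexentaStor normally does all of this from its web GUI, so take this as a rough sketch of the equivalent zpool commands, using the device names from the listing above.

    # create the pool with the first raidz2 vdev
    zpool create stor raidz2 c0t1d0 c0t2d0 c0t3d0 c1t10d0 c1t5d0 c1t6d0
    # add the second raidz2 vdev
    zpool add stor raidz2 c0t4d0 c0t8d0 c0t6d0 c1t7d0 c1t8d0 c1t9d0
    # add the two SSDs as a mirrored ZFS intent log (ZIL)
    zpool add stor log mirror c0t0d0 c1t4d0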


Sure, there was a lot of disk space 'wasted' with this configuration: each six-disk raidz2 vdev gives up two disks to parity, so the 12x500GB (6TB) of disks boils down to roughly 4TB of raw capacity and approximately 3.5TB of usable space after overhead. Better safe than sorry when it comes to data, though, and in this case the data would be virtual disks for a bunch of servers. If there is one thing I hate it's sitting up in the middle of the night restoring terabytes of data in an emergency (it's a fact that disks never break during daytime).

Enough lab testing; everything seemed to work just fine. Down to the datacenter to rack the server so it could be put into pre-production before my summer vacation in a couple of days. I took the 4 gigabit ports and made a nice LACP aggregation out of them, so that no single NFS client can eat all the network bandwidth.
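
The aggregation itself is set up from the NexentaStor web GUI, but on an illumos/OpenSolaris-based box the underlying steps look roughly like this (the interface names and the address are just examples, not my real ones):

    # bundle four physical gigabit links into one LACP aggregation
    dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0
    # bring the aggregation up with an address
    ifconfig aggr0 plumb 10.0.0.10 netmask 255.255.255.0 up

The switch end of course needs a matching LACP port-channel across the same four ports.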

Made an NFS share, mounted it in our VMware ESXi cluster and transferred some non-critical VMs to the new SAN. And...

"Damn! That's fast..."

The difference when using a proper hardware setup (read: SSD as ZIL in a zpool) was like night and day compared to our old, SSD-less, Sun 7210. The average write latency reported by VMware was in the single digits compared to the three-digit numbers we were used to.
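
For the record, hooking the ESXi hosts up to the new storage is not much more than sharing the ZFS folder over NFS and adding it as a datastore on each host. Roughly like this (the folder name, datastore label and address below are made up for the example):

    # on the NexentaStor side: export the folder over NFS
    zfs set sharenfs=on stor/vmware
    # on each ESXi host: add it as an NFS datastore
    esxcfg-nas -a -o 10.0.0.10 -s /volumes/stor/vmware san01

Keeping an eye on 'zpool iostat -v stor 5' during a transfer is also a handy way to confirm that the mirrored SSD log devices really are soaking up the synchronous NFS writes.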

Edit: now, a few weeks later, we have done even more tests and even transferred some critical systems to the new SAN, and it just works, without a single hiccup so far (knock on wood). Next up is to transfer all data off the Sun 7210 so we can upgrade its firmware and re-install that system with the latest updates.