Matt Pson Just another dead sysadmin blog


On using Cisco’s UCS servers as normal servers

"So you are using the whole Cisco UCS system ...and how?"

I have gotten the above question in a few different flavors quite a few times lately. And the answer is: no, we do not. We use them as normal, single rack servers, just as you would a rack server from any other vendor.

The Cisco C200 M2 comes with decent CPUs (two sockets), a fair number of RAM slots (up to 192 GB), 3 PCI Express slots (one low-profile), and room for up to 4 standard 3.5" SATA disks of your choice (no more being stuck with what other vendors can supply), plus a very decent out-of-band management card as standard (compare DRAC or iLO). All of that comes at quite a lower price than a comparable system from, say, Dell. Sure, the standard included warranty and support is far from what Dell or HP offers, but I'm sure Cisco will happily supply that for an additional fee if you want. We recently saw an increase in the base price of this system, but it is still 25-50% off the prices Dell keeps flooding our mailboxes with (which, I know, are not the prices we would pay after phoning our Dell representative).

The Cisco C210 M2 is much the same basic system, but still quite a bit different: it has 16 2.5" disk slots in a 2U chassis, where you have to get the disks from Cisco (which can take quite some time if you are unlucky), and 5 PCI Express slots. Apart from those two things, the C200 and C210 seem to share all other properties.

I have reported earlier on a newer system, the C260 monster (64 DIMM slots and 16 drives in a 2U chassis with the new Intel Xeon family processors!), but I have yet to see it in real life (and, more importantly, in a price list).

But yes, these are usable as normal rack servers without any modifications and provide good value in the low-end range of rack servers. I would recommend them to anyone looking for a decent rack server any day.


Installing Solaris via CIMC – nope, not possible

Well, actually it is very much possible to install Oracle Solaris via the CIMC on the C200/C210 servers, as long as one does not rely on the 'Virtual Media' function to mount an ISO file to install from. The installation just grinds to a halt when it tries to figure out where the CD-ROM is. From the look of it, the installer finds the physical drive, tries to attach itself to that for the rest of the installation, and simply ignores the virtual one.

Now, I wish I had known that before I left work, as going to the data centre just to insert a CD and go back home again seems like a lot of time for very little work (erhm... no, I'm not working from home again, am I?).


Datacenter cable management woes

The other day I spent the evening rewiring some racks in our datacenter, mainly replacing our old, unmanaged PDU strips from Bachmann with new, managed ones from APC. The goal was to keep better track of power usage and get more outlets in our racks. Standing there, I realized how cable usage in our datacenter has developed over the last few years.

10 years ago most servers were 2-8 U and had a single PSU and a single network cable attached to it and that was it (this was before we had KVM systems in place - go-go console cart).

5 years ago every server was 2-4 U, had redundant PSUs, two network cables (one for public services and one for backups) and 3 cables for KVM (VGA, PS/2 keyboard and mouse).

Yesterday, looking at our latest servers: 1-2 U, still redundant PSUs (2 cables), KVM (2 cables, now VGA and USB) and ...up to 11 network cables (VMware ESXi servers using 2 on-board network ports, two 4-port network cards and a CIMC port). Madness!

This means that in the space of half a rack with 15 servers we have, not counting switches and other infrastructure, 30 power cables, 30 KVM cables and 165 (!) colour-coded network cables (more about those colours some other time). Keeping it all tidy and neat is a full-time job, I tell you!
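For the curious, the arithmetic above is easy to sketch out (the per-server cable counts are the ones from this post; the labels are just mine):

```python
# Cable count for half a rack of identical 1-2 U servers,
# using the per-server figures from this post.
SERVERS = 15

cables_per_server = {
    "power": 2,     # redundant PSUs
    "kvm": 2,       # VGA + USB
    "network": 11,  # 2 on-board ports + two 4-port NICs + 1 CIMC port
}

totals = {kind: count * SERVERS for kind, count in cables_per_server.items()}
print(totals)                # {'power': 30, 'kvm': 30, 'network': 165}
print(sum(totals.values()))  # 225 cables, before switches and other infrastructure
```

That is 225 cables for fifteen 1-2 U boxes; the network cables alone outnumber everything else we ran five years ago.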

I think I'll spend quite some time with our cables this summer...

Disclaimer: the picture is not of our datacenter; I found it on the net when googling for "datacenter cable management" ...and scary as it looks, I have seen worse when visiting clients at times.