
18 Jul 2011

Proving a point – building a SAN (part 4 – the neverending story?)

Finally, after weeks and weeks of waiting, all the parts for my SAN project had arrived. I unpacked the last cables, installed them, and everything worked just fine. All 16 disks came online and accepted my configuration. Victory!

(Well, one of the Seagate Constellation 500GB 2.5" disks failed quite early when I started to transfer some data onto the filesystem. There was no way I would wait another couple of weeks for Cisco to ship me a new one, so I went to the local PC dealer and picked up a Seagate Momentus 500GB, which worked perfectly.)

I decided to spread the disks evenly across the two physical controllers and their channels, so I ended up with:

Card 1, channel 1: 1 SAS disk (system) and 3 SATA (storage pool)
Card 1, channel 2: 1 SSD disk (zil) and 3 SATA (storage pool)
Card 2, channel 1: 1 SAS disk (system) and 3 SATA (storage pool)
Card 2, channel 2: 1 SSD disk (zil) and 3 SATA (storage pool)
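
Before creating any pools it is worth checking that all 16 disks actually show up where they should. On a Solaris-based system like NexentaStor, listing them from a root shell is enough; the c0/c1 prefixes in the device names correspond to the two controllers (just a quick sanity check, nothing NexentaStor-specific):

# list every disk the OS sees, without entering format's interactive menu
echo | format

# alternatively, show model, serial number and error counters per device
iostat -En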

Brilliant, now to install NexentaStor CE and make some zpools. I was happy to see that while I waited for the hardware to arrive a new version, 3.0.5, had been released, which hopefully addressed some of the evil bugs I had read about.

I set up my zpool with a mirrored pair of log devices (the two SSDs) and two raidz2 vdevs of six disks each, so the pool stays safe even if more than one disk fails later on. The end result looks like this:

config:

        NAME         STATE     READ WRITE CKSUM
        stor         ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c0t1d0   ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
          raidz2-1   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c1t7d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
        logs
          mirror-2   ONLINE       0     0     0
            c0t0d0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0

errors: No known data errors

 
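For reference, building the same pool by hand from a root shell would look roughly like this, as a single command using the device names from the status output above:

# two 6-disk raidz2 data vdevs plus the two SSDs as a mirrored ZIL
zpool create stor \
    raidz2 c0t1d0 c0t2d0 c0t3d0 c1t10d0 c1t5d0 c1t6d0 \
    raidz2 c0t4d0 c0t8d0 c0t6d0 c1t7d0 c1t8d0 c1t9d0 \
    log mirror c0t0d0 c1t4d0
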

Sure, a lot of disk space is 'wasted' with this configuration: I ended up with approximately 3.5TB of usable space from 6TB (12x500GB) of disks, since each raidz2 vdev gives up two disks to parity (leaving 8 data disks, roughly 4TB before ZFS overhead). But better safe than sorry when it comes to data, and in this case the data will be virtual disks for a bunch of servers. If there is one thing I hate, it's sitting up in the middle of the night restoring terabytes of data in an emergency (it's a fact that disks never break during daytime).

Enough lab testing; everything seemed to work just fine. Down to the datacenter to rack the server so it could be put into pre-production before my summer vacation in a couple of days. I took the four gigabit ports and bundled them into a LACP aggregate so that no single NFS client could eat all the network bandwidth.
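
On the Solaris side a four-port LACP bundle boils down to something like the following; the NIC names and address here are just placeholders, not the actual ones on this box:

# bundle four gigabit NICs into one LACP aggregation (interface names are examples)
dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0

# bring the aggregate up with an address (example address)
ifconfig aggr0 plumb 192.168.10.20 netmask 255.255.255.0 up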

I made an NFS share, mounted it in our VMware ESXi cluster, and transferred some non-critical VMs to the new SAN. And...
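
In NexentaStor the sharing itself is just a setting on the folder, but the ZFS-level equivalent plus the ESXi side looks roughly like this. The folder name, addresses and subnet are examples, and ESXi needs root access to the share; on NexentaStor the share path typically lives under /volumes:

# on the SAN: create a folder and export it over NFS, letting the ESX subnet in with root access
zfs create stor/vmware
zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' stor/vmware

# on an ESXi 4.x host: mount it as an NFS datastore (the label "san-stor" is arbitrary)
esxcfg-nas -a -o 192.168.10.20 -s /volumes/stor/vmware san-stor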

"Damn! That's fast..."

The difference with a proper hardware setup (read: an SSD as ZIL in the zpool) was like night and day compared to our old, SSD-less Sun 7210. The average write latency reported in VMware was in the single digits (milliseconds), compared to the three-digit numbers we were used to.

Edit: now, a few weeks later, we have done even more testing and even transferred some critical systems to the new SAN, and it just works, without a single hiccup so far (knock on wood). Next up is to transfer all data off the Sun 7210 so we can upgrade its firmware and re-install the system with the latest updates.
