Matt Pson Just another dead sysadmin blog

7 May 2011

Proving a point – building a SAN (part 2)

So, let's make a little update to the last post, as I have now put together (and ordered) a bunch of hardware in order to build a better SAN than my last experiment. This is what I ended up with:

  • Cisco C210 M2 server with 24GB RAM ( a nice 2U server with 16 x 2.5" disk slots, starting off with one 2.4GHz quad-core Intel E5620 CPU )
  • an Intel quad-port gigabit network card ( I plan to aggregate these ports into a 4Gbps NFS link using LACP )
  • two LSI MegaRAID SAS3081E-R RAID cards ( to get 16 channels to connect disks to )
  • 2 x 146GB 10,000rpm SAS disks ( mirrored for the OS )
  • 2 x 120GB OCZ Vertex 3 Max IOPS SSDs ( mirrored for the ZFS log )
  • 12 x 500GB 7200rpm SATA disks ( to create the storage pool )
  • NexentaStor ( a storage OS based on Solaris )
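Since NexentaStor is OpenSolaris-based, the LACP aggregation of the four Intel ports could be done with dladm, roughly like this. This is just a sketch: the link names (e1000g0 to e1000g3) and the address are assumptions for illustration, and the switch ports need a matching LACP configuration on their end.

```shell
# Sketch only: link names and address are placeholders, not my real config.
# Create an 802.3ad (LACP) aggregation over the four Intel ports,
# actively negotiating LACP with the switch.
dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0

# Plumb the aggregated link and bring it up with an address.
ifconfig aggr0 plumb 192.168.10.10/24 up
```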

That would be it. It will probably be a few weeks until all the parts arrive, which gives me time to think about the setup.

When it comes to disk layout I'm still deciding between two options. The first is 6 mirror sets of 2 disks each ( effectively a RAID-10, giving a total of 3TB usable disk but with what should be the best performance possible ). Creating the mirrors across controllers, so that disk 1 on controller one is mirrored with disk 1 on controller two, would also give quite high resilience to failing disks: if the 'right' disks (or a whole controller) fail, the pool could in theory survive 6 disk failures. The second is 11 disks in a raidz with one hot spare ( like RAID-5, giving a total of 5TB usable disk but with less resilience to faults and the usual write penalty of stripes with distributed parity ). It all depends on whether the SSD log disks are enough to achieve good write performance, which I'm told they will be.
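In zpool terms the two layouts would look something like this. The device names (c1t0d0 and so on) are just placeholders for whatever the two controllers end up enumerating as:

```shell
# Option 1: six 2-disk mirrors, pairing disk N on controller one (c1)
# with disk N on controller two (c2). Device names are placeholders.
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  mirror c1t3d0 c2t3d0 \
  mirror c1t4d0 c2t4d0 \
  mirror c1t5d0 c2t5d0

# Option 2: one 11-disk raidz, with the twelfth disk as a hot spare.
zpool create tank \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  spare c2t5d0
```

With option 1, losing both halves of any one mirror kills the pool, so the across-controller pairing only helps when the failures land on different pairs (or take out exactly one controller).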

The log disks, the mirrored SSDs, are another thing I'm pondering, as they come in a 120GB size and I guess that ZFS will only use a fraction of that. Some tell me that partitioning each disk into 2 partitions, using one mirrored pair of partitions for the write log and the other pair for read cache, is a viable solution. I wonder if that will even be needed - our problems so far have not been about read performance, just write performance.
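If I go the split-SSD route, it would look roughly like this ( the slice names are assumptions ). One detail worth noting: ZFS mirrors log devices, but L2ARC cache devices are always added individually and cannot be mirrored - which is fine, since losing a cache device just means reads fall back to the pool disks.

```shell
# Slice 0 on each SSD: mirrored ZIL (separate intent log) for writes.
zpool add tank log mirror c3t0d0s0 c3t1d0s0

# Slice 1 on each SSD: L2ARC read cache. Cache devices cannot be
# mirrored; they are striped and their loss is non-fatal.
zpool add tank cache c3t0d0s1 c3t1d0s1
```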

One person even told me that since I'm using SSD drives with MLC technology, I could partition each into four 30GB partitions, creating an 8-way mirror for the ZFS log in order to cover for cell failures on the disks. I'm not convinced, as that would lead to 4 times more write operations, and if a disk fails I still have to replace the whole disk.

Well, well, when the hardware arrives I'll have to do some testing I think. I'll write some updates when I have something to report 🙂
