
IOPS calculation for your FAST Pool

I will provide an example of calculating the required spindles in combination with a known skew. No capacity will be addressed in this post; I will base it purely on IOPS / throughput and apply it to a mixed FAST VP pool.

We all know the write penalty for each RAID type:

  • RAID10: 2
  • RAID5: 4
  • RAID6: 6
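
In other words: each read costs one backend I/O, while each write costs as many backend I/Os as the penalty above. Here is a minimal sketch of that formula in Python (my own illustration, not a vendor tool):

    # Write penalty per RAID type, as listed above.
    WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

    def backend_iops(front_end_iops, read_ratio, raid_type):
        """Backend IOPS = reads + write_penalty * writes."""
        reads = front_end_iops * read_ratio
        writes = front_end_iops * (1 - read_ratio)
        return reads + WRITE_PENALTY[raid_type] * writes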

What if we have an environment with a skew of 80% and a required 50000 IOPS? Besides this, we know the workload is 80% reads and only 20% writes. Remember that Flash is a good reader.

Now that we know there is a skew of 80%, we can calculate the amount of flash we need inside the pool:

0.80 * 50000 = 40000 IOPS needed inside the highest tier of our FAST VP pool. For the remaining 10000 IOPS, we keep the rule of thumb of placing 80% on SAS and 20% on NLSAS:

0.2 * 10000 = 2000 IOPS for NLSAS

0.8 * 10000 = 8000 IOPS for SAS

Now, without the write penalty applied, we need the following in our pool:

  • Flash: 40000 IOPS
  • SAS: 8000 IOPS
  • NLSAS: 2000 IOPS
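
The same split expressed in a few lines of Python (a sketch of the arithmetic above; the 80/20 SAS/NLSAS split is just the rule of thumb):

    TOTAL_IOPS = 50000
    SKEW = 0.80                      # share of I/O served by the hottest tier

    flash = SKEW * TOTAL_IOPS        # 40000 IOPS for Flash
    remainder = TOTAL_IOPS - flash   # 10000 IOPS left over
    sas = 0.8 * remainder            # 8000 IOPS for SAS
    nlsas = 0.2 * remainder          # 2000 IOPS for NLSAS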

Write Penalty

But what about the backend load? By backend load I mean the load with the write penalty included, which is what we need to calculate the exact number of spindles. Remember that this environment has 80% reads and only 20% writes:

(0.8 * 40000) + (2 * 0.2 * 40000) = 32000 + 16000 = 48000 IOPS for FAST Cache, which is in RAID10

or..

(0.8 * 40000) + (4 * 0.2 * 40000) = 32000 + 32000 = 64000 IOPS for Flash in our pool on RAID5

(0.8 * 8000) + (4 * 0.2 * 8000) = 6400 + 6400 = 12800 IOPS for SAS in RAID5

(0.8 * 2000) + (6 * 0.2 * 2000) = 1600 + 2400 = 4000 IOPS for NLSAS in RAID6
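
Using the backend_iops() sketch from earlier, the three pool results can be reproduced like this:

    READ_RATIO = 0.80
    for tier, iops, raid in [("Flash", 40000, "RAID5"),
                             ("SAS", 8000, "RAID5"),
                             ("NLSAS", 2000, "RAID6")]:
        print(tier, backend_iops(iops, READ_RATIO, raid))
    # Flash 64000.0
    # SAS 12800.0
    # NLSAS 4000.0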

How many drives do I need per tier?

We keep the following rules of thumb in mind for the IOPS capacity per drive:

  • Flash: 3500 IOPS
  • SAS 15k: 180 IOPS
  • NLSAS: 90 IOPS

To make sure you are ready for bursts, you could apply “Little’s Law” and plan with only about 70% of these per-drive numbers, so you always have an extra buffer. This is up to you, as we will also round up the number of disks to fit the RAID layout.

64000 / 3500 = 18.3, so 19 disks, which we round up to 20 when we want flash to be in a RAID5 (4+1) configuration

12800 / 180 = 71.1, so 72 disks, which we round up to 75 to keep to RAID5 (4+1) best practices again

4000 / 90 = 44.4, so 45 disks, which we round up to 48 if we want to keep 6+2 RAID6 sets, for example
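
As a sketch, the raw drive counts (before rounding up to full RAID groups) look like this in Python; the optional headroom parameter implements the 70% buffer mentioned above:

    import math

    # Rule-of-thumb IOPS capacity per drive, as listed above.
    IOPS_PER_DRIVE = {"Flash": 3500, "SAS15k": 180, "NLSAS": 90}

    def drives_needed(backend_load, drive_type, headroom=1.0):
        # Set headroom=0.7 to keep ~30% of each drive free for bursts.
        return math.ceil(backend_load / (IOPS_PER_DRIVE[drive_type] * headroom))

    print(drives_needed(64000, "Flash"))    # 19 -> round up to 20 (4+1 RAID5)
    print(drives_needed(12800, "SAS15k"))   # 72 -> round up to 75 (4+1 RAID5)
    print(drives_needed(4000, "NLSAS"))     # 45 -> round up to 48 (6+2 RAID6)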

Keep in mind that this calculation does not include any capacity based on TB or GB, only IOPS!

How to Create a Mirrored Root Pool After Installation?

  1. Display your current root pool status.
    # zpool status rpool
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
  2. Attach a second disk to configure a mirrored root pool.
    # zpool attach rpool c1t0d0s0 c1t1d0s0
    Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
  3. View the root pool status to confirm that resilvering is complete.
    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h1m, 24.26% done, 0h3m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  3.18G resilvered
    
    errors: No known data errors

    In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:

    scrub: resilver completed after 0h10m with 0 errors on Thu Mar 11 11:27:22 2010
  4. Apply boot blocks to the second disk after resilvering is complete.
    sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  5. Verify that you can boot successfully from the second disk.
  6. Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
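
    For example, on SPARC (assuming the second disk has the device alias disk1; check your aliases with the devalias command at the ok prompt first):

    sparc# eeprom boot-device="disk1 disk"

    Or, from the boot PROM:

    ok setenv boot-device disk1 disk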