Wednesday, February 20, 2013

EMC VNX Pool Improvements after Flare 32 Upgrade

In an earlier blog post, I wrote about the new enhancements provided by the Flare 32 upgrade on the EMC VNX series SAN. After rebuilding the storage pools on our VNX this week, I was able to quantify some of the benefits gained.


Background

Our EMC VNX5300 SAN originally had two storage pools, configured as follows:

Pool 0
  • 4x 600GB SAS Disks - Configured as RAID5 [4+1]: 1x 4 disk private raid group (PVR)
  • 8x 1TB NLSAS Disks - Configured as RAID5 [4+1]: 2x 4 disk PVR
  • 6x 3TB NLSAS Disks - Configured as RAID5 [4+1]: 1x 6 disk PVR
  • Total Usable: 20.8TB
Pool 1
  • 4x 600GB SAS Disks - Configured as RAID5 [4+1]: 1x 4 disk private raid group (PVR)
  • 6x 1TB NLSAS Disks - Configured as RAID5 [4+1]: 1x 6 disk PVR
  • 6x 3TB NLSAS Disks - Configured as RAID5 [4+1]: 1x 6 disk PVR
  • Total Usable: 19.9TB

As this system predates my employment, I don't know why it was set up this way. Several aspects of the configuration go against EMC's best practices:
  • RAID 5 [4+1] should be added as groups of 5 disks
  • RAID 6 should be configured for disks larger than 1TB
Note: Mixed RAID groups were not supported in storage pools prior to Flare 32.

Since deviating from best practices can degrade performance, we decided to rebuild both pools at the same time we were adding more capacity (creating a third pool). Rebuilding a pool is a destructive process that destroys all data in it, so the data must first be migrated off. This is easily done with LUN migrations onto an empty pool, which we had the luxury of since we were adding more storage.
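For anyone scripting this, a minimal sketch of driving the migrations with naviseccli from Python is below. This is an assumption on my part, not the exact procedure we used: the SP address and LUN numbers are hypothetical placeholders, and the functions build the command lines rather than run them so you can review before executing.

```python
# Sketch of scripting VNX LUN migrations via naviseccli (hypothetical
# SP address and LUN numbers; review commands before running them).
import subprocess

SP = "10.0.0.1"  # storage processor IP -- placeholder, not our real SP


def migrate_lun(source_lun, dest_lun, rate="asap"):
    """Build the command to start migrating source_lun onto dest_lun."""
    cmd = ["naviseccli", "-h", SP, "migrate", "-start",
           "-source", str(source_lun), "-dest", str(dest_lun),
           "-rate", rate,  # low | medium | high | asap
           "-o"]           # -o suppresses the confirmation prompt
    return cmd  # to actually run: subprocess.run(cmd, check=True)


def migration_status():
    """Build the command to list progress of in-flight migrations."""
    return ["naviseccli", "-h", SP, "migrate", "-list"]
```

Once a migration completes, the destination LUN transparently takes over the source LUN's identity, so hosts see no change.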


Initial Design

Here is the initial design from an EMC engineer for rebuilding our pools according to best practices. By purchasing two 600GB SAS disks, two 1TB NLSAS disks, and four 3TB NLSAS disks, we obtain:

Pool 0
  • 5x 600GB SAS Disks - Configured as RAID5 [4+1]: 1x 5 disk private raid group (PVR)
  • 8x 1TB NLSAS Disks - Configured as RAID6 [6+2]: 1x 8 disk PVR
  • 8x 3TB NLSAS Disks - Configured as RAID6 [6+2]: 1x 8 disk PVR
  • Total Usable: 24.2TB
Pool 1
  • 5x 600GB SAS Disks - Configured as RAID5 [4+1]: 1x 5 disk private raid group (PVR)
  • 8x 1TB NLSAS Disks - Configured as RAID6 [6+2]: 1x 8 disk PVR
  • 8x 3TB NLSAS Disks - Configured as RAID6 [6+2]: 1x 8 disk PVR
  • Total Usable: 24.2TB

Using this design, we would be in compliance with the best practices. However, the design can be improved further by combining Pool 0 and Pool 1.


Revised Design

By combining both pools, we can configure the new pool as follows:

Combined pool
  • 10x 600GB SAS Disks - Configured as RAID5 [4+1]: 2x 5 disk private raid group (PVR)
  • 16x 1TB NLSAS Disks - Configured as RAID6 [14+2]: 1x 16 disk PVR
  • 16x 3TB NLSAS Disks - Configured as RAID6 [14+2]: 1x 16 disk PVR
  • Total Usable: 55.6TB

From the numbers, we can see that combining the pools and using the more efficient RAID options increases our usable disk space from 48.4TB to 55.6TB, an increase of approximately 15%. The only downside I can see is slightly longer rebuild times after a disk failure, since each rebuild now spans more disks in the wider RAID groups. As the larger disks are now in RAID6 groups, which can withstand two simultaneous disk failures per group, we felt this was an acceptable risk.


