Wednesday, May 8, 2013

Expanding System Drive on a Multiple Partition Virtual Disk

One of the most common requests I receive as a VMware admin is to expand the disk space on a VM. This is usually a quick and simple exercise that takes no more than five minutes. However, last week I received a request that took quite a bit longer. Here are the details of the situation:

  • Expand the system drive on a Windows 2003 server
  • The virtual disk contains multiple partitions, as this server was previously a physical server that was P2V'd
  • Changing the virtual hardware and expanding the virtual disk only adds unallocated space to the end of the disk, so the system drive cannot be extended because the unallocated space is not directly after the system partition

In order to expand the system drive, you will need partition editing software. I used GParted, a free tool, which can be obtained here.

The first step in expanding the system disk is to expand the virtual disk within VMware. Next, take a snapshot of the VM so that we can roll back if anything goes wrong. Once that is done, mount the GParted ISO on the VM and boot into the live environment (accept all the defaults). You should now see something like this:
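If you prefer to script the expand-and-snapshot step, here is a minimal PowerCLI sketch. The vCenter name, VM name, disk name and new size are placeholders, and older PowerCLI releases expose -CapacityKB rather than -CapacityGB, so adjust to your environment:

    # Connect to vCenter (server name is a placeholder)
    Connect-VIServer -Server vcenter.example.com

    # Grow the first virtual disk to the new size (80 GB here as an example)
    $vm = Get-VM -Name "WIN2003-APP01"
    Get-HardDisk -VM $vm | Where-Object { $_.Name -eq "Hard disk 1" } |
        Set-HardDisk -CapacityGB 80 -Confirm:$false

    # Take a snapshot so we can roll back if the partition work goes wrong
    New-Snapshot -VM $vm -Name "Before partition move"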


Please note that the system drive is /dev/sda1 and the gray area is the unallocated space. We need to move the unallocated space so that it sits directly after the system partition. In the GUI, click /dev/sda3 and then click "Resize/Move". Drag the partition all the way to the right, then repeat the process for /dev/sda2. You should then have something like this:


Hit Apply and the operations will start. The partitions will be copied to their new locations, which will take some time depending on their size.

Once the operations have finished, restart the VM and let it boot into Windows. A chkdsk will likely run on boot to verify the file systems after the partitions have been moved; let it complete.
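If the check does not start on its own, you can kick one off manually from a command prompt (the drive letter is just an example):

    chkdsk C: /f

Because the system volume is in use, chkdsk will offer to schedule the check for the next restart; answer Y and reboot.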


Once Windows is up and running, you should now be able to extend the system partition into the adjacent unallocated space.
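On newer Windows versions this can be done with diskpart; a rough sketch follows (the volume number will differ on your system). Note that the built-in diskpart on Windows Server 2003 generally refuses to extend the running boot volume, so on 2003 you may need a utility such as Dell's extpart, or to attach the disk to another VM and extend it from there:

    C:\> diskpart
    DISKPART> list volume
    DISKPART> select volume 0
    DISKPART> extend
    DISKPART> exit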



Increasing FAST Cache on EMC VNX5300

Adding FAST Cache to a VNX SAN is one of the cheapest, easiest and quickest ways to increase the performance of the storage system as a whole. The steps to upgrade the cache are painless and simple and are documented below.

Prior to updating the FAST cache, please take into consideration the following:

  • The maximum FAST Cache capacity varies by model. The table below indicates the maximum capacity and the permissible flash drive counts.


System Model   Maximum FAST Cache Capacity   Recommended Minimum Drive Count   Permissible Flash Drive Count for FAST Cache
VNX5100        100 GB                        2                                 2
VNX5300        500 GB                        4                                 2*, 4, 6, 8 or 10 (100 GB EFD); or 2* or 4 (200 GB EFD)
VNX5500        1000 GB                       4                                 All VNX5300 configurations, plus: 12, 14, 16, 18 or 20 (100 GB EFD); or 6, 8 or 10 (200 GB EFD)
VNX5700        1500 GB                       8                                 All VNX5500 configurations, plus: 22, 24, 26, 28 or 30 (100 GB EFD); or 12 or 14 (200 GB EFD)
VNX7500        2100 GB                       8                                 All VNX5700 configurations, plus: 32, 34, 36, 38, 40 or 42 (100 GB EFD); or 16, 18 or 20 (200 GB EFD)


* Not recommended

  • FAST Cache can only be configured as RAID 1
  • all drives used for FAST Cache should be of the same capacity
  • try to spread the FAST Cache disks across different buses

With those considerations in mind, we upgraded our VNX5300 from 200 GB to 400 GB of FAST Cache by adding an additional two 200 GB EFDs (FAST Cache is mirrored, so four 200 GB EFDs yield 400 GB of usable cache). In order to upgrade the cache, you must actually destroy the current cache and then recreate it.

First, log in to Unisphere and open the System Properties. Under the "FAST Cache" tab, hit destroy.
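The same operation can also be run from the CLI with naviseccli; a rough sketch, assuming the Secure CLI is installed and <SP_IP> stands in for one of the array's storage processor addresses (verify the exact syntax against your block OE release):

    naviseccli -h <SP_IP> cache -fast -destroy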


The VNX will then flush all the cached data to disk. You will see the following warning:



Click Yes and proceed. It will take a while for all of the data to be flushed out. The FAST Cache tab will now show the status of the destroy operation, and you will notice the "Percentage Dirty Pages (%)" counters begin to decrease.
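The flush can also be watched from the CLI, with the same assumptions as the naviseccli sketch above; the command below should report the FAST Cache state and the progress of the current operation:

    naviseccli -h <SP_IP> cache -fast -info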


It should look like this when it is done.


Next, hit Create and manually select the EFDs to be used for the FAST Cache.
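A rough CLI equivalent is sketched below; the disk IDs (bus_enclosure_slot) are placeholders for your four EFDs, spread across two buses per the earlier recommendation, and the flags should be checked against the help output for your release:

    naviseccli -h <SP_IP> cache -fast -create -disks 0_0_4 0_0_5 1_0_4 1_0_5 -mode rw -rtype r_1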




The cache initialization process will now begin. As with the destroy operation, its status is shown in the FAST Cache tab.


After a while, the FAST Cache will be initialized. You may want to check your RAID groups to ensure that FAST Cache is still enabled for the LUNs residing in them.