18 October 2013

Building Hyper-V Clusters Step by Step - Part VI - Virtual SAN with StarWind Native SAN for Hyper-V

In all the previous parts, the one thing that remained constant, regardless of how it was presented, was shared storage.  This means storage that was "outside" of the Hyper-V host.  StarWind Native SAN for Hyper-V allows you to create virtual shared storage using the local storage on the hosts.

Design Goals
  • Two node Hyper-V cluster
  • Using no additional external shared storage
  • Using a Cluster Shared Volume between the 2 nodes


Step 1  - Configure Servers
Both servers are identical.  They have the Hyper-V role and Failover Clustering features installed.
HYPERV01
HYPERV02

Each server has three NICs (although this is also possible with two).
NIC 1 is used for the normal network (10.0.0.0/24)

These next two are essentially server-to-server connections.
NIC 2 is used for SAN Heartbeat (192.168.254.0/24)
NIC 3 is used for SAN Replication (192.168.255.0/24) (must be at least 1Gb/s)
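
If you prefer to script this part, here is a rough PowerShell sketch.  The adapter names (SAN-Heartbeat and SAN-Replication) and the .1 host addresses are my own assumptions; substitute whatever your NICs and nodes actually use.

  # Run on both hosts (adjust the last octet per node, e.g. .1 for HYPERV01 and .2 for HYPERV02)
  Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

  # Assumed adapter names - rename or substitute your own
  New-NetIPAddress -InterfaceAlias "SAN-Heartbeat"   -IPAddress 192.168.254.1 -PrefixLength 24
  New-NetIPAddress -InterfaceAlias "SAN-Replication" -IPAddress 192.168.255.1 -PrefixLength 24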


Step 2 - Get and install StarWind Native SAN
You will have to get the bits from http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition.  There is a free edition and a paid-for one that provides additional features such as three-node support and unlimited storage.

Installation is a straightforward, accept-all-the-defaults process.  It needs to be installed on both of the Hyper-V hosts that will participate in the virtual SAN.

Step 3 - Configure HYPERV01
Open the StarWind Management Console

  • Right Click the StarWind Servers node and select Add StarWind server
  • Specify HYPERV02 as the host and click OK
  • Once it has been added select it and click Connect
  • Both nodes should now be online

If either node shows as offline, fix the connectivity before continuing.

Step 4 - Add device
Here we are going to create the virtual disk that will span both nodes.

  • Select HYPERV01 and click Add Device
  • Select High Availability Device
  • Specify the partner node HYPERV02
  • Specify the Alias
  • Select - Create HA device from the virtual disk
  • Specify the filename and location for the virtual disk image
  • Check Create new
  • Specify the Size
  • Configure the same on the second node
  • Select the networks to be used, as defined earlier (heartbeat and replication)
  • Select Write-back caching
  • Select Clear Virtual disks
  • Finish the wizard



Once the wizard has finished, the disks will be created on both servers and the initial sync process will start.  This is a good time to carry on creating additional devices; you want at least two: one for the quorum and one for the CSV.  While you are busy with this you will notice that a new iSCSI target is created for each device on each node.



Step 5 - Attach each host to the virtual storage
The process is the same on each server, and at this stage it is pretty much the same old iSCSI process we have used in the previous builds (a PowerShell equivalent is sketched after the lists below).

HYPERV01

  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify 127.0.0.1 as the target portal
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • Select the iSCSI Starwind SCSI Disk Device
  • Bring Online and Create a volume
  • Repeat for all the disks


HYPERV02

  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify 127.0.0.1 as the target portal
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • You should see the iSCSI disk as offline and read only
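
If you want to script the initiator side instead, a minimal sketch using the built-in iSCSI and Storage cmdlets is below.  It assumes the StarWind disks are the only raw iSCSI disks on the host; run the formatting section on the first node only.

  # Make sure the iSCSI initiator service is running
  Set-Service msiscsi -StartupType Automatic
  Start-Service msiscsi

  # Point the initiator at the local StarWind target and connect every discovered target
  New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
  Get-IscsiTarget | Where-Object { -not $_.IsConnected } | Connect-IscsiTarget -IsPersistent $true

  # First node only: bring the new disks online, initialise and format them
  Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" } | ForEach-Object {
      Set-Disk -Number $_.Number -IsOffline $false
      Set-Disk -Number $_.Number -IsReadOnly $false
      Initialize-Disk -Number $_.Number -PartitionStyle GPT
      New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
          Format-Volume -FileSystem NTFS -Confirm:$false
  }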


At this stage I want to point out something important.  If you look at the iSCSI targets you will see that they are different.  In all the previous shared iSCSI examples they were the same.  You can see how each host looks at its own copy of the HA disk directly.



Step 6 - Create the fail-over cluster
Server Manager - Tools - Failover Cluster Manager

  • Create Cluster
  • Specify HYPERV01 and HYPERV02
  • Run validation
  • Specify a Cluster Name
  • Make sure "Add all eligible storage to the cluster is selected"
  • Create the cluster

By default the smallest shared disk is used as the quorum witness; you can create additional devices and add them as additional storage.

From here on in it is business as usual.  Additional disks can be added to the cluster and you can select to assign them as Cluster Shared Volumes.
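
For reference, the equivalent cluster creation and CSV conversion can also be scripted; the cluster name, IP address and the "Cluster Disk 2" resource name below are placeholders for whatever your environment actually ends up with.

  # Validate and create the cluster from either node
  Test-Cluster -Node HYPERV01, HYPERV02
  New-Cluster -Name HVCLUSTER01 -Node HYPERV01, HYPERV02 -StaticAddress 10.0.0.50

  # Check which disk was picked up as the quorum witness
  Get-ClusterQuorum

  # Add any remaining eligible disks and convert the data disk to a Cluster Shared Volume
  Get-ClusterAvailableDisk | Add-ClusterDisk
  Add-ClusterSharedVolume -Name "Cluster Disk 2"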

In the background all cluster disk IO is now being replicated to both nodes.  This has obvious performance considerations; fortunately the StarWind Management Console provides some handy performance counters and graphs to help you monitor this.


Step 7 - Testing
Create a few virtual machines and test your normal failover from one host to the other.  This should happen as normal, without any problems.
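
If you prefer to drive the test from PowerShell, a quick sketch is below; the VM name, memory size and CSV path are only examples (C:\ClusterStorage\Volume1 is the default mount point, but check yours).

  # Create a small test VM on the CSV (no VHD attached - this is purely a failover test object)
  New-VM -Name "TestVM1" -MemoryStartupBytes 512MB -Path "C:\ClusterStorage\Volume1"

  # Make it a clustered role, then live migrate it to the other node
  Add-ClusterVirtualMachineRole -VirtualMachine "TestVM1"
  Move-ClusterVirtualMachineRole -Name "TestVM1" -Node HYPERV02 -MigrationType Live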

Next up you would want to simulate a sudden failure of one of the nodes with the very scientific pull-the-power-plug test.

The normal failover cluster process will step in and move all the resources to the sole active node.  Once the second node is restarted, the HA virtual disks will automatically start the sync process, bringing both copies back up to the current state.  You can monitor this in the Management Console.  While the sync is in progress the recovering node's iSCSI target will be in the "Reconnecting" state until the disks are in sync.


Conclusion
Failover clustering does provide additional functionality over the normal shared-nothing live migration and replication capabilities of two standalone Hyper-V hosts.  Storage virtualization like we have here allows you to get to a full-blown failover cluster.  As long as you understand all the implications of this concept, it is a great solution for a SAN-free environment.

02 October 2013

Building Hyper-V Clusters Step by Step - Part V - iSCSI Dell PowerVault MD3200i

In the previous articles in the series we stepped through all the different steps in creating the clusters from scratch.  This article will focus primarily on the steps required to configure the PowerVault and SCVMM.  Because of the large number of steps I have kept the article as brief as possible, so it is more of a rough guide.

Design Goals

  • Two node Hyper-V cluster
  • Using iSCSI as shared storage on a Dell PowerVault target
  • Using a single Cluster Shared Volume between 2 nodes
  • Using only a single NIC per server
  • Using DHCP
  • Import the cluster into SCVMM


Step 1 - Configure Storage Disk
Open the PowerVault Modular Disk Storage Manager

  • Select Logical Tab
  • Right Click Unconfigured Capacity
  • Select Create Disk Group
  • Next
  • Give it a meaningful name (30 char limit)
  • Next
  • Select the RAID Level
  • Select a Capacity
  • Finish
  • You will be prompted to create a virtual disk using the group - Select Yes
  • On the Introduction Screen - Next
  • Specify disk capacity and a disk name
  • (Here I create a small disk for the quorum and additional disks for VM data)
  • Finish
  • You will be prompted to create another virtual disk - Repeat for all the volumes you want to create
  • You will be reminded to create mappings
  • Click OK

Step 2 - Configure iSCSI target
Next up we need to configure the hosts and host groups.  For this we will need the iSCSI initiator names from the Hyper-V servers.


  • On each server
  • Open the iSCSI Initiator
  • Select the Configuration tab
  • Copy the value of the Initiator Name field (or use the PowerShell one-liner below)
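
If you would rather not copy it out of the dialog, the same IQN can be read with a PowerShell one-liner (purely a convenience, not a required step):

  # Returns the local iSCSI initiator name (IQN) of this host
  (Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "iSCSI" }).NodeAddress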


  • In the PowerVault Modular Disk Storage Manager
  • Select the Mappings Tab
  • Right Click Storage Array
  • Define Host
  • Host Name can be your actual Hyper-V host name
  • Select "Add by creating a new host port identifier" 
  • Paste the iSCSI initiator name you copied in the previous section
  • The User Label cannot be the server host name again
  • Click Add
  • Click Next
  • Select Windows from the Host type list
  • Next
  • Select "Yes - this host will share access to the same virtual disks with other hosts"
  • Enter the name of the hosts group (cluster name)
  • Next Finish
  • Repeat the process for the second host



  • Right Click the Host group
  • Define > Storage Partitioning
  • Next
  • Select the correct host group
  • Select and add the relevant disks
  • Finish


Step 3 - Configure a host to connect to storage
The first host in the cluster will be used to prepare the volumes (a scripted alternative follows these steps).  From Server Manager - 

  • iSCSI Initiator
  • Discovery
  • Discover Portal
  • Specify IP of the PowerVault
  • Advanced
  • Local Adapter select Microsoft iSCSI Initiator
  • Initiator IP select the correct one
  • OK
  • OK
  • Select Targets Tab
  • Select the Target - Connect
  • Select Volumes and Devices - Auto Configure
  • Ok

  • Server Manager
  • File And Storage Services
  • Select Disks
  • Bring online
  • Right Click  - new Volume
  • Complete the wizard 
NOTE: Formatting large volumes will take a looooooong time - roughly half an hour for 1TB.

Complete this for all of the volumes you want to add to the cluster.
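
If you want to script this connection, a rough sketch is below; the portal and initiator addresses are placeholders for your PowerVault's iSCSI IP and the host NIC the traffic should use.

  # Discover the PowerVault portal, binding the session to a specific local NIC
  New-IscsiTargetPortal -TargetPortalAddress 192.168.130.101 -InitiatorPortalAddress 10.0.0.11

  # Connect to the discovered target(s) and make the connection persistent across reboots
  Get-IscsiTarget | Where-Object { -not $_.IsConnected } | Connect-IscsiTarget -IsPersistent $true

  # The disks can then be brought online and formatted from File and Storage Services as described above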


Step 4 - Configure the second host to connect to storage
On the second host configure and connect the iSCSI initiator.

At this point both nodes should be able to see the disks and volumes, but the disks will only be online on one of the hosts.

Step 5 - Create the cluster

  • Start Fail-over cluster manager
  • Create cluster
  • Add nodes
  • Provide Cluster Name
  • Finish The Wizard


You should now have a cluster that is up and running.

Step 6 - Add the storage to SCVMM
Next up we need to import the cluster into VMM.  However, to do this correctly we first need to provision the storage in VMM, and for that we need a storage provider for the PowerVault.

You have to make use of the vCenter Plugin.

Note: It only runs on 2008 R2  - not 2012

You can follow these step http://vinfrastructure.it/en/2012/05/manage-a-dell-powervault-array-with-system-center-vmm-2012/


  • In SCVMM
  • Select Fabric - Storage - Providers - Add Storage Devices
  • SAN and NAS devices discovered and managed by a SMI-S provider
  • Protocol SMI-S CIMXML
  • The IP address is the server hosting the vCenter Plugin
  • Specify an admin account for that server

  • You should now see that the PowerVault is listed
  • Select the Storage Devices
  • Here you will see the Storage groups as defined on the PowerVault
  • Create a classification and add the relevant group


Select Fabric - Storage - Classification and pools
Expand the new Classification you created - you should now see the LUNs that are mapped to the storage group


Step 7 - Import the cluster
Next up, import the cluster as Hyper-V hosts.

  • Select the properties of the host
  • Select available storage - here you should see the cluster disks
  • Select them and Convert to CSV


The setup is now complete.


Please note the following:
This is purely a lab configuration; real-world performance of this single-NIC configuration will potentially be unpredictable, with periods of lag.  When building this for a production environment it is suggested that you use dedicated NICs for storage, enable MPIO, and add redundant links to the storage.
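
As a starting point for that production hardening, MPIO can be enabled with the built-in cmdlets below.  This only installs the feature and claims iSCSI devices for the Microsoft DSM; the extra NICs and paths still have to be added on the storage side.

  # Install the Multipath I/O feature and claim iSCSI devices with the Microsoft DSM
  Install-WindowsFeature Multipath-IO
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # A reboot is typically required before MPIO starts claiming the iSCSI paths
  Restart-Computer -Confirm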