18 October 2013

Building Hyper-V Clusters Step by Step - Part VI - Virtual SAN with StarWind Native SAN for Hyper-V

In all the previous parts, the one thing that remained constant, regardless of how it was presented, was shared storage.  This means storage that was "outside" of the Hyper-V host.  StarWind Native SAN for Hyper-V allows you to create virtual shared storage using the local storage on the hosts.

Design Goals
Two-node Hyper-V cluster
Using no additional external shared storage
Using a Cluster Shared Volume between the two nodes


Step 1  - Configure Servers
Both servers are identical.  They have the Hyper-V role and the Failover Clustering feature installed; a PowerShell sketch for this follows the network list below.
HYPERV01
HYPERV02

Each server has three NICs (although this is possible with two)
NIC 1 is the normal network (10.0.0.0/24)

The next two are essentially server-to-server connections.
NIC 2 is used for SAN heartbeat (192.168.254.0/24)
NIC 3 is used for SAN replication (192.168.255.0/24) (must be at least 1 Gb/s)
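
If you would rather script this part, something along these lines should do it in PowerShell.  The interface aliases NIC2 and NIC3 are assumptions - substitute whatever Get-NetAdapter reports on your hosts.

  # Run on both HYPERV01 and HYPERV02.
  # Install the Hyper-V role and the Failover Clustering feature, then reboot.
  Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

  # Assign the SAN heartbeat and replication addresses.
  # Shown for HYPERV01; use the .2 addresses on HYPERV02.
  New-NetIPAddress -InterfaceAlias "NIC2" -IPAddress 192.168.254.1 -PrefixLength 24
  New-NetIPAddress -InterfaceAlias "NIC3" -IPAddress 192.168.255.1 -PrefixLength 24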


Step 2 - Get and install StarWind Native SAN
You will have to get the bits from http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition.  There is a free edition and a paid-for one that provides additional features such as three nodes and unlimited storage.

Installing is a straightforward, accept-all-the-defaults process.  This needs to be installed on both of the Hyper-V hosts that will participate in the virtual SAN.

Step 3 - Configure HYPERV01
Open the StarWind Management Console

  • Right-click the StarWind Servers node and select Add StarWind Server
  • Specify HYPERV02 as the host and click OK
  • Once it has been added select it and click Connect
  • Both nodes should now be online

If either node shows as offline, resolve the connectivity problem (name resolution, firewall, or the StarWind service not running) before continuing.

Step 4 - Add device
Here we are going to create the virtual disk that will span both nodes.

  • Select HYPERV01 and click Add Device
  • Select High Availability Device
  • Specify HYPERV02 as the partner node
  • Specify the Alias
  • Select - Create HA device from the virtual disk
  • Specify the filename and location for the virtual disk image
  • Check Create new
  • Specify the Size
  • Configure the same on the second node
  • Select the networks to be used (the heartbeat and replication networks defined earlier)
  • Select Write-back caching
  • Select Clear Virtual disks
  • Finish the wizard



Once the wizard has finished, the disks will be created on both servers and the initial sync process will start.  This is a good time to carry on creating additional devices; you want at least two: one for the quorum and one for the CSV.  While you are busy with this, you will notice that a new target is created for each device on each node.



Step 5 - Attach each host to the virtual storage
The process is the same on each server, and at this stage it is pretty much the same old iSCSI process we have used in the previous builds.  A PowerShell sketch follows the HYPERV01 steps below.

HYPERV01

  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify 127.0.0.1 as the target portal
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • Select the StarWind iSCSI disk device
  • Bring it online and create a volume
  • Repeat for all the disks
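
For reference, here is roughly the same thing in PowerShell.  Run the bring-online and format part on HYPERV01 only; on HYPERV02 you just connect the targets.

  # Make sure the Microsoft iSCSI initiator service is running.
  Set-Service -Name MSiSCSI -StartupType Automatic
  Start-Service -Name MSiSCSI

  # Point the initiator at the local StarWind target portal and connect all targets.
  New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
  Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

  # HYPERV01 only: bring the iSCSI disks online, initialise and format them.
  Get-Disk | Where-Object { $_.BusType -eq "iSCSI" } | ForEach-Object {
      Set-Disk -Number $_.Number -IsOffline $false
      Set-Disk -Number $_.Number -IsReadOnly $false
      Initialize-Disk -Number $_.Number -PartitionStyle GPT
      New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
          Format-Volume -FileSystem NTFS -Confirm:$false
  }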


HYPERV02

  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify 127.0.0.1 as the target portal
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • You should see the iSCSI disks as offline and read-only


At this stage I want to point out something important.  If you look at the iSCSI targets you will see that they are different on each host.  In all the previous shared iSCSI examples they were the same.  This is because each host connects directly to its own copy of the HA disk.



Step 6 - Create the fail-over cluster
Server Manager - Tools - Failover Cluster Manager

  • Create Cluster
  • Specify HYPERV01 and HYPERV02
  • Run validation
  • Specify a Cluster Name
  • Make sure "Add all eligible storage to the cluster" is selected
  • Create the cluster (a scripted alternative is sketched below)
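
If you prefer PowerShell for this step, validation and cluster creation look roughly like this.  The cluster name CLUSTER01 and the static address 10.0.0.10 are assumptions - use your own.  New-Cluster adds all eligible storage by default, matching the checkbox above.

  # Validate the configuration first, then create the cluster.
  Test-Cluster -Node HYPERV01, HYPERV02
  New-Cluster -Name CLUSTER01 -Node HYPERV01, HYPERV02 -StaticAddress 10.0.0.10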

By default the smallest shared disk is used as the quorum disk - you can create additional devices and add them as additional storage.

From here on in it is business as usual.  Additional disks can be added to the cluster and you can choose to assign them as Cluster Shared Volumes, as sketched below.
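
A minimal sketch of doing that from PowerShell, assuming the new disk surfaced as "Cluster Disk 2" (check the Get-ClusterResource output for the real name):

  # List the cluster resources to find the disk name, then promote it to a CSV.
  Get-ClusterResource
  Add-ClusterSharedVolume -Name "Cluster Disk 2"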

In the background all cluster disk IO is now being replicated on both nodes.  This has obvious performance considerations; fortunately the StarWind Management Console provides some handy performance counters and graphs to help you monitor this.


Step 7 - Testing
Create a few virtual machines and test your normal failover from one host to the other.  This should happen as per normal, without any problems or issues.
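
A quick way to drive that test from PowerShell; the VM role name TESTVM01 is an assumption - Get-ClusterGroup shows the actual names.

  # Live migrate the test VM from its current node over to HYPERV02.
  Move-ClusterVirtualMachineRole -Name TESTVM01 -Node HYPERV02 -MigrationType Live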

Next up you would want to simulate a sudden failure of one of the nodes with a very scientific pull-the-power-plug test.

The normal failover cluster process will step in and move all the resources to the sole active node.  Once the second node is restarted, the HA virtual disks will automatically start the sync process to bring both copies up to the current state.  You can monitor this in the Management Console.  While the sync is in progress, the recovering node's iSCSI target will be in the "Reconnecting" state until the disks are in sync.


Conclusion
Fail-over clustering does provide additional functionality over the normal shared-nothing live migration and replication capabilities of two standalone Hyper-V hosts.  Storage virtualization like we have here allows you to get to a full-blown fail-over cluster.  As long as you understand all the implications of this concept, this is a great solution for a SAN-free environment.
