20 September 2013

Building Hyper-V Clusters Step by Step - Part III - iSCSI Cluster Shared Volume (CSV)

In the first two parts of this series we covered building SMB 3.0 or file share storage clusters. We will now change things up a bit and switch to block storage - in this case iSCSI, with Windows Server 2012 R2 acting as the target.

Design Goals
  • Two node Hyper-V cluster
  • Using iSCSI as shared storage
  • Using a single Cluster Shared Volume between 2 nodes
  • Using only a single NIC per server
  • Using DHCP
  • No SCVMM

What we are building this time round is a failover cluster for the Hyper-V tier. This means that one of the Hyper-V hosts can fail and things will carry on working. It is important to note that this configuration still only provides fault tolerance for the compute tier and not the storage tier.

Servers
The same basic configuration as the previous labs

Cluster nodes HV01 and HV02
iSCSI target and management server HV03

Build Process
As before, all installation tasks are performed from HV03 unless stated otherwise. Again we will be using PowerShell where it makes sense and saves time and effort.


  1. Configure the iSCSI target server
  2. Configure nodes with roles
  3. Configure storage
  4. Create the cluster
  5. Configure CSV
  6. Create Virtual Switches
  7. Create fail over VMs


Step 1 - Configure the iSCSI target server
Install the required services
The iSCSI initiator is installed by default but not the target, so we will have to install it manually

Install-WindowsFeature FS-iSCSITarget-Server

Enable the firewall rules
Installing the iSCSI target creates the firewall rules, but strangely they are not enabled for you. The following enables the required rules:

Enable-NetFirewallRule -DisplayGroup "iSCSI Service", "ISCSI Target Group"
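
To double-check that the rules really did flip to Enabled, you can list them (the group names are simply the ones used in the command above):

Get-NetFirewallRule -DisplayGroup "iSCSI Service", "ISCSI Target Group" | Select-Object DisplayName, Enabled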

Configure iSCSI volume
iSCSI volumes in Windows Server are in actual fact virtual disks, in this case VHDX files. As such you can create iSCSI volumes just about anywhere; all you have to do is specify a folder where you want to create them. The easiest place to configure this is from Server Manager (a PowerShell alternative is sketched after the wizard steps below).

  • Open Server manager
  • Select File and Storage Services
  • Select iSCSI

Creating a quorum disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as Quorum
  • Specify a disk size - since this is purely for quorum it can be tiny - 1GB
  • Leave the defaults for Fixed Size
  • Next
  • Select New iSCSI Target
  • Name hyperv-cluster
  • Next
  • Add the initiators
  • Click the Add button
  • Query the initiator computer by browsing HV01's AD object
  • Repeat for HV02
  • Now that you have two iSCSI initiators listed, click Next
  • Leave off authentication
  • Finish the wizard



Create the VMStorage disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as VMStorage
  • Specify the size
  • Select the iSCSI target we created earlier (hyperv-cluster)
  • Finish the wizard
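
If you would rather not click through Server Manager, the same disks and target can also be created with the iSCSI Target cmdlets. This is only a sketch - the D:\iSCSIVirtualDisks folder, the example sizes and the initiator IDs are assumptions you would adjust for your own lab:

# Create the two virtual disks (folder and sizes are examples)
New-IscsiVirtualDisk -Path D:\iSCSIVirtualDisks\Quorum.vhdx -SizeBytes 1GB
New-IscsiVirtualDisk -Path D:\iSCSIVirtualDisks\VMStorage.vhdx -SizeBytes 100GB

# Create the target and allow both nodes to connect, identified here by DNS name
New-IscsiServerTarget -TargetName hyperv-cluster -InitiatorIds "DNSName:HV01.yourdomain.local", "DNSName:HV02.yourdomain.local"

# Map both disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName hyperv-cluster -Path D:\iSCSIVirtualDisks\Quorum.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName hyperv-cluster -Path D:\iSCSIVirtualDisks\VMStorage.vhdx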

Step 2 - Prepare the Cluster node roles
Install the required roles and features

Invoke-Command -ComputerName HV01, HV02 {Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering, Failover-Clustering, Hyper-V -Restart}

Once the servers have rebooted you can continue the build
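
A quick optional sanity check from HV03 confirms the roles really are present before moving on:

Invoke-Command -ComputerName HV01, HV02 {Get-WindowsFeature Hyper-V, Failover-Clustering | Where-Object Installed}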

Step 3 - Connecting the Nodes to the storage
The nodes use the iSCSI initiator to connect to the storage. This is installed by default and only needs to be configured.

Node 1
Node one will connect and do the initial disk prep work too.

Enter-PSSession HV01

Set the iSCSI initiator service to start automatically, then start it

Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

Connect to the target and register the session as a favorite so it will persist after reboots

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession

Configuring the disks
At this point your block storage is connected, but it is still RAW: you need to initialize the disk, create a partition and format the volume. This should be done for each disk.

get-disk
Initialize-Disk -Number 1 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter Q -UseMaximumSize | Format-Volume
Initialize-Disk -Number 2 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter V -UseMaximumSize | Format-Volume
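
Optionally, confirm that both volumes came up with the expected drive letters before leaving the session:

Get-Volume -DriveLetter Q, V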

Close off the remote PowerShell session

Exit-PSSession

Node 2
The second cluster node now also needs to be connected to the same storage.

Enter-PSSession HV02

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession
get-disk

You should now see the iSCSI disks; they will show as Offline on this node.

Close off the remote PowerShell session

Exit-PSSession

Step 4 - Create the Cluster
By now everything has been prepped and is ready. If the sequence is right the cluster will be created and will automatically grab the Q drive for the quorum, and attach the second disk to the cluster.
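
Optionally, you can run the cluster validation tests first. It is not required in a lab, but it will flag storage or network problems before you commit:

Test-Cluster -Node HV01, HV02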

New-Cluster -Name HVC01 -Node HV01, HV02
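
To confirm that the quorum and the remaining disk landed where expected, you can check the cluster from HV03 (the resource names in the output will vary):

Get-ClusterQuorum -Cluster HVC01
Get-ClusterResource -Cluster HVC01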

Since we are building Hyper-V we can make use of Cluster Shared Volumes. To enable this we need to configure the cluster to mark our VM storage disk as a CSV

Add-ClusterSharedVolume -Cluster HVC01 -Name "Cluster Disk 2"
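
A quick check shows the new CSV and its mount point under C:\ClusterStorage:

Get-ClusterSharedVolume -Cluster HVC01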



Step 5 - Configure Virtual Switches
The nodes need to have identical configurations, including the virtual switches.  This build is very simple from a network perspective.  Each node has a single network adapter.  We will need to create the virtual switch and allow access to the management OS

Start a PSSession to each node and follow these steps

Enter-PSSession HV01
Get-NetAdapter

Identify the adapter name you want to use (in this case Ethernet 3)

New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true 
Exit-PSSession
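
If the adapter name happens to be identical on both nodes, you could create the switch on both in one hit from HV03 instead of hopping between sessions. This assumes "Ethernet 3" really is the right adapter name on each node:

Invoke-Command -ComputerName HV01, HV02 {New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS $true}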

Step 6 - Creating Virtual machines
Since the virtual machines are now cluster resources they need to be created from within Failover Cluster Manager (a PowerShell alternative is sketched after the steps below)

  • Select Roles
  • Virtual machines - New Virtual machine
  • Select a node
  • Specify the virtual machine name
  • Select "Store the virtual machine in a different location" and set the path to C:\ClusterStorage\Volume1\
  • Complete the wizard as per normal
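
The same result can be had with PowerShell: create the VM on a node with its files on the CSV, then add it as a clustered role. This is only a sketch; the VM name, memory and disk size are example values:

New-VM -ComputerName HV01 -Name VM01 -MemoryStartupBytes 1GB -Path C:\ClusterStorage\Volume1 -NewVHDPath C:\ClusterStorage\Volume1\VM01\VM01.vhdx -NewVHDSizeBytes 40GB -SwitchName External
Add-ClusterVirtualMachineRole -Cluster HVC01 -VMName VM01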


Conclusion
Building a failover cluster with iSCSI storage is only marginally harder than using SMB 3.0. The only significant difference is the storage; once you get the hang of that there is no real difference here, especially with this very simple deployment.
