30 September 2013

Building Hyper-V Clusters Step by Step - Part IV - iSCSI Cluster Shared Volume (CSV) SCVMM

In Part III of this series we built a block storage Hyper-V failover cluster manually.  When you are using SCVMM it will automate a lot of those steps for you, but for this to work you need to provision your storage through VMM.  Because we are using Server 2012 R2 as the storage target, SCVMM 2012 R2 can perform everything directly.

Design Goals
  • Two node Hyper-V cluster
  • Using iSCSI as shared storage
  • Using a Cluster Shared Volume between the two nodes
  • Using only a single NIC per server
  • Using DHCP
  • Use SCVMM to configure everything
Servers
The same basic configuration as the previous labs

Cluster nodes HV01 and HV02
HV03 is the management server, iSCSI target and SCVMM Server

Build Process

Step 1 Add a storage provider

  • Fabric - Storage - Providers - Add Storage Devices
  • SAN and NAS devices discovered and managed by an SMI-S provider
  • Protocol: SMI-S WMI
  • FQDN of HV03 
  • Specify a Run As account
  • Discovery should succeed and you should see an item in the list
  • Create a classification (2012 R2 iSCSI)
  • Select two drives to use and choose your classification
  • Finish the Wizard




Step 2 - Create logical Units
This is where you create iSCSI drives for the nodes to use.

  • Fabric - Storage - Classification and Pools
  • Select the Classification created earlier
  • Click Create logical unit
  • Create one small logical unit (1GB - for the Quorum)
  • Create a bigger logical unit ( For the CSV)

At this stage it should look something like this



Step 3 - Create a host group
  • Fabric - Servers - All hosts - Create Host Group
  • View the properties of the host group
  • Select storage 
  • Allocate Storage Pools - Select and Add both
  • Allocate Logical Units - Select and add all three
  • Close properties
  • Add the Hyper-V hosts
  • Add the individual hosts as per normal
  • If you check the properties of the hosts look at storage
  • iSCSI Array should have the target specified 
Step 4 - Create the cluster
  • Fabric - Servers - All Hosts
  • Create - Hyper-V Cluster
  • Specify Name
  • Specify Run As account
  • Add the two nodes prepared earlier
At the storage screen you will notice that one of the disks is grayed out.  This one will be used for the Quorum; the wizard selects the smallest disk for this.




  • For the other partition check all three boxes (Quick Format - Force Format - CSV)
  • Finish the wizard

Check progress on the job, as it can run for a good few minutes.


Step 5  - Check Storage on the cluster
  • Fabric - Servers - All Hosts - Your Cluster Group
  • Open the properties of the cluster
  • Select Shared volumes
  • You should see the CSV here, if not then check available storage, select the volume and click the Convert to CSV button



Step 6 - Create a HA VM
  • VMs and Services - Create Virtual Machine
  • Create New VM
  • Specify a name
  • Configure hardware - Make sure you select availability and check "Make this virtual machine highly available"
  • Specify the host group as a destination
  • Select either host
  • The virtual machine path will be C:\ClusterStorage\Volume1\ or similar - this refers to the CSV volume
  • Finish the wizard
Conclusion
Using SCVMM you can drive the entire deployment of a Hyper-V iSCSI failover cluster, as long as your storage plays along nicely.  If you manually check the iSCSI initiators and targets on the individual servers you will notice that the result is pretty much the same as when we completed lab III.

20 September 2013

Building Hyper-V Clusters Step by Step - Part III - iSCSI Cluster Shared Volume (CSV)

In the first two parts of this series we covered building SMB 3.0 (file share) storage clusters.  We will now change things up a bit and switch to block storage - in this case iSCSI, with the target being Windows Server 2012 R2.

Design Goals
  • Two node Hyper-V cluster
  • Using iSCSI as shared storage
  • Using a single Cluster Shared Volume between 2 nodes
  • Using only a single NIC per server
  • Using DHCP
  • No SCVMM

What we are building this time round is a failover cluster for the Hyper-V tier.  This means that one of the Hyper-V hosts can fail and things will carry on working.  It is important to note that this configuration still only provides fault tolerance for the compute tier, not the storage tier.

Servers
The same basic configuration as the previous labs

Cluster nodes HV01 and HV02
iSCSI target and management server HV03

Build Process
As before, all installation tasks are performed from HV03 unless stated otherwise.  Again we will be using PowerShell where it makes sense and saves time and effort.


  1. Configure the iSCSI target server
  2. Configure nodes with roles
  3. Configure storage
  4. Create the cluster
  5. Configure CSV
  6. Create Virtual Switches
  7. Create fail over VMs


Step 1 - Configure the iSCSI target server
Install the required services
The iSCSI initiator is installed by default but the target is not, so we will have to install it manually.

Install-WindowsFeature FS-iSCSITarget-Server

Enable the firewall rules
Installing the iSCSI target creates the firewall rules, but strangely they are not enabled for you.  This enables the required rules:

Enable-NetFirewallRule -DisplayGroup "iSCSI Service", "ISCSI Target Group"

Configure iSCSI volume
iSCSI volumes in Windows Server are in actual fact virtual disks, in this case VHDX files.  As such you can create iSCSI volumes just about anywhere; all you have to do is specify a folder where you want to create them.  The easiest place to configure this is from Server Manager.

  • Open Server manager
  • Select File and Storage Services
  • Select iSCSI

Creating a quorum disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as Quorum
  • Specify a disk size - since this is purely for quorum it can be tiny - 1GB
  • Leave the defaults for Fixed Size
  • Next
  • Select New iSCSI Target
  • Name hyperv-cluster
  • Next
  • Add the initiators
  • Click the Add button
  • Query the initiator computer by browsing HV01's AD object
  • Repeat for HV02
  • Now that you have two iSCSI  initiators listed click next
  • Leave off authentication
  • Finish the wizard



Create the VMStorage disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as VMStorage
  • Specify the size
  • Select the iSCSI target we created earlier (hyperv-cluster)
  • Finish the wizard
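
The Server Manager wizard steps above can also be scripted with the iSCSITarget cmdlets.  A minimal sketch, assuming the VHDX files live under D:\iSCSIVirtualDisks and the nodes resolve as hv01/hv02 in domain.local (paths, sizes and DNS names are illustrative):

```powershell
# Create the target and allow both cluster nodes to connect (run on HV03)
New-IscsiServerTarget -TargetName "hyperv-cluster" `
    -InitiatorIds "DNSName:hv01.domain.local", "DNSName:hv02.domain.local"

# Create the two virtual disks (VHDX files) and map them to the target
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\Quorum.vhdx" -SizeBytes 1GB
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\VMStorage.vhdx" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-cluster" -Path "D:\iSCSIVirtualDisks\Quorum.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-cluster" -Path "D:\iSCSIVirtualDisks\VMStorage.vhdx"
```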

Step 2 - Prepare the Cluster node roles
Install the required roles and features

Invoke-Command -computername HV01, HV02 {Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering, Failover-Clustering, hyper-v -Restart}

Once the servers have rebooted you can continue the build

Step 3 - Connecting the Nodes to the storage
The nodes use the iSCSI initiator to connect to the storage. This is installed by default and only needs to be configured.

Node 1
Node one will connect and do the initial disk prep work too.

enter-pssession hv01

Automatically start the iSCSI initiator service

Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

Connect to the target and register the session as a favorite so it will persist after reboots

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession

Configuring the disks
At this point your block storage is connected, but it is still raw: you need to initialize the disk, create a partition, and format the drive.  This should be done for each disk.

get-disk
Initialize-Disk -Number 1 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter Q -UseMaximumSize | Format-Volume
Initialize-Disk -Number 2 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter V -UseMaximumSize | Format-Volume

Close off the remote PowerShell session

Exit-PSSession

Node 2
The second cluster node now also needs to be connected to the same storage.

enter-pssession hv02

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession
get-disk

You should now see the iSCSI disks; they will be offline on this node.

Close off the remote PowerShell session

Exit-PSSession

Step 4 - Create the Cluster
By now everything has been prepped and is ready.  If the sequence is right, the cluster will be created and will automatically grab the Q drive for the quorum, then attach the second disk to the cluster.

New-Cluster -Name HVC01 -Node HV01, HV02

Since we are building Hyper-V we can make use of Cluster Shared Volumes.  To enable this we need to configure the cluster to mark our VM storage drive as a CSV.

Add-ClusterSharedVolume -Cluster HVC01 -Name "Cluster Disk 2"
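
To confirm the conversion you can list the cluster's shared volumes; a quick check, assuming the cluster name HVC01 used above:

```powershell
# The VM storage disk should now appear as a Cluster Shared Volume
Get-ClusterSharedVolume -Cluster HVC01
```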



Step 5 - Configure Virtual Switches
The nodes need to have identical configurations, including the virtual switches.  This build is very simple from a network perspective: each node has a single network adapter.  We will need to create the virtual switch and allow access to the management OS.

Start a PSSession to each node and follow these steps

Enter-PSSession HV01
Get-NetAdapter

Identify the adapter name you want to use (in this case Ethernet 3)

New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true 
Exit-PSSession

Step 6 - Creating Virtual machines
Since the virtual machines are now cluster resources, they need to be created from within Failover Cluster Manager.

  • Select Roles
  • Virtual machines - New Virtual machine
  • Select a node
  • Specify the server name
  • For "Store the machine in a different location" specify C:\ClusterStorage\Volume1\
  • Complete the wizard as per normal


Conclusion
Building a failover cluster with iSCSI storage is only marginally harder than using SMB 3.0.  The only significant difference is the storage; once you get the hang of that there is no real difference - especially with this very simple deployment.

18 September 2013

Building Hyper-V clusters Step by Step - Part II - The most basic added to SCVMM

In part one of this series we created a very basic Hyper-V cluster using SMB 3.0 as shared storage.  In this part we will cover the steps required to add this cluster to SCVMM.

See PART I for more background info

Design Goals
  • Two node Hyper-V cluster
  • Using only SMB 3.0 as shared storage
  • Using only single NIC per server
  • Using DHCP
  • Managed from SCVMM
Build Process
All of the steps can be performed directly from the System Center Virtual Machine Manager Console.

Step 1 Adding Storage
Here we will add the fileshare server as available storage for VMM to use
  • Select Fabric
  • Under Storage select Providers - Right Click - Add Storage Devices
  • Add Windows-based file server as managed storage device
  • Specify the FQDN (HV03) and a Run As account that is an administrator on the  File share host
  • Select relevant shares (From Part I we created a Witness and Storage share - add both)
  • Finish the wizard


Step 2 Adding the host nodes
Here the Hyper-V hosts and the configured cluster are added
  • Select Fabric
  • Create and select a new Folder under All Hosts
  • Right Click - Add Hyper-V hosts and Clusters
  • Windows Server computers in a trusted AD Domain
  • Specify a Run As account that is an administrator on the Hyper-V hosts
  • Specify a name of one of the nodes (HV01) - Next
  • It will discover the cluster with the nodes (HVC01, HV01 and HV02) - Select and click next
  • Finish the wizard
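
The same discovery can be done from the VMM PowerShell console.  A rough sketch, assuming a host group named "Hyper-V" and a Run As account named "Domain Admin" (both names are assumptions; the cmdlets are from the virtualmachinemanager module):

```powershell
# Add the existing failover cluster (and its nodes) under the host group
$HostGroup = Get-SCVMHostGroup -Name "Hyper-V"          # assumed host group name
$RunAs     = Get-SCRunAsAccount -Name "Domain Admin"    # assumed Run As account name
Add-SCVMHostCluster -Name "HVC01" -VMHostGroup $HostGroup -Credential $RunAs
```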

Step 3 Adding Storage to the Cluster
Before the storage can be used by the cluster it needs to be provisioned for it
  • Right Click the Cluster - Properties
  • Select File Share Storage - Add
  • Select \\HV03\StorageShare
  • OK
  • Specify Run As account
  • Finish The Wizard

Step 4 Verify the cluster
Before continuing you should check that everything is working up to this point
  • Select the cluster
  • Right Click - Validate Cluster  - This takes a few minutes
  • At this point the cluster should Succeed with a few warnings



Step 5 Create a Highly Available virtual machine
The process for creating a virtual machine is the same; the one key exception is that when you are specifying the hardware you can now scroll down to the Advanced section and select "Make this virtual machine highly available."
When you get to the Configure Settings screen you should see that the cluster file share is specified as the storage location.



Conclusion
If you want SCVMM to apply dynamic optimization and provide high availability, you need clusters.  As you can see from this article, it is fairly straightforward to create and implement a basic SMB Hyper-V cluster.

Building Hyper-V clusters Step by Step - Part I - The most basic

This series of articles will go through various ways to build Hyper-V clusters.  This can mean a lot of things, but the general idea is to have two or more Hyper-V host servers that can host highly available virtual machines.

Design Goals

  • Two node Hyper-V cluster
  • Using only SMB 3.0 as shared storage
  • Using only a single NIC per server
  • Using DHCP
  • No SCVMM




Servers
Cluster nodes: HV01 and HV02
Management and Fileshare Server HV03

All machines need to be joined to the same domain.

Build process
All build steps can and should be performed from the management server.  Personally I like to run everything from the PowerShell ISE.



All the PowerShell commands are formatted in Courier.

Step 1 Install required roles on the servers

The nodes need:

  • Hyper-V
  • Failover Clustering
  • Hyper-V Management Tools
  • Failover Clustering Management Tools


Invoke-Command -computername HV01, HV02 {Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering, Failover-Clustering, hyper-v -Restart}

The role installation will reboot the server and may take a few minutes.

The Management Server needs

  • Hyper-V Management Tools
  • Failover Clustering Management Tools


Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering

Step 2 Create virtual switches
The nodes need to have identical configurations, including the virtual switches.  This build is very simple from a network perspective: each node has a single network adapter.  We will need to create the virtual switch and allow access to the management OS.

Start a PSSession to each node and follow these steps

Enter-PSSession HV01
Get-NetAdapter

Identify the adapter name you want to use (in this case Ethernet 3)

New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true 
Exit-PSSession


Step 3 Create the cluster
Since we need the cluster's Active Directory object to provision the SMB shares, we need to perform this step first.

This will create a cluster called HVC01 and will use DHCP to assign a cluster VIP

New-Cluster -Name HVC01 -Node HV01, HV02 

If this fails you can run a validation check to see if any errors show up

Test-Cluster -Node  HV01, HV02 


Step 4 Create the SMB 3.0 Fileshares
In this build the management server will also be the SMB 3.0 file share host.  For a cluster we will create two shares: one for the Cluster Quorum Witness, the other for storing virtual machines.  The shares are identical, so you can use the same commands and just change the path.

The commands below create the folder, then share it with full access granted to Everyone, and lastly set the NTFS permissions to also allow full control to the node and cluster machine accounts.


$Folder = "D:\VMFOLDER"
$ShareName = "VMFOLDER"
$MachineAccounts = "domain\hv01$","domain\hv02$","domain\hvc01"



New-Item -Path $Folder -ItemType Directory
New-SmbShare -Name $ShareName -Path $Folder -FullAccess Everyone

# Grant each machine account full control on the folder's NTFS ACL
$MachineAccounts | ForEach-Object {
    $Acl = Get-Acl $Folder
    $Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($_, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
    $Acl.SetAccessRule($Ar)
    Set-Acl $Folder $Acl
}

At this point you should have a cluster that is up and running.  

Step 5 Configure the Quorum
In an SMB-only cluster you can make use of a file share to act as the quorum witness.

Set-ClusterQuorum -Cluster HVC01 -NodeAndFileShareMajority \\hv03\VMWitness
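
You can verify the witness configuration afterwards; a quick check against the cluster created earlier:

```powershell
# Should report a node and file share majority quorum with \\hv03\VMWitness as the witness
Get-ClusterQuorum -Cluster HVC01
```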


Step 6 Adding Virtual Machines
Since you now have a cluster, the process of adding or creating a virtual machine is a little different.  Instead of using the Hyper-V management console you will be using Failover Cluster Manager.

  • Connect to cluster
  • Select Roles
  • Right Click - Virtual machine - New Virtual Machine
  • Select any of the nodes
  • In the Specify Name and location Screen - specify the Fileshare created earlier
  • Complete the wizard as normal.
  • There will be a cluster validation notice that the object was created correctly



You can now start the virtual machine and fail it over between the hosts by selecting Move - Live Migration - Select Node
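
The same live migration can be triggered from PowerShell on the management server; a minimal sketch, assuming the VM's cluster role is named "VM01" (the name is an assumption):

```powershell
# Live-migrate the clustered VM role to the other node
Move-ClusterVirtualMachineRole -Cluster HVC01 -Name "VM01" -Node HV02 -MigrationType Live
```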



Conclusion
This is a very simple build and all of the steps are performed in PowerShell; as such this build doc will work for Windows Server with a GUI, Windows Server Core, and Hyper-V Server.

Check out Part II