12 December 2013

Resolution for Error (2927) in SCVMM 2012 R2 when creating a clone

Had this issue in an environment where everything was working perfectly, other than creating a clone.

Error (2927)
A Hardware Management error has occurred trying to contact server HYPERV03.domain.com  .

WinRM: URL: [http://hyperv03.woolworths.co.za:5985], Verb: [INVOKE], Method: [FilteredHardDriveScout], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/P2VServerJob]

Unknown error (0x80338043)

Recommended Action
Check that WinRM is installed and running on server HYPERV03.woolworths.co.za. For more information use the command "winrm helpmsg hresult" and http://support.microsoft.com/kb/2742275 .

The fix for this was to restart the System Center Virtual Machine Manager Agent on the Hyper-V host.

Ref: http://blogs.technet.com/b/scvmm/archive/2012/12/13/vmm-support-tip-adding-hosts-fails-with-error-2927.aspx

Use PowerShell to conditionally add VM's to a VMM cloud

I've had a few instances where I needed to add a whole bunch of existing VMs to an existing cloud in SCVMM.  I only wanted to add the machines where the VM name matched a certain naming convention.

The easiest way to do this was to pipe the details together in PowerShell.

Set the Variable to the Cloud's Name - in this case "Service Management"

$Cloud = Get-SCCloud -VMMServer scvmmc01.fixmyitsystem.com | where {$_.Name -eq "Service Management"}

I wanted to add all the machines that contained "CAA" in the machine name.

Check that the correct list of VMs will be added:

Get-SCVirtualMachine -VMMServer scvmmc01.fixmyitsystem.com | where {$_.Name -Match "CAA"} | select name

Once I was happy, the command pipe below added the machines to the cloud

Get-SCVirtualMachine -VMMServer scvmmc01.fixmyitsystem.com | where {$_.Name -Match "CAA"} | ForEach-Object {Set-SCVirtualMachine -VM $_ -Cloud $Cloud}

You can apply this method using any of the virtual machine properties and any of the conditional operators. As an example, you could add all machines where the owner is a particular person, or even move all machines from one cloud to another.

To get a full list of the available properties you can use

Get-SCVirtualMachine |gm

To compare or evaluate the property against your query you can use the conditional operators:

 -eq             Equal
 -ne             Not equal
 -ge             Greater than or equal
 -gt             Greater than
 -lt             Less than
 -le             Less than or equal
 -like           Wildcard comparison
 -notlike        Wildcard comparison
 -match          Regular expression comparison
 -notmatch       Regular expression comparison
 -replace        Replace operator
 -contains       Containment operator
 -notcontains    Containment operator
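
As a sketch of that idea, the pipeline below would add every VM owned by a particular user to the cloud; the owner value "DOMAIN\jsmith" is purely a hypothetical example:

Get-SCVirtualMachine -VMMServer scvmmc01.fixmyitsystem.com | where {$_.Owner -eq "DOMAIN\jsmith"} | ForEach-Object {Set-SCVirtualMachine -VM $_ -Cloud $Cloud}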

02 December 2013

Manage Windows Network Bandwidth with native QOS

By default a Windows process will use as much of the network as it can to complete a task as fast as possible. This is great when there is only one thing happening at a time.  If, however, you have a server that is handling different workloads, you might want to prevent one kind of traffic from hogging all of the bandwidth and starving the others.

As an example, we have a single server that hosts an HTTP site and a file share.  When files are not being copied from the file share, the web site is responsive and works well.  When someone copies content from the file share, the web site performance deteriorates.  Ideally we want to prevent this from happening.

Let's look at the transfer rate, and therefore the network usage, of the file copy.

On this network, the speed at which the file can be copied is around 40MB/s

If we want to slow down the file copy to consume less bandwidth we can make use of the native Windows QoS Packet Scheduler.  The Quality of Service scheduler allows us to prioritize and limit traffic to ensure that one process does not starve another.

QoS functionality is driven by policies, and these policies mostly live in Group Policy.  You can make QoS policy changes in Group Policy directly, or through PowerShell using the NetQosPolicy cmdlets.

Step 1 - Check for existing policies
First up, check whether there are any existing policies that could conflict with or override your intended policy:

Get-NetQosPolicy

Step 2 - Create a new policy
Next up we are going to create a new policy that applies to SMB traffic, and we are going to throttle or limit the throughput to 2MB/s.

New-NetQosPolicy -Name "FileCopy" -SMB -ThrottleRateActionBitsPerSecond 2MB

At this stage we can start a file copy and see that the network usage is much lower.  

Step 3 - Adjust an existing policy
One nice thing about QoS policies is that they apply almost immediately, without any need for restarting a process or rebooting.
You can now adjust the limit with:

Set-NetQosPolicy -Name "FileCopy" -ThrottleRateActionBitsPerSecond 10MB

I ran this with a few different values, and you can tell from the "steps" in the graph when they started and stopped.  You can increase or decrease the values; there is no restriction on it from that perspective.
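
To reproduce that experiment, a small hypothetical loop can walk the throttle rate through a few values, pausing between each change so the "steps" are visible in the transfer graph:

foreach ($rate in 2MB, 5MB, 10MB) {
    Set-NetQosPolicy -Name "FileCopy" -ThrottleRateActionBitsPerSecond $rate
    Start-Sleep -Seconds 30
}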

Step 4 - Removing rules (optional)
Ultimately, if you are done, you can simply remove the policy with:

Remove-NetQosPolicy -Name "FileCopy"

You can confirm that there are no policies left by running

Get-NetQosPolicy

The one limitation to be aware of when starting out is that QoS only applies to OUTBOUND traffic.  Any file copy to the server would still consume all of the available network bandwidth.

Before you start looking for other tools to help shape your network traffic, look at what is possible with Windows QOS.  This barely scratches the surface but it shows just how easy and effective it can be.  For more information check out http://technet.microsoft.com/en-us/library/hh967469.aspx

18 October 2013

Building Hyper-V Clusters Step by Step - Part VI - Virtual SAN with StarWind Native SAN for Hyper-V

In all the previous parts, the one thing that remained constant, regardless of how it was presented, was shared storage.  This means storage that was "outside" of the Hyper-V host.  StarWind Native SAN for Hyper-V allows you to create virtual shared storage using local storage on the hosts.

Design Goals
Two node Hyper-V cluster
Using no additional external shared storage
Using Cluster Shared Volume between 2 nodes

Step 1  - Configure Servers
Both servers are identical.  They have the Hyper-V role and Failover Clustering features installed.

Each server has three NICs (although this is possible with two):
NIC 1 is the normal network (/24)

These next two are essentially server-to-server connections:
NIC 2 is used for SAN heartbeat (/24)
NIC 3 is used for SAN replication (/24) (must be at least 1Gb/s)

Step 2 - Get and install StarWind Native SAN
You will have to get the bits from http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition - there is a free edition and a paid-for one that provides additional features such as three nodes and unlimited storage.

Installing is a straightforward, accept-all-the-defaults process.  It needs to be installed on both of the Hyper-V hosts that will participate in the virtual SAN.

Step 3 - Configure HYPERV01
Open the StarWind Management Console

  • Right Click the StarWind Servers node and select Add StarWind server
  • Specify HYPERV02 as the host and click OK
  • Once it has been added select it and click Connect
  • Both nodes should now be online

If not then you need to fix it.

Step 4 - Add device
Here we are going to create the virtual disk that will span both nodes.

  • Select HYPERV01 and click Add Device
  • Select High Availability Device
  • Specify the partner node Hyperv02
  • Specify the Alias
  • Select - Create HA device from the virtual disk
  • Specify the filename of where you want the virtual disk image to be
  • Check Create new
  • Specify the Size
  • Configure the same on the second node
  • Select the networks to be used, as defined earlier
  • Select Write-back caching
  • Select Clear Virtual disks
  • Finish the wizard

Once the wizard has finished, the disks will be created on both servers and the initial sync process will start.  This is a good time to carry on creating additional devices - you want at least two: one for the Quorum and one for the CSV.  While you are busy with this, you will notice that a new target is created for each device on each node.

Step 5 - Attach each host to the virtual storage
The process is the same on each server and at this stage it is pretty much the same old iSCSI process we have used in the previous builds.


  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify the target portal address
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • Select the iSCSI Starwind SCSI Disk Device
  • Bring Online and Create a volume
  • Repeat for all the disks


  • Start iSCSI initiator
  • Select the Discovery tab
  • Click Discover portal
  • Specify the target portal address
  • Select the Targets tab
  • Select each target and connect them
  • Select Server Manager - File and Storage Services - Volumes - Disks
  • You should see the iSCSI disk as offline and read only

At this stage I want to point out something important.  If you look at the iSCSI targets you will see that they are different.  In all the previous shared iSCSI examples they were the same.  You can see how each host looks at its own copy of the HA disk directly.

Step 6 - Create the fail-over cluster
Server manager - Tools - Failover Cluster Manager

  • Create Cluster
  • Specify HYPERV01 and HYPERV02
  • Run validation
  • Specify a Cluster Name
  • Make sure "Add all eligible storage to the cluster" is selected
  • Create the cluster

By default the smallest Shared disk is used as the quorum - you can create additional Devices and add them as additional storage.

From here on in it is business as usual.  Additional disks can be added to the cluster and you can select to assign them as Cluster Shared Volumes.

In the background, all cluster disk IO is now being replicated on both nodes.  This has obvious performance considerations; fortunately the StarWind Management Console provides some handy performance counters and graphs to help you monitor this.

Step 7 - Testing
Create a few virtual machines and test your normal failover from one host to the other.  This should happen as per normal, without any problem or issue.

Next up, you would want to simulate a sudden failure of one of the nodes with a very scientific pull-the-power-plug test.

The normal failover cluster process will step in and move all the resources to the sole active node.  Once the second node is restarted, the HA virtual disk will automatically start the sync process, getting them both up to the current state.  You can monitor this in the Management Console.  While the sync is in progress, the recovering node's iSCSI target will be in the "Reconnecting" state until the disks are in sync.

Failover clustering does provide additional functionality over the normal shared-nothing live migration and replication capabilities of two standalone Hyper-V hosts.  Storage virtualization like we have here allows you to get to a full-blown failover cluster.  As long as you understand all the implications of this concept, this is a great solution for a SAN-free environment.

02 October 2013

Building Hyper-V Clusters Step by Step - Part V - iSCSI Dell PowerVault MD3200i

In the previous articles in the series we stepped through all the different steps of creating the clusters from scratch.  This article will focus primarily on the steps required to configure the PowerVault and SCVMM.  Because of the large number of steps I have kept the article as brief as possible, so it is more of a rough guide.

Design Goals

  • Two node Hyper-V cluster
  • Using iSCSI as shared storage on a Dell PowerVault target
  • Using a single Cluster Shared Volume between 2 nodes
  • Using only single NIC per server
  • Using DHCP
  • Import the cluster into SCVMM

Step 1 - Configure Storage Disk
Open the PowerVault Modular Disk Storage Manager

  • Select Logical Tab
  • Right Click Unconfigured Capacity
  • Select Create Disk Group
  • Next
  • Give it a meaningful name (30 char limit)
  • Next
  • Select the RAID Level
  • Select a Capacity
  • Finish
  • You will be prompted to create a virtual disk using the group - Select Yes
  • On the Introduction Screen - Next
  • Specify disk capacity and a disk name
  • (Here I created a small disk for the Quorum and additional disks for VM data)
  • Finish
  • You will be prompted to create another virtual disk - Repeat for all the volumes you want to create
  • You will be reminded to create mappings
  • Click OK
Step 2 - Configure iSCSI target
Next up we need to configure the hosts and host groups.  For this we will need the iSCSI initiator names from the Hyper-V servers.

  • On each server
  • Open the iSCSI Initiator
  • Select the Configuration tab
  • Copy the value of the Initiator Name: field

  • In the PowerVault Modular Disk Storage Manager
  • Select the Mappings Tab
  • Right Click Storage Array
  • Define Host
  • Host Name can be your actual Hyper-V host name
  • Select "Add by creating a new host port identifier" 
  • Paste the iSCSI initiator name you copied in the previous section
  • The User Label cannot be the server host name again
  • Click Add
  • Click Next
  • Select Windows from the Host type list
  • Next
  • Select "Yes - this host will share access to the same virtual disks with other hosts"
  • Enter the name of the hosts group (cluster name)
  • Next Finish
  • Repeat the process for the second host

  • Right Click the Host group
  • Define > Storage Partitioning
  • Next
  • Select the correct host group
  • Select and add the relevant disks
  • Finish

Step 3 - Configure a host to connect to storage
The first host in the cluster will be used to prepare the volumes. From the server manager - 

  • iSCSI Initiator
  • Discovery
  • Discover Portal
  • Specify IP of the PowerVault
  • Advanced
  • Local Adapter select Microsoft iSCSI Initiator
  • Initiator IP select the correct one
  • OK
  • OK
  • Select Targets Tab
  • Select the Target - Connect
  • Select Volumes and Devices - Auto Configure
  • Ok

  • Server Manager
  • File And Storage Services
  • Select Disks
  • Bring online
  • Right Click  - new Volume
  • Complete the wizard 
Note: Formatting large volumes will take a loooooong time - around half an hour for 1TB.

Complete this for all of the volumes you want to add to the cluster.

Step 4 - Configure the second host to connect to storage
On the second host configure and connect the iSCSI initiator.

At this point both nodes should be able to see the disks and volumes but only online on one of the hosts.

Step 5 - Create the cluster

  • Start Fail-over cluster manager
  • Create cluster
  • Add nodes
  • Provide Cluster Name
  • Finish The Wizard

You should now have a cluster that is up and running.

Step 6 - Add the storage to SCVMM
Next up we need to import the cluster into VMM.  However to do this correctly we need to provision the storage in VMM.  For this we need a provider for the PowerVault.

You have to make use of the VCenter Plugin

Note: It only runs on 2008 R2  - not 2012

You can follow these steps: http://vinfrastructure.it/en/2012/05/manage-a-dell-powervault-array-with-system-center-vmm-2012/

  • In SCVMM
  • Select Fabric - Storage - Providers - Add Storage Devices
  • SAN and NAS devices discovered and managed by a SMI-S provider
  • Protocol SMI-S CIMXML
  • IP address is the server hosting the VCenter Plugin
  • Specify an admin account for that server

  • You should now see that the PowerVault is listed
  • Select the Storage Devices
  • Here you will see the Storage groups as defined on the PowerVault
  • Create a classification and add the relevant group

Select Fabric - Storage - Classification and pools
Expand the new Classification you created - you should now see the LUNs that are mapped to the storage group

Step 7 - Import the cluster
Next up import the Cluster as Hyper-V hosts

  • Select the properties of the host
  • Select available storage - here you should see the cluster disks
  • Select them and Convert to CSV

The setup is now complete.

Please note the following:
This is purely a lab configuration; actual real-world performance of this single-NIC configuration will potentially be unpredictable, with periods of lag.  When building this for a production environment, it is suggested that you use dedicated NICs for storage and MPIO to add redundant links to the storage.

30 September 2013

Building Hyper-V Clusters Step by Step - Part IV - iSCSI Cluster Shared Volume (CSV) SCVMM

In part III of this series we managed to build a block storage Hyper-V failover cluster manually.  When you are using SCVMM, it will automate a lot of the steps for you.  For this to work you need to provision your storage through VMM.  Because we are using Server 2012 R2, everything can be performed directly from SCVMM 2012 R2.

Design Goals
  • Two node Hyper-V cluster
  • Using iSCSI as shared storage
  • Using two Cluster Shared Volumes between 2 nodes
  • Using only single NIC per server
  • Using DHCP
  • Use SCVMM to configure everything
The same basic configuration as the previous labs

Cluster nodes HV01 and HV02
HV03 is the management server, iSCSI target and SCVMM Server

Build Process

Step 1 Add a storage provider

  • Fabric - Storage - Providers - Add Storage Devices
  • SAN and NAS devices discovered and managed by a SMI-S provider
  • Protocol: SMI-S WMI
  • FQDN of HV03 
  • Specify a Run As account
  • Discovery should succeed and you should see an item in the list
  • Create a classification (2012 R2 iSCSI)
  • Select two drives to use and choose your classification
  • Finish the Wizard

Step 2 - Create logical Units
This is where you create iSCSI drives for the nodes to use.

  • Fabric - Storage - Classification and Pools
  • Select the Classification created earlier
  • Click Create logical unit
  • Create one small logical unit (1GB - for the Quorum)
  • Create a bigger logical unit ( For the CSV)

At this stage it should look something like this

Step 3 - Create a host group
  • Fabric - Servers - All hosts - Create Host Group
  • View the properties of the host group
  • Select storage 
  • Allocate Storage Pools - Select and Add both
  • Allocate Logical Units - Select and add all three
  • Close properties
  • Add the Hyper-V hosts
  • Add the individual hosts as per normal
  • If you check the properties of the hosts look at storage
  • iSCSI Array should have the target specified 
Step 4 - Create the cluster
  • Fabric - Servers - All Hosts
  • Create - Hyper-V Cluster
  • Specify Name
  • Specify Run As account
  • Add the two nodes prepared earlier
At the storage screen you will notice that one of the disks is grayed out.  This will be used for the Quorum; it selects the smallest partition for this.

  • For the other partition check all three boxes (Quick Format - Force Format - CSV)
  • Finish the wizard

Check progress on the Job as it can run for a good few minutes

Step 5  - Check Storage on the cluster
  • Fabric - Servers - All Hosts - Your Cluster Group
  • Open the properties of the cluster
  • Select Shared volumes
  • You should see the CSV here, if not then check available storage, select the volume and click the Convert to CSV button

Step 6 - Create a HA VM
  • VMs and Services - Create Virtual Machine
  • Create New VM
  • Specify a name
  • Configure hardware - Make sure you select availability and check "Make this virtual machine highly available"
  • Specify the host group as a destination
  • Select either host
  • The virtual machine path would be  C:\ClusterStorage\Volume1\   or similar - this refers to the CSV volumes
  • Finish the wizard
Using SCVMM you can drive the entire deployment of Hyper-V iSCSI failover clustering, as long as your storage plays along nicely.  If you manually go and check the iSCSI initiators and the iSCSI targets on the individual servers, you will notice that it is pretty much the same as when we completed lab III.

20 September 2013

Building Hyper-V Clusters Step by Step - Part III - iSCSI Cluster Shared Volume (CSV)

In the first two parts of this series we covered building SMB 3.0, or file share, storage clusters.  We will now change things up a bit and switch to block storage - in this case iSCSI, with the target being Windows Server 2012 R2.

Design Goals
  • Two node Hyper-V cluster
  • Using iSCSI as shared storage
  • Using a single Cluster Shared Volume between 2 nodes
  • Using only single NIC per server
  • Using DHCP
  • No SCVMM

What we are building this time round is a failover cluster for the Hyper-V tier.  This means that one of the Hyper-V hosts can fail and things will carry on working.  It is important to note that this configuration still only provides fault tolerance for the compute tier, and not the storage tier.

The same basic configuration as the previous labs

Cluster nodes HV01 and HV02
iSCSI target and management server HV03

Build Process
As before, all installation tasks are performed from HV03 unless stated otherwise.  Again we will be using PowerShell where it makes sense and saves time and effort.

  1. Configure the iSCSI target server
  2. Configure nodes with roles
  3. Configure storage
  4. Create the cluster
  5. Configure CSV
  6. Create Virtual Switches
  7. Create fail over VMs

Step 1 - Configure the iSCSI target server
Install the required services
The iSCSI initiator is installed by default, but the target is not, so we will have to install it manually:

Install-WindowsFeature FS-iSCSITarget-Server

Enable the firewall rules
Installing the iSCSI target creates the firewall rules but, strangely, the firewall is not configured for you.  This enables the required rules:

Enable-NetFirewallRule -DisplayGroup "iSCSI Service", "ISCSI Target Group"

Configure iSCSI volume
iSCSI volumes in Windows Server are in actual fact virtual drives, in this case VHDX files.  As such, you can create iSCSI volumes just about anywhere; all you have to do is specify a folder where you want to create them.  The easiest place to configure this is from Server Manager.

  • Open Server manager
  • Select File and Storage Services
  • Select iSCSI

Creating a quorum disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as Quorum
  • Specify a disk size - since this is purely for quorum it can be tiny - 1GB
  • Leave the defaults for Fixed Size
  • Next
  • Select New iSCSI Target
  • Name hyperv-cluster
  • Next
  • Add the initiators
  • Click the Add button
  • Query the initiator computer by browsing HV01's AD object
  • Repeat for HV02
  • Now that you have two iSCSI  initiators listed click next
  • Leave off authentication
  • Finish the wizard

Create the VMStorage disk

  • Tasks - New iSCSI virtual disk
  • Select the Server and choose a drive
  • Provide a meaningful name such as VMStorage
  • Specify the size
  • Select the iSCSI target we created earlier (hyperv-cluster)
  • Finish the wizard
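
For reference, the two wizard runs above can also be scripted with the iSCSI Target Server cmdlets.  This is a rough sketch; the paths, sizes, and initiator host names below are assumptions for this lab:

New-IscsiServerTarget -TargetName "hyperv-cluster" -InitiatorIds "DNSNAME:HV01.domain.com","DNSNAME:HV02.domain.com"
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\Quorum.vhdx" -SizeBytes 1GB
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-cluster" -Path "D:\iSCSIVirtualDisks\Quorum.vhdx"
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\VMStorage.vhdx" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-cluster" -Path "D:\iSCSIVirtualDisks\VMStorage.vhdx"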

Step 2 - Prepare the Cluster node roles
Install the required roles and features

Invoke-Command -computername HV01, HV02 {Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering, Failover-Clustering, hyper-v -Restart}

Once the servers have rebooted you can continue the build

Step 3 - Connecting the Nodes to the storage
The nodes use the iSCSI initiator to connect to the storage. This is installed by default and only needs to be configured.

Node 1
Node one will connect and do the initial disk prep work too.

enter-pssession hv01

Automatically start the iSCSI initiator service

Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

Connect to the target and register it as a favorite so it will persist after reboots:

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession

Configuring the disks
At this point your block storage is connected, but it is still raw: you need to initialize the disk, create a partition, and format the drive.  This should be done for each disk.

Initialize-Disk -Number 1 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter Q -UseMaximumSize | Format-Volume
Initialize-Disk -Number 2 -PartitionStyle GPT -PassThru | New-Partition -DriveLetter V -UseMaximumSize | Format-Volume

Close off the remote PowerShell session

Exit-PSSession

Node 2
The second cluster node now also needs to be connected to the same storage.

enter-pssession hv02

New-IscsiTargetPortal -TargetPortalAddress hv03
Get-IscsiTarget | Connect-IscsiTarget
Get-IscsiSession | Register-IscsiSession

You should now see the iSCSI disks; they will be offline for this node.
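
A quick way to confirm this from the same session is to list the iSCSI-attached disks and check their status:

Get-Disk | Where-Object { $_.BusType -eq "iSCSI" }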

Close off the remote PowerShell session

Exit-PSSession

Step 4 - Create the Cluster
By now everything has been prepped and is ready.  If the sequence is right, the cluster will be created and will automatically grab the Q drive for the quorum and attach the second disk to the cluster.

New-Cluster -Name HVC01 -Node HV01, HV02

Since we are building Hyper-V, we can make use of Cluster Shared Volumes.  To enable this we need to configure the cluster to mark our VM storage drive as a CSV:

Add-ClusterSharedVolume -Cluster HVC01 -Name "Cluster Disk 2"

Step 5 - Configure Virtual Switches
The nodes need to have identical configurations, including the virtual switches.  This build is very simple from a network perspective.  Each node has a single network adapter.  We will need to create the virtual switch and allow access to the management OS

Start a PSSession to each node and follow these steps

Enter-PSSession HV01

Identify the adapter name you want to use (in this case Ethernet 3)

New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true 

Step 6 - Creating Virtual machines
Since the virtual machines are now cluster resources, they need to be created from within the Failover Cluster Manager.

  • Select Roles
  • Virtual machines - New Virtual machine
  • Select a node
  • Specify the server name
  • Store machine in a different location will be C:\ClusterStorage\volume1\
  • Complete the wizard as per normal

Building a failover cluster with iSCSI storage is only marginally harder than using SMB 3.0.  The only significant difference is the storage; once you get the hang of that, there is no real difference here - especially with this very simple deployment.

18 September 2013

Building Hyper-V clusters Step by Step - Part II - The most basic added to SCVMM

In part one of this series we created a very basic Hyper-V cluster using SMB 3.0 as shared storage.  In this part we will cover the steps required to add this cluster to SCVMM.

See PART I for more background info

Design Goals
  • Two node Hyper-V cluster
  • Using only SMB 3.0 as shared storage
  • Using only single NIC per server
  • Using DHCP
  • Managed from SCVMM
Build Process
All of the steps can be performed directly from the System Center Virtual Machine Manager Console.

Step 1 Adding Storage
Here we will add the fileshare server as available storage for VMM to use
  • Select Fabric
  • Under Storage select Providers - Right Click - Add Storage Devices
  • Add Windows-based file server as managed storage device
  • Specify the FQDN (HV03) and a Run As account that is an administrator on the file share host
  • Select relevant shares (From Part I we created a Witness and Storage share - add both)
  • Finish the wizard

Step 2 Adding the host nodes
Here the Hyper-V host and the configured cluster is added
  • Select Fabric
  • Create and select a new Folder under All Hosts
  • Right Click - Add Hyper-V hosts and Clusters
  • Windows Server computers in a trusted AD Domain
  • Specify a Run As account that is an administrator on the Hyper-V host
  • Specify a name of one of the nodes (HV01) - Next
  • It will discover the cluster with the nodes (HVC01, HV01 and HV02) - Select and click next
  • Finish the wizard

Step 3 Adding Storage to the Cluster
Before the storage can be used by the cluster it needs to be provisioned for it
  • Right Click the Cluster - Properties
  • Select File Share Storage - Add
  • Select \\HV03\StorageShare
  • OK
  • Specify Run As account
  • Finish The Wizard

Step 4 Verify the cluster
Before continuing you should check that everything is working up to this point
  • Select the cluster
  • Right Click - Validate Cluster  - This takes a few minutes
  • At this point the cluster should Succeed with a few warnings

Step 5 Create a Highly Available virtual machine
The process for creating a virtual machine is the same; one key exception is that when you are specifying the hardware you can now scroll down to the Advanced section and select "Make this virtual machine highly available."
When you get to the Configure Settings screen you should see that the cluster file share is specified as the storage location.

If you want to allow SCVMM to apply dynamic optimization and have high availability, you need clusters.  As you can see from this article, it can be very simple and fairly straightforward to create and implement a basic SMB Hyper-V cluster.

Building Hyper-V clusters Step by Step - Part I - The most basic

This series of articles will go through various ways of building Hyper-V clusters.  This can mean a lot of things, but the general idea is to have two or more Hyper-V host servers that can host highly available virtual machines.

Design Goals

  • Two node Hyper-V cluster
  • Using only SMB 3.0 as shared storage
  • Using only single NIC per server
  • Using DHCP
  • No SCVMM

Cluster nodes: HV01 and HV02
Management and Fileshare Server HV03

All machines need to be joined to the same domain.

Build process
All build steps can and should be performed from the management server. Personally I like to run everything from the PowerShell ISE

All the PowerShell commands are formatted in courier

Step 1 Install required roles on the servers

The nodes need:

  • Hyper-V
  • Failover Clustering
  • Hyper-V Management Tools
  • Failover Clustering Management Tools

Invoke-Command -computername HV01, HV02 {Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering, Failover-Clustering, hyper-v -Restart}

The role installation will reboot the server and may take a few minutes.

The Management Server needs

  • Hyper-V Management Tools
  • Failover Clustering Management Tools

Install-WindowsFeature RSAT-Hyper-V-Tools, RSAT-Clustering

Step 2 Create virtual switches
The nodes need to have identical configurations, including the virtual switches.  This build is very simple from a network perspective.  Each node has a single network adapter.  We will need to create the virtual switch and allow access to the management OS

Start a PSSession to each node and follow these steps

Enter-PSSession HV01

Identify the adapter name you want to use (in this case Ethernet 3)

New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true 
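
Alternatively, assuming the adapter is named "Ethernet 3" on both nodes, the same switch can be created on both in one pass from the management server:

Invoke-Command -ComputerName HV01, HV02 {New-VMSwitch -Name "External" -NetAdapterName "Ethernet 3" -AllowManagementOS:$true}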

Step 3 Create the cluster
Since we need the cluster Active Directory object to provision the SMB shares, we need to perform this step first.

This will create a cluster called HVC01 and will use DHCP to assign a cluster VIP

New-Cluster -Name HVC01 -Node HV01, HV02 

If this fails you can run a validation check to see if any errors show up

Test-Cluster -Node  HV01, HV02 

Step 4 Create the SMB 3.0 Fileshares
In this build the Management Server will also be the SMB 3.0 file share host.  For a cluster we will create two shares.  The one will be for the Cluster Quorum Witness, the other for storing virtual machines. The shares are identical and as such you can use the same commands, just change the path.

The commands below create the folder, then share the folder granting full access to Everyone, and lastly set the NTFS permissions to also allow full control for the nodes and the cluster machine accounts.

$Folder = "D:\VMFOLDER"
$ShareName = "VMFOLDER"
$MachineAccounts = "domain\hv01$","domain\hv02$","domain\hvc01"

New-Item -Path $folder -ItemType directory
New-SmbShare -Name $ShareName -Path $Folder -FullAccess Everyone

$MachineAccounts | ForEach-Object {
$Acl = Get-Acl $Folder
$Ar = New-Object System.Security.AccessControl.FileSystemAccessRule($_,"FullControl","ContainerInherit,ObjectInherit","None","Allow")
$Acl.AddAccessRule($Ar)
Set-Acl $Folder $Acl
}
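
To confirm the share and NTFS permissions landed correctly, you can inspect both. A quick check, using the share name and folder from above:

```powershell
# Share-level permissions - should show Everyone with Full control
Get-SmbShareAccess -Name "VMFOLDER"

# NTFS permissions - should include the node and cluster machine accounts
(Get-Acl "D:\VMFOLDER").Access | Format-Table IdentityReference, FileSystemRights -AutoSize
```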

At this point you should have a cluster that is up and running.  

Step 5 Configure the Quorum
In an SMB-only cluster you can make use of a file share to act as the quorum witness.

Set-ClusterQuorum -Cluster HVC01 -NodeAndFileShareMajority \\hv03\VMWitness 
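
You can confirm the witness took effect with a quick query (a sketch, using the cluster name from this build):

```powershell
# Should report NodeAndFileShareMajority and the witness resource
Get-ClusterQuorum -Cluster HVC01 | Format-List QuorumType, QuorumResource
```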

Step 6 Adding Virtual Machines
Since you now have a cluster, the process of adding or creating a virtual machine is a little different.  Instead of using the Hyper-V management console you will be using Failover Cluster Manager.

  • Connect to cluster
  • Select Roles
  • Right Click - Virtual machine - New Virtual Machine
  • Select any of the nodes
  • In the Specify Name and location Screen - specify the Fileshare created earlier
  • Complete the wizard as normal.
  • There will be a cluster validation notice that the object was created correctly

You can now start the virtual machine and fail it over between the hosts by selecting Move - Live Migration - Select Node
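
The same live migration can also be driven from PowerShell rather than the console. A sketch, assuming a hypothetical VM named "VM01" on the HVC01 cluster:

```powershell
# Live migrate the clustered VM role from its current node to HV02.
# "VM01" is a hypothetical VM name - substitute your own.
Move-ClusterVirtualMachineRole -Cluster HVC01 -Name "VM01" -Node HV02 -MigrationType Live
```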

This is a very simple build and all of the build steps are performed in PowerShell; as such this build doc will work for Windows Server with a GUI, Windows Server Core and Hyper-V Server.

Check out Part II 

27 August 2013

SCVMM 2012 SP1 library issues and fixes

System Center Virtual Machine Manager 2012 SP1 includes support for Windows Server 2012 and Hyper-V 3.0.  Logically you would think that VMM 2012 is now fully supported on, and supportive of, the Windows Server 2012 technology stack, but you would be wrong.

Error 2905
When adding an additional VMM Library server hosted on Windows Server 2012 you may run into the following issue:

Error (2905)
The file name, folder name, or volume label syntax \\<ServerName>\CommonLibrary\ApplicationFrameworks\SAV_x64_en-US_4.9.37.2003.cr\SCVMMCRTag.cr is incorrect on the <ServerName> server.
The filename, directory name, or volume label syntax is incorrect (0x8007007B)

The alternate error popup has the following wording.

The file name, folder name, or volume label syntax \\<ServerName>\CommonLibrary\ISOs\en_windows_7_ultimate_x64_dvd_x15-65922.iso is incorrect on the <ServerName> server.
Ensure that the path name does not contain the characters (\ / : * ? " < > | ), and then try the operation again. ID: 2905 Details: The filename, directory name, or volume label syntax is incorrect (0x8007007B)

If you look at the Change Tracking section of the task that terminated with the error you will notice that some of the information is actually being pulled through.

If you look at the actual file structure you would also notice that the default resources are actually successfully created.

It took a while to figure this one out, but it turns out that the VMM Library must be on NTFS.  If you attempt to use ReFS it simply will not work and you will get the errors above.
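
Before creating or moving a library share it is worth confirming the underlying volume is NTFS. On Server 2012 a quick check is:

```powershell
# FileSystem must read NTFS, not ReFS, for a VMM library share
Get-Volume | Select-Object DriveLetter, FileSystemLabel, FileSystem | Format-Table -AutoSize
```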

Error / Information 10804
When moving your default VMM library to another server you start seeing warnings during library refresh tasks.

Information (10804)
Unable to import \\<servername>\VMM-LIB\VHD's\Blank Disk - Large.vhdx because of a syntax error in the file.
The file structure does contain the vhdx files listed in the alert but they are not visible when checking the library.

This error occurs when VHDX files are located in a library share that is hosted on an OS older than Windows Server 2012.  Since the library is populated from resources that the OS indexes, the OS has to be able to interpret the new VHDX format, which only arrived with Server 2012.

Error - Virtualization platform does not support shared ISO Images
Despite having set up ISO sharing correctly, this happens when you deploy a new virtual machine: at creation you attempt to add an existing ISO to the hardware configuration and select "Share image file instead of copying it."  Copying the file works fine and does not generate any errors.

The two error messages that can manifest are:

When deploying to a host:

Virtualization platform on host <servername> does not support shared DVD ISO images.

When deploying to a cloud:

The virtual machine workload <VM Name> cannot be deployed to the hardware supporting cloud <Cloud Name>  Contact the cloud administrator and provide Task ID.................

Shared images cannot be attached during deployment.  To use a shared image you need to attach it to the VM after the initial deployment.
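
Attaching the shared image after deployment can also be scripted with the VMM cmdlets. A sketch, where the VM name and ISO filter are hypothetical:

```powershell
# Attach an existing library ISO to an already-deployed VM as a shared image.
# "CAAVM01" and the ISO name filter are hypothetical - substitute your own.
$VM  = Get-SCVirtualMachine -Name "CAAVM01"
$ISO = Get-SCISO | Where-Object {$_.Name -match "windows_7_ultimate"}
$DVD = Get-SCVirtualDVDDrive -VM $VM
Set-SCVirtualDVDDrive -VirtualDVDDrive $DVD -ISO $ISO -Link   # -Link shares instead of copying
```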

23 August 2013

Using WireShark on Windows Server Core or Hyper-V Server - Step-by-Step

Packet capture and analysis in real time can be invaluable for troubleshooting certain issues.  If, however, you are using an operating system flavor without a GUI, you might find yourself a little stuck.  The steps below will assist you in setting up your core machine and another with a GUI to enable you to remotely capture and analyse the data.

Stuff you will need

  • WireShark
  • Nmap
  • 7zip

On your GUI (management) computer you will need to install WireShark.  This can be downloaded and installed from http://www.wireshark.org/download.html

On the source machine you will need to install WinPcap to allow you to capture the actual traffic.  There is just one small catch: the version of WinPcap that is included with WireShark cannot be installed silently, and as such on a core machine you are stuck.  Because of this I suggest you grab the WinPcap installer from Nmap, which can be installed silently.  Download the full package from http://nmap.org/download.html

Use 7zip http://www.7-zip.org/download.html to open the nmap-x.xx-setup.exe archive and simply copy the WinPcap executable winpcap-nmap-x.xx.exe

Installation steps
I will refer to the Windows core machine as the core machine and the full GUI machine as the management machine.
All of these steps will be performed from the management machine.  All actions that happen on the core machine can be done through a remote PowerShell session.

Steps to be done on the core machine

  • Copy the winpcap-nmap executable to the core machine's c:\install
  • Open a PowerShell remote session to the core machine: Enter-PSSession CoreMachine
  • Silently install with winpcap-nmap-4.02.exe /S

Next up you will have to create a firewall exception for this to be reachable from the management machine.
Since the initial connection is made over a named port but the actual capture data is sent using the dynamic range you have to add an all port rule.

  • netsh advfirewall firewall add rule name="Remote WinPcap" dir=in action=allow protocol=TCP localport=any remoteip=<IP of your management machine>

To turn this rule on or off use these two commands

  • netsh advfirewall firewall set rule name="Remote WinPcap" new enable=yes
  • netsh advfirewall firewall set rule name="Remote WinPcap" new enable=no

Next up you need to start the WinPcap process so that we can connect to it and receive packet data

  • Navigate to C:\Program Files\WinPcap
  • To start the packet capture service use .\rpcapd.exe -p 2002 -n

Steps to be done on the management machine
Install WireShark as per normal and launch the application

  • Select Capture Options
  • Click Manage Interfaces
  • Select the Local Interfaces tab and check the Hide box next to all of them
  • Select the Remote Interfaces tab
  • Click add button
  • For the host specify the hostname or IP Address
  • The port default is 2002 (set with the -p switch earlier)
  • Null authentication as set with the -n switch earlier
  • OK
  • You should now see a number of interfaces added
  • Click Close

On the capture options main window you will see the remote interfaces listed; they are the ones showing up as rpcap://hostname:2002/

  • Capture only the interface tied to the IP you want to trace
  • Uncheck Promiscuous mode (helps to clean things up)

There will be a buffer size warning but it can be ignored, and hey presto, you are capturing packets from a remote non-GUI machine.  The process from here on is the same as using WireShark with a local traffic capture.

Close the door and turn off the lights
Once you have completed all of your packet capture stuff you need to close things up properly again.  This is especially important in this case considering what we have just enabled.

To stop rpcapd.exe from running you can use:

  • get-process rpcapd | Stop-Process

To uninstall WinPcap you can use

  • C:\Program Files\WinPcap>uninstall.exe /S

Close off the firewall by turning off the rule

  • netsh advfirewall firewall set rule name="Remote WinPcap" new enable=no

With just a little bit of effort you can remotely capture network packet data.  If done correctly this is a great tool to use for troubleshooting.  I have used this not only on Windows Server Core but also on Hyper-V Server, where you don't even ever have the option of adding a GUI.  As long as you clean up when you are done it does not pose any significant security risks.

If you like this article you may also like this one.

16 August 2013

DHCP - PowerShell basics and netsh equivalency

Windows Server 2012 brings a huge improvement for DHCP.  One of the best things is that all DHCP functionality can now be driven from PowerShell.  Netsh is still supported but has been deprecated.

If you are migrating from Windows Server 2008R2 to Windows Server 2012 you will most probably use a combination of these as you are moving along.

DHCP Export

Export the whole DHCP server's configuration, scopes and leases.

Export-DhcpServer -File c:\DHCP\PS-Full-Export.xml -Leases
Netsh dhcp server v4 export c:\DHCP\NetSH-Full-Export.txt all

Export a single scope

Export-DhcpServer -File c:\DHCP\PS-SingleScope-Export.xml -Leases -ScopeId <ScopeId>
Netsh dhcp server v4 export c:\DHCP\NetSH-SingleScope-Export.txt <ScopeId>

Export multiple named scopes

Export-DhcpServer -File c:\DHCP\PS-MultipleScopes-Export.xml -Leases -ScopeId <ScopeId1>,<ScopeId2>,<ScopeId3>
Netsh dhcp server v4 export c:\DHCP\NetSH-MultipleScopes-Export.txt <ScopeId1> <ScopeId2> <ScopeId3>

DHCP Import
One thing to keep in mind is that if you export a DHCP server or scope with netsh you have to import it with netsh, since PowerShell and netsh produce two different file types.  PowerShell also performs a mandatory backup before you can import anything new.

To import the whole DHCP server's configuration, scopes and leases:

Import-DhcpServer -File c:\DHCP\PS-Full-Export.xml -BackupPath C:\DHCP\ -Leases
Netsh dhcp server v4 import c:\DHCP\NetSH-Full-Export.txt all

PowerShell DHCP cmdlets also allow you to selectively restore just the server config and/or the leases.

Import-DhcpServer -File c:\DHCP\PS-Full-Export.xml -BackupPath C:\DHCP\ -ServerConfigOnly
Import-DhcpServer -File c:\DHCP\PS-Full-Export.xml -BackupPath C:\DHCP\ -ScopeId <ScopeId1>,<ScopeId2> -Leases

Setting Options
Options can be set when the scopes are created but to change settings on existing scopes you can use the following:

This sets the lease time (option 51, in seconds)

Set-DhcpServerv4OptionValue -ScopeId <ScopeId> -OptionId 51 -Value 3600
Netsh dhcp server scope <ScopeId> set optionvalue 51 DWORD 3600
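
Option 51 takes the lease time in seconds; with the 2012 cmdlets you can also set the scope's lease duration directly. A sketch, where the scope ID is a placeholder:

```powershell
# Equivalent scope-level setting - a lease of one hour.
# 10.0.0.0 is a placeholder scope ID - substitute your own.
Set-DhcpServerv4Scope -ScopeId 10.0.0.0 -LeaseDuration (New-TimeSpan -Hours 1)
```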

You can also use named Options in PowerShell

Set-DhcpServerv4OptionValue -DnsServer <DnsServerIP> -WinsServer <WinsServerIP> -DnsDomain domain.com -Router <RouterIP> -Wpad http://proxy.domain.com/wpad.dat

When setting non-named Options you have to specify the value in the correct format. Here is an example of where the option data is in hex.

Set-DhcpServerv4OptionValue -ScopeId <ScopeId> -OptionId 43 -Value 0x3A,0x02,0x01,0x2D,0xFF

Batching netsh commands with PowerShell
This is useful if you are working with a Windows Server 2008 R2 machine.  One way to import and export or make changes to a large number of scopes with netsh is to generate a text file with one scope ID per line.

A simple way to feed each line into netsh is with the following PowerShell command sequence:

Get-content C:\dhcp\list.txt | foreach-object { Netsh dhcp server v4 export c:\dhcp\$_ $_}

Get-content e:\dhcp\list.txt | foreach-object { Netsh dhcp server \\serverip scope $_  set optionvalue 51 DWORD 3600}
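
The scope list itself doesn't need to be typed by hand; on a 2012 machine you can generate it from PowerShell. A sketch, writing to the same c:\dhcp path used above:

```powershell
# Dump every scope ID, one per line, into the list file netsh will consume
Get-DhcpServerv4Scope |
    ForEach-Object { $_.ScopeId.IPAddressToString } |
    Out-File c:\dhcp\list.txt -Encoding ascii
```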

Keeping failover pairs in sync
Once you have migrated onto failover pairs it is important to remember that leases and reservations are synced automatically, but scope changes and options are not.  You can use the following to sync the options and scopes back up:

Invoke-DhcpServerv4FailoverReplication -ComputerName DHCPserverName

Other handy commands
There are now stacks of different ways to get visibility of your DHCP environment.  The examples below should give a good indication of what is now very easy to do.

To get a nicely formatted list of the scopes in the shell use the following:

Get-DhcpServerv4Scope | select scopeid, name | Format-Table -AutoSize

You can also filter by scope; this looks for any scope name that contains the word WiFi:

Get-DhcpServerv4Scope | WHERE {$_.name -match "WiFI"} 

To then also get the usage statistics use

Get-DhcpServerv4Scope | WHERE {$_.name -match "Centre"} |Get-DhcpServerv4ScopeStatistics

If you would like to keep the scope name and description matched you can simply run the following:

Get-DhcpServerv4Scope | ForEach-Object {Set-DhcpServerv4Scope -ScopeId $_.ScopeId -Description $_.Name}

01 August 2013

Activate Windows in PowerShell

Use the following script to activate your Windows machines through PowerShell.  PSRemote onto the machine and just replace all the XXXXXs with your valid key

$computer = hostname
$service = Get-WmiObject -Query "select * from SoftwareLicensingService" -ComputerName $computer
$service.InstallProductKey("XXXXX-XXXXX-XXXXX-XXXXX-XXXXX")
$service.RefreshLicenseStatus()

04 July 2013

Determine Sophos Endpoint version in a CID

Sophos endpoint devices are installed and updated via CIDs.  The CIDs are created as and when additional subscriptions are added to the SUM (Software Update Manager).  As a result you can easily have a confusing directory structure like this:

The truly annoying bit is that there are no obvious indicators as to which version of the Endpoint Client is located in which folder.

There is however a .xml file that does contain this info.


The particular field you are interested in is <ProductVersion>
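
Rather than opening each file by hand, you can sweep every CID folder for the <ProductVersion> value with PowerShell. A sketch, where the CID root path is hypothetical:

```powershell
# \\sophos-sum\SophosUpdate\CIDs is a hypothetical path - use your own CID root.
Get-ChildItem "\\sophos-sum\SophosUpdate\CIDs" -Recurse -Filter *.xml |
    Select-String -Pattern "<ProductVersion>(.+?)</ProductVersion>" |
    Select-Object Path, @{n="Version";e={$_.Matches[0].Groups[1].Value}}
```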

01 July 2013

F5 Diaries - Applying a hotfix to your BIG-IP Step By Step

Hotfixes are available for download from F5; these come with a description of all the fixes included in the hotfix roll-up.  The hotfixes are cumulative, so there is no need to incrementally apply the fixes.

Once you have downloaded the hotfix .iso, follow this procedure: first on your passive node, then switch the active load to the newly updated node, and then update the now-passive node.

Step 1 - Back it up

  • Log Onto The BIG-IP as Admin
  • Create an archive for safe keeping
  • System - Archives +
  • Specify a file name
  • Click Finish
  • Click OK on the progress screen
  • Click on Archives again - you should now see the list of archives.
  • Click on the archive just created then click Download
  • (It is a good idea to keep a copy somewhere other than the F5 especially during an update)
  • Reboot the F5 and check for no errors during startup.

Step 2 - Upload the hot-fix image

  • System - Software management - Hot-Fix List - Import
  • Choose the downloaded hot-fix iso and wait for upload to complete

Step 3 - Install the image

  • System - Software management - Hotfix List
  • Check the box next to the latest hotfix image - Click Install

(NOTE: If you do not currently have the base image on the system you would have to upload it first before you can proceed.  It will be installed as part of the hotfix deployment)

  • Name the new volume
  • Click Install

  • Wait patiently for the install to finish all the way
  • The install status has to be "complete"

Step 4 - Change the Boot Partition

  • System - Software Management - Boot Location
  • Click Activate on the hot-fix installation
  • Confirm that you do want to update
  • Wait for the reboot

Step 5 - Verify

  • Log in and check that you are running the expected hot-fix version
  • System - Configuration - Version

That is all there is to it.  If you have an HA pair this is where you would make this the active node and update the other.

27 June 2013

F5 Renaming and editing existing objects such as nodes, pools and virtual servers

Some objects allow you to change certain parameters, but as a general rule, once an object has been created it is fixed and cannot be altered or renamed in the GUI.  Sometimes this is just a nuisance, but sometimes it can set off a shuffle of creating and deleting temporary objects just so you can reuse a name.

As an example I am going to step through changing the IP address of an existing node.  The following can all be done directly from the shell.

  • SSH onto BigIP as root

Next up you have to find the correct bigip.conf file.  Depending on whether you have management partitions or not they will be in either:

/config  or

Start off by making a backup of the file

  • cp bigip.conf bigip.conf.bak

The easiest way to edit the file in the shell is with nano.  Open the file for editing with the following command:

  • nano bigip.conf

The file is fairly easy to interpret and understand.  Take some time to get used to the structure before you make any changes.

Find and replace the IP address or the node name you want to change.  I would suggest this over manually scrolling through the file, just in case you miss an entry.

  • Ctrl W will bring up the search bar
  • Ctrl R will allow you to specify the text you want to replace (the current IP address of the node); press enter and it will allow you to specify the text to replace it with (the new IP address of the node)
  • It will now prompt you for each replacement unless you select All at the first prompt
  • Once the changes are all made you can exit nano with Ctrl X, this will also prompt you to save the file
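
If you prefer a non-interactive find-and-replace over nano, the same edit can be scripted with sed from the shell. A sketch against a scratch copy, with hypothetical old and new addresses:

```shell
# Demonstrated on a scratch copy; on the BIG-IP you would run the sed line
# against bigip.conf after taking a backup. The addresses 10.1.1.10 and
# 10.1.1.20 are hypothetical - substitute the node's old and new IPs.
printf 'ltm node old_node {\n    address 10.1.1.10\n}\n' > bigip.conf.demo
cp bigip.conf.demo bigip.conf.demo.bak      # always keep a backup first
sed -i 's/10\.1\.1\.10/10.1.1.20/g' bigip.conf.demo
grep '10.1.1.20' bigip.conf.demo            # confirm the replacement took
```

You would still verify and load the config with tmsh afterwards, exactly as described below.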
You can verify that your config file is correct with the following command:

  • tmsh load sys config verify partitions all
If no errors are reported you can apply the new config with the following command:

  • tmsh load sys config partitions all
If you refresh the GUI you will now see that the change is reflected.

That's all there is to it!


A more user-friendly, if slightly more long-winded, way of doing this is to use WinSCP.  Connect to the BIG-IP and browse to the relevant partition.  Right-click the file, copy it, and then edit the original.

You would still need to execute the following command from the shell once changes are made
  • tmsh load sys config verify partitions all
  • tmsh load sys config partitions all
Hopefully this comes in handy some time and saves you some effort.

26 June 2013

How to Configure Windows Server 2012 MPIO iSCSI storage with Dell PowerVault MD3200i

The Dell PowerVault MD3200i is an iSCSI storage device containing 8 x 1Gb Ethernet interfaces across two RAID controllers.  The aim is to have a configuration that provides increased performance and also fault tolerance.

Design Goals:

  • Best possible performance at disk layer
  • Best protection from enclosure layer
  • Active /Active redundant iSCSI paths

Configure the PowerVault
To make sure everything works on Server 2012 make sure you have at least the updated software:
Dell PowerVault MD Series Storage Array Resource DVD version 

I struggled a bit to get it downloaded, and needed to use "Free Download Manager" to finally get it down.
Completing the full install on the server will install the management tool but, importantly, also the updated MPIO drivers for Server 2012.

PowerVault iSCSI Targets
The PowerVault is configured with an iSCSI data NIC connected directly to a NIC on the Windows server, one per RAID controller.  The layout is as follows:

Windows Server   --->   PowerVault
DATA NIC0   --->   RAID 0 Port 0   --->

DATA NIC1   --->   RAID 1 Port 0   --->

Logical disks
I created two logical disk groups, each containing 3 drives per enclosure.  Each logical disk is manually configured to prefer an alternate RAID controller.

Configuring the Windows Server 2012

Add MPIO Support
Add MPIO by following the Add Roles and Features wizard

After the install, launch MPIO from the Server Manager Tools menu

Select the Discover Multi-Paths tab
Check Add support for iSCSI devices
OK and reboot the server

When you open MPIO again you should see that the MPIO Devices list includes the PowerVault as well as the Microsoft iSCSI Bus
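
The same MPIO setup can be done without the GUI, which is handy on Core installs. A sketch using the built-in MPIO cmdlets:

```powershell
# Install the MPIO feature and claim iSCSI-attached devices automatically
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI   # equivalent of "Add support for iSCSI devices"
Restart-Computer                            # the reboot is still required
```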

iSCSI initiator
The iSCSI initiator is the tool that will be used to identify the server to the PowerVault.  It is also the tool used to define the additional IO path(s).

Step 1 
Launch the iSCSI initiator
In the target specify the PowerVault RAID0 Port0 IP address and click Quick Connect

At this stage you should get a connection to the PowerVault.
The connection will have notified the PV of the server's iSCSI initiator name.  You can now complete the storage mapping.

Step 2
Check that all the required LUNS are visible to the server.

Step 3
Define additional IO path

  • Open the iSCSI initiator
  • Select the discovered name
  • Disconnect
  • Connect
  • Check Enable multipath
  • Click advanced
  • Specify the adapter as Microsoft iSCSI initiator
  • Initiator IP is Server data NIC0
  • Target Portal is  PowerVault RAID0 Port0 / 3260
  • The status should now be connected
  • Click Properties
  • There should be a single identifier
  • Click add session
  • Check Enable multipath
  • Click advanced
  • Specify the adapter as Microsoft iSCSI initiator
  • Initiator IP is Server data NIC1
  • Target Portal is  PowerVault RAID1 Port0 / 3260
  • There should now be two identifiers listed, each one representing a separate physical path

On the iSCSI initiator properties select the active target and click Devices.  It should list all the devices, one line per path, so if there are two LUNs you will see each LUN listed twice.

Select a LUN then click MPIO.
Here you can see the load balance policy as well as the link status.  One will be active and the other standby.  Clicking on the details will reveal the IP pair the path relates to.

Step 4 
Initialize the disks, bring them online and create volumes.
If everything is working properly, you are all set.  (I got to this point with the older Dell drivers but could not get past here because of errors bringing the disks online.)

Step 5 
Start transferring data, then randomly disconnect the network to the storage and watch as performance degrades but the transfer keeps going.