31 March 2011

TMG : Allow SSL request on an additional port

"12204 The specified Secure Sockets Layer (SSL) port is not allowed. ISA Server is not configured to allow SSL requests from this port. Most Web browsers use port 443 for SSL requests."

This is the log error you will see on TMG when attempting to connect to a site on a non-standard SSL port.  In my example the port is 10443, but it could be any port other than 443.

This is because by default TMG will only allow HTTPS connections on 443.  This is known as the Tunnel Port Ranges or TPRanges.  To add your non-standard port number or a range, you will need to run some scripts.  You only need to run this on one of the array members since this is an array setting.

The Add TP Range Script
Create a text file and copy the following into it.  Save the file as AddTPPort.vbs

Dim root
Dim tpRanges
Dim newRange
Set root = CreateObject("FPC.Root")
Set tpRanges = root.GetContainingArray.ArrayPolicy.WebProxy.TunnelPortRanges
Set newRange = tpRanges.AddRange("SSL 10443", 10443, 10443)
' Save is required to persist the new range to the configuration storage
tpRanges.Save

NOTE:  AddRange("The name", the start port number, the end port number)

From a command prompt run the script with "cscript AddTPPort.vbs".  There is no feedback from this script to let you know it succeeded.  You will now have to restart the firewall service on each of the TMG nodes in that array.

To verify that the port has been added you can attempt to connect to a remote site on that port number.  You can also run a script to show the current TPRanges.

The List TP Ranges Script
Create and execute this script in the same way as the one above.

Dim root
Dim isaArray
Dim tpRanges
Dim tpRange
Set root = CreateObject("FPC.Root")
Set isaArray = root.GetContainingArray()
Set tpRanges = isaArray.ArrayPolicy.WebProxy.TunnelPortRanges
For Each tpRange In tpRanges
    WScript.Echo tpRange.Name & ": " & tpRange.TunnelLowPort & "-" & tpRange.TunnelHighPort
Next

The result from this script should now be:

C:\Users\aa\Desktop>cscript listportss.vbs
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

NNTP: 563-563
SSL: 443-443
SSL 10443: 10443-10443

The ports specified in the AddTPPort.vbs script should now also show up.  These additional ports can of course also be deleted.

The following article on TechNet has more info and some longer script versions, including a delete script.

30 March 2011

SCVMM Warning (3107) Error (3107) checkpoint and snapshot problems

Checkpoints are great until they go wrong.  That is why I recommend treating snapshots as a "short term only" feature.  A snapshot is in no way a point-in-time full backup.

In the SCVMM console you will see one of the VMs showing Warning (3107):  "The format of the file - is not compatible with the VHD format. (Internal error code: 0x80990C23)"

The machine will actually function correctly; you will just have an issue when attempting to manage checkpoints.  Your Warning will then turn into an Error.

To resolve this error you need to connect to the Hyper-V management console.  Find the offending machine and check out the snapshots.  Sometimes you will have a pending merge; a merge can only complete when the machine is turned off.

Check http://fixmyitsystem.com/2010/09/disk-management-options-missing-from.html for more info and other similar issues.

Turn the machine off and watch the status column.  This should allow any pending Merge actions to complete.  If you really get stuck, create an additional snapshot, then remove it again; this should also force a merge.

Once done your machine will now correctly update in the SCVMM console.  The warning and error should go away after refreshing the status.  You can now manage checkpoints from the SCVMM console again.

29 March 2011

SCVMM displays a status or error inconsistent with Hyper-V Manager

SCVMM is great for being able to manage all your VMs from various hosts.  One thing I find very frustrating though is that sometimes it gets stuck on an error state for a VM that is simply not true.  Another issue I have had is that it reports that checkpoint creation has failed or that a checkpoint is corrupted.  Checking it out in Hyper-V Manager, however, you can see that everything is just fine.

This is because SCVMM works from a database.  This is not always refreshed to show the updated status of the VMs that it manages.

In http://fixmyitsystem.com/2011/02/scvmm-delete-error-402-entry-form.html we covered how to connect to the database and where to look for the offending entries.

I have found that removing all the entries for a VM from the dbo.tbl_WLC_VObject table in the VirtualManagerDB database will force SCVMM to update its database from Hyper-V Manager and therefore reflect the correct virtual machine state.


  • Check that the VM is indeed in a working state from the Hyper-V manager console.
  • Stop the Virtual Machine Manager service on the SCVMM server

Finding the entries

  • From a connected SQL Server Management Studio
  • Expand the VirtualManagerDB
  • Expand Tables
  • Right click dbo.tbl_WLC_VObject and click Select Top 1000 Rows

This will open the Query window and the result.  To make it easier to find the entries you are looking for, add ORDER BY [Name] to the end of the last line and execute the query again.

In the results pane you will see that some machines have more than one entry; these are typically checkpoints.  Keep a note of how many entries you have for that VM name.
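For reference, the lookup and cleanup described here can also be expressed directly as T-SQL (a sketch only; 'MyStaleVM' is a placeholder name, and the Virtual Machine Manager service must be stopped first):

```sql
USE VirtualManagerDB;

-- Find the entries (one row per object; checkpoints appear as extra rows)
SELECT * FROM dbo.tbl_WLC_VObject ORDER BY [Name];

-- Remove every row for the stale VM
DELETE FROM dbo.tbl_WLC_VObject WHERE [Name] = 'MyStaleVM';
```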

Removing the Entries

  • Right click dbo.tbl_WLC_VObject and click Edit Top 200 Rows
  • This will open a table that you can actually edit.
  • Scroll through until you find an entry that matches the VM name.
  • Select the line
  • Right click and delete.
  • Confirm the permanent delete when prompted.
  • Repeat this for the number of rows you found in the query.

If there are more than 200 rows you can edit the command to list more rows.

  • In the Management console select Tools - Options.
  • Expand SQL Server Object Explorer
  • Select Commands
  • Change the Value for Edit top <n> rows command to your desired number. 0 will return all rows.
This will now allow you to edit more rows.

Finishing up
Start the Virtual Machine Manager service on SCVMM server
Connect to the SCVMM console and check that the VM is repopulated; it should now show the correct updated status.

28 March 2011

TMG logs and reports view of system and enterprise rule usage

If you want to see whether a specific rule is being hit by a request, you simply select the rule from the Logs and Reports edit filter screen and you can see only traffic hitting that rule.  This is very handy for troubleshooting, and also for rule maintenance as described in http://fixmyitsystem.com/2011/03/tmg-rule-maintenance-optimisastion-and.html

The problem is that in the Logs and Reports view the list of rules is filtered.  So you cannot see rules that belong to the following policies:

  • System
  • Enterprise

You can however still manually specify these and see the result for those rules.

When specifying the "rule equals" filter condition, you can manually type in the name of the rule.

To get the name of the rule you will have to show them in the Firewall Policy screen.  From the task pane click on Show System Policy Rules, and if your array is part of an enterprise you can also click on Show Enterprise Policy Rules.

This will now show all the rules.  Double click a rule and copy the rule name (some are long - really long), and when specifying it in the filter remember to prefix it with [System] or [Enterprise].

It would have been nice if the log filter could detect if you were showing the additional rules and list them for selection too.  But alas it does not.

There is another way of analysing the usage of these rules, more for reporting purposes: use Webspy to analyse your logs.  You will need to import the Firewall logs and not just the proxy logs.  Some of these rules will show up in the proxy logs, but not all of them, since they are used by the firewall component.  Now when you do an analysis you can list and filter based on the rule names.  This list includes System and Enterprise rules by default.

This way you can finally track and see where the traffic is going when it is affected or picked up by an Enterprise or System rule that you might not expect.

25 March 2011

TMG rule maintenance optimisation and organising tips

In a previous article, http://fixmyitsystem.com/2010/11/tmg-rule-organising-enhancements.html, I covered how to organise your rules so that they are easier to manage.  What becomes inevitable is that some of your rules become stale or unused.  Some of these might even be created in troubleshooting or testing situations and then be forgotten about.

Naming convention
Everyone has an opinion as to how this should be done.  I am going to tell you how I do mine and maybe there is something useful in it.

If we look at the firewall policy screen there is a nice rich environment that shows you at a glance what is configured.  The columns are:

  • Order - containing indicative icon
  • Name - free text field
  • Action - containing Allow or Deny icon
  • Protocols -  populated by all the applicable protocols with a generic icon and text description
  • From / Listener - Contains the existing name of the network object
  • To - Contains the existing name of the network object
  • Condition - Might as well be called users
  • Description - the most often neglected field
  • Policy - This could also have been called the policy scope and is either System, Array or Enterprise.

The only fields you can actually change or have direct control over are the Name and Description.

Name
Using the name field you can put a lot of information into it.  This can come in very useful when doing log analysis.
Remember that all the nice GUI icons from the firewall policy screen are not available anywhere in the logs.  So it may seem redundant to call a rule "Allow xxx" where there is already an icon indicating this, but it is not.

As an example, if you are using Logs and Reports and want to create a filter based on a rule, you will only see the rule name.  By following a function-based naming convention you can quickly figure out what the rules are and do.

I use the following function names:

  • Allow
  • Deny
  • Deny Redirect
  • Publish
  • Publish NLB
  • Allow VPN
  • Deny VPN

To indicate that a rule is temporary for troubleshooting or testing I prefix the rule name with "!".  This allows you to stick to the naming convention and easily distinguish the temporary ones.  And when this becomes a permanent rule it is a quick change.
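As a rough illustration of the convention, a small check like this can flag rule names that do not follow it (a sketch only; the prefixes are the function names listed above):

```python
# Function-based prefixes from the naming convention above.
FUNCTION_PREFIXES = ("Allow", "Deny", "Deny Redirect", "Publish",
                     "Publish NLB", "Allow VPN", "Deny VPN")

def check_rule_name(name):
    """Return True if a rule name follows the convention:
    an optional "!" temporary marker, then one of the function prefixes."""
    base = name[1:].lstrip() if name.startswith("!") else name
    return base.startswith(FUNCTION_PREFIXES)

def is_temporary(name):
    """Temporary troubleshooting/testing rules are prefixed with "!"."""
    return name.startswith("!")
```

Running this over an exported rule list makes the stale "!" rules and the oddly named ones jump out.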

From / Listener
When creating a listener I normally stick to assigning a single IP per listener, for various reasons.  I therefore like to name my listeners in the following manner:

IP - Application name  eg. - Exchange 2010 listener

In my case I find this very useful because it brings up the affected IP address in the firewall policy screen.  Another advantage is that if I am looking for a free IP or need to add another NLB, I can quickly see what is and what is not being used.

To
This field is not as customisable as I would like, but it can still be very useful.  Normally this would contain the network object name, such as:

  • Networks
  • Computers
  • URL Sets
  • Server Farms

When creating these objects, using a sensible name can make a big difference.  As an example, if you are creating a web server farm I like to use the following format:

NLB - Application name  eg. NLB - Sharepoint Head Office

Condition / Users
Naming user groups with enough information can make them very simple to administer.  I have been around the block a few times on this one and have finally settled on matching the corresponding Active Directory group.

Description
Here you can go into loads of detail as to what the purpose and point of the rule is.  If your naming convention is done correctly you should not need to put much information in here.  TMG by default has change tracking, but another very useful thing you can do with the description field is to keep relevant change or request numbers with dates.  Since rules can live for years it is a nice addition to the change tracking system.

I find that using the naming convention makes maintaining the rule set much easier.  I can easily spot or search for temporary rules.  I can then use these in a Logs and Reports filter to see if a rule was used within the last 7 or 30 days.

For more in-depth rule maintenance you might want to use a 3rd party log analyser.  I use Webspy for various reasons.  But even here you can see how good organising of names and objects makes refining information much easier.

If you have your own take on this or maybe have some additional tips or recommendations, please put them in a comment.

23 March 2011

Converting a TMG WPAD file to an Apple Mac compatible PAC file

TMG automatically creates a WPAD file that contains all the settings you would want to specify as an administrator.
For more info on how this works and what it does check out the following:


A WPAD file is a valid .pac file, so no conversion is required.

Apple OS X machines however do not work 100% with a WPAD file that uses a multi-node NLB array.  A typical problem is constantly being prompted for credentials, first by one node and then the other.  As the client moves between the nodes it runs into problems.  As I am NOT an avid Mac user I can't go into too many details.  (If you are a Mac user please add a comment explaining the symptoms better.)

To resolve this issue there are a few lines we need to change in the WPAD file.

  • Download the wpad.dat file
  • Edit the file
  • Test with pacparser
  • Distribute to MACs

Download the wpad.dat file
To download the wpad file browse to  http://yourproxy/wpad.dat  or depending on the config http://yourproxy:8080/wpad.dat.

Edit the File
What I did was to only use the NLB IP instead of the two individual host IPs.  My wpad file contains 224 lines, but we only need to edit the one section.

DirectNames=new MakeNames();
function MakeProxies(){
this[0]=new Node("x.x.x.x",1409863761,1.000000);
this[1]=new Node("y.y.y.y",3630121203,1.000000);

This needs to be changed to:

DirectNames=new MakeNames();
function MakeProxies(){
this[0]=new Node("z.z.z.z",1409863761,1.000000);

cDirectNames=  -- this indicates the number of nodes, so change it to 1 since there is only one NLB IP
this[0]=new Node  -- specify the NLB IP

I tried to find out what the number behind the comma is, but no luck; with a bit of trial and error I found that leaving the numbers untouched works.
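The manual edit above can also be scripted.  This is a rough sketch (the regular expression assumes the exact Node(...) layout shown above, and the NLB IP is whatever your array's virtual IP is); remember to also change the node count line as described above:

```python
import re

# Matches lines like: this[0]=new Node("x.x.x.x",1409863761,1.000000);
NODE_RE = re.compile(r'this\[\d+\]=new Node\("([\d.]+)",(\d+),([\d.]+)\);')

def collapse_to_nlb(wpad_text, nlb_ip):
    """Replace the per-node proxy entries with a single NLB entry,
    keeping the numeric fields of the first node unchanged."""
    out, first = [], None
    for line in wpad_text.splitlines():
        m = NODE_RE.search(line)
        if m is None:
            out.append(line)          # leave everything else untouched
        elif first is None:
            first = m                 # first node: swap in the NLB IP
            out.append('this[0]=new Node("%s",%s,%s);'
                       % (nlb_ip, m.group(2), m.group(3)))
        # remaining per-node lines are dropped
    return "\n".join(out)
```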

Test with pacparser
You can use a utility called pacparser to test both your wpad.dat and wpad.pac files.  Download it from

This command line utility will validate and show the result of your testing.  DIRECT indicates the failover configuration, and for the internal network it skips using a proxy (this is per the TMG config).

For the PAC file using the NLB
pactester.exe -p wpad.pac -u http://www.google.co.za
PROXY z.z.z.z:8080; DIRECT

For the PAC file using the NLB local network
pactester.exe -p wpad.pac -u http://intranet

For the wpad.dat file using two node IPs
pactester.exe -p wpad.dat -u http://www.google.com
PROXY x.x.x.x:8080; PROXY y.y.y.y:8080; DIRECT

For the wpad.dat file using two node IPs  local network
pactester.exe -p wpad.dat -u http://intranet

Distribute to MACs
Manually - Save the file as a .pac file and the Mac users can now specify this under Settings / Network / Proxies.  (This seems to cause issues with OS X Lion.)

If you publish it on a web server you will have to specify the URL as http://sitename/wpad.pac

I have not tried to get autodetect to work, but in theory you should be able to specify it in DHCP. (In my case the DHCP scope is shared with Windows Machines)

 ***  UPDATE for Lion ***
There have been some changes in Lion.  The wpad file still needs to be edited as listed above.  A stock standard array wpad file will simply not work.  (A single node TMG server will be fine without needing to edit it.)

Additionally it seems that Safari's behavior for CARP is now different.  To prevent being prompted endlessly for credentials CARP needs to be turned off.

To find out more about Cache Array Routing Protocol check out http://msdn.microsoft.com/en-us/library/ff823958(v=vs.85).aspx

To configure the Mac for using this script you need to go to Settings - Network - Advanced - Proxies.
Only configure the field "Automatic Proxy Configuration"

This will prevent it from picking up the network default wpad file (the one you want on PCs).  Furthermore, if it is not able to retrieve the script it will revert to the manual configuration (web proxy & secure web proxy), which if left blank or unchecked means going direct.

17 March 2011

TMG - Error Code: 502 Proxy Error. The data is invalid. (13) for some sites

I had the hardest time trying to get one particular site to work from behind a TMG 2010 proxy.  The users would get:

Error Code: 502 Proxy Error. The data is invalid. (13)
IP Address: 84.233.1
Date: 17/03/2011 02:34:51 PM [GMT]
Source: web filter

This correlated with the following log event on the TMG server.

Failed Connection Attempt
Log type: Web Proxy (Forward)
Status: 13 The data is invalid.
Source: Internal ( Destination: External (84.233147:80)
Request: GET http://lc.com/PORTAL/TOPLEVEL.php
Filter information: Req ID: 0fdbab91; Compression: client=No, server=Yes, compress rate=0% decompress rate=0% 
Protocol: http 
User: anonymous

The big problem I had is that this site "worked just fine" from a direct Internet connection.

I used HttpWatch to trace the connection to the site.  While going through the STREAM view I noticed the following:
  • The page was hosted on a Red Hat Apache server.  (Just because it is at the top of the header)
  • Content Encoding was GZIP
The GZIP thing got me because only the one single page was being compressed.

When scrolling further down the stream data I saw something strange - after the gzip data segment there was "clear text" again.  I had not seen this before.

This did not look right; maybe that is what TMG was on about.  I checked whether the server would send me an uncompressed version of the page with some TamperData request header manipulation, and thankfully it did.
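You can reproduce this check programmatically: decompress the gzip body and see whether anything is left over after the stream ends.  This is a Python sketch of the diagnosis, not the tool used at the time:

```python
import zlib

def gzip_trailing_bytes(body):
    """Decompress a gzip HTTP body and return any bytes left over
    after the gzip stream ends - the "clear text" seen in the trace."""
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect gzip framing
    d.decompress(body)
    return d.unused_data  # b"" for a clean, compliant body
```

A non-empty return value is exactly the malformed trailing data that made TMG reject the response.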

Then I moved onto the TMG to exclude the site from requesting compressed data.

  • Create a computer or computer set object.
  • Open the HTTP compression setting
  • Add the computer object to the exceptions

That finally fixed the problem loading that page.

GZIP implemented correctly is a fantastic thing - done badly it can cause issues.  That is why I prefer to do all my compression on a reverse proxy TMG.

Just goes to show that TMG was correct in dropping non-standard or non-compliant data.  Browsers are far more forgiving, which is good for compatibility but bad for standards enforcement.  So if you are getting this or a similar issue requesting data from a site, look at the site closely and see if you can spot something.

16 March 2011

Exchange 2010 OWA opens in Web app light even though not selected

Ran into this little gem during testing.

When logging into the Exchange 2010 OWA the default is to load the "standard" version.  This is the nice one with the enhanced functionality, right clicks etc.

When logging into the Exchange 2010 OWA you have the option to select "Use Outlook Web App Light".  This is the lighter more basic version.

This is also the version that it will default to if the browser is not supported for the standard 2010 OWA.  

Interesting to note here is that Internet Explorer 6 is/was supported for the standard Exchange 2007 OWA; it is however not supported for Exchange 2010 OWA.

Internet Explorer 7 tested fine though

All the more reason to check out It's time to let IE6 go

How to export and convert a Windows .PFX certificate to a Unix / Linux compatible .cer or .pem file

When publishing a Unix web environment behind TMG you will most likely have to export a certificate from the one platform to the other.  The problem is that although both sides use certificates, they come in different formats.

To get the files you need you will have to:
  • Export
  • Convert
  • Extract
For this you will need OpenSSL; you can download the Windows version from http://gnuwin32.sourceforge.net/packages/openssl.htm

This will install the command line utility that will allow you to do the conversion.

For this I am going to assume that the certificate has been requested and installed on a Windows server.

The Export
From the Certificates MMC console you can now choose to export your certificate.

You will need to export the private key to be able to use the certificate server side.

Choosing to export all the certificates in the path makes it much simpler to import chains of certificates.

You will also be prompted for a password.
This will now give you an exported pfx file that contains the private key.  You can tell by the icon that has a key on it.

A Windows machine can import the PFX package but for the Unix platform you need to "break it up" into individual files.

The Conversion

After installing the OpenSSL utility you can open a command prompt and execute the following command to convert the .pfx to a .cer:

openssl pkcs12 -in certificate.pfx -out certificate.cer -nodes

You will be prompted for the export password, and a successful export ends with a "MAC verified OK" message.

The resultant file is a plain text file that needs to be broken up to form the individual certificate and key files.

The Extraction

Open the file in a text editor

You will see that the file is segmented into different parts starting with a line:

BAG Attributes

and ending with

-----END RSA PRIVATE KEY----- or -----END CERTIFICATE-----

The number of certificates in the exported chain will determine how many certificate sections are in the file.

Copy the content of each segment and save it in a separate text file.

The file containing the Private Key section needs to be saved as a .key file, while the files containing the Certificate sections can be saved as .cer files.

Alternative extensions for the certificate files are .pem and .crt.

These .key and .cer files should now be compatible with your Unix system.
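The manual copy-and-paste can also be scripted: every PEM section sits between matching BEGIN/END markers, so a small script can split the combined file.  This is a sketch only; the output file names are placeholders:

```python
import re

# A PEM block: BEGIN marker, body, and the matching END marker.
PEM_BLOCK = re.compile(r"-----BEGIN ([A-Z ]+)-----.*?-----END \1-----", re.S)

def split_pem(text):
    """Return (kind, block) tuples for each PEM section in the combined
    openssl output, e.g. ("RSA PRIVATE KEY", "-----BEGIN RSA...")."""
    return [(m.group(1), m.group(0)) for m in PEM_BLOCK.finditer(text)]

def write_parts(text):
    """Write key sections to .key files and certificates to .cer files."""
    for i, (kind, block) in enumerate(split_pem(text)):
        ext = "key" if "PRIVATE KEY" in kind else "cer"
        with open("part%d.%s" % (i, ext), "w") as f:
            f.write(block + "\n")
```

The "Bag Attributes" lines between sections are simply left behind, which matches the manual procedure above.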

08 March 2011

It's time to let IE6 go

After 10 years of loyal - or not so loyal - service it is time to let IE6 go and have it decommissioned.

MS has created a special site for developers to track the declining usage of IE6 so that they can stop writing compatibility code for it.


As an IT professional, do your part and assist MS in its goal of killing off probably the longest serving browser of all time.

First released in 2001 and bundled with Windows XP and Server 2003, it was superseded by IE7 when it launched in 2006.  Five years later IE6 is still hanging on, no doubt due to the fact that it natively ships with XP, which is still very much alive and well.

IE 9 is currently in RC so if you are still using IE6 - let it go....

02 March 2011

Changing or adding a TMG standalone array into an enterprise (EMS) array

Why you would want to do this
When you only have to manage one TMG array there is no need for an Enterprise Management Server, because all settings on that array apply only to that array.  When you have to manage multiple arrays this changes quite a bit, especially if there are multiple similar arrays, say in a large multi-site deployment.  This would be difficult to manage in a dispersed manner, and since there would be many common configuration items (such as computer groups) it makes sense to be able to centrally configure and reuse those items.  The same goes for certain rules.

But what if you started off with one or more standalone arrays and you now want to move over to an EMS or enterprise array?

There is no direct way of doing this.  A server can either join an array managed by an EMS or be a standalone server.  You cannot natively "merge" a standalone array into an enterprise EMS directly.  I will step through the process required to achieve this, namely:

  • Prepare your EMS
  • Export your existing standalone array configuration
  • Create new blank array
  • Import your standalone configuration into your new array
  • Disjoin standalone array members
  • Join member to the EMS array
  • Join former array manager server to the EMS

Preparing your EMS
Since this is a general procedure and not specific to this process, I have a separate article on it: http://fixmyitsystem.com/2011/03/installing-tmg-enterprise-management.html

Export your standalone array configuration
All the settings of your array can be exported in one big XML file.

Log onto the TMG management console

  • From the array top level node, Right click and Export (Backup)

This will start the Export Wizard

  • Since we want to export the Entire Array configuration check the following
  • Export confidential information
  • Export user permission settings
  • Follow the wizard and specify the export file name
  • Wait for the export to complete - if you have any errors, you have a problem.  Do not proceed without fixing it.

  • Copy this exported XML file to the new EMS server
  • It is a good idea to keep this xml file since it is a current complete config backup.
  • Although not required for this procedure, I would also suggest exporting all the certificates on the servers.  You can do a bulk export - just multi-select all of the certs you want to export from the Certificates MMC console and export them in one go.
Create a new blank array
You need a container to import the existing array information to.  For this you need to create a new array in the EMS

  • From the management console expand the Arrays node
  • Right click and select new array
  • Specify the same name as the exported array
  • Specify the DNS name from the exported array
  • Select to apply the default policy
  • Make sure that the "Deny", "Allow" and "Publish" check boxes are checked
  • Finish
  • Do not Apply the changes yet

Import your standalone configuration into your new array
We now need to make sure that the new array has an identical configuration to the standalone array; if it is not 100% identical you will have issues.

  • Right Click the new blank array name and select import
  • Ignore the warning stating that there are outstanding changes
  • The Import Wizard will start , click Next
  • Specify the xml file you exported and copied earlier
  • Choose to overwrite
  • Check "Import server Specific information" (This includes information such as installed certificates etc.)
  • Check "Import user permission settings information"
  • Specify the password from the export
  • Confirm that you will indeed be overwriting the current configuration
  • Once the import completes you can apply the changes

You will now be prompted about affecting the following services, Select "Save the changes and restart the services"

After completion of this step you will now have an array, but all the comms will be broken.  You cannot contact any of the servers in the array from the management console.  DO NOT PANIC!  Your array is working away, unaware of the new EMS array.

Disjoin standalone array members
The reason your servers are not reachable is because they are still talking to their standalone configuration store.  To get them to use your new EMS you need to disjoin them from the standalone array and join them to the new EMS array.

NOTE: While any server is not part of an array it has no TMG configuration.  It is however still bound in the NLB.  As a result it will not handle traffic correctly until it is an array member again.  This will most likely cause some disruption.  Plan for it, and do this when it is acceptable to have a 15 minute intermittent break in service.

  • Log back onto the standalone array.
  • Start with the array member that is not the array manager.
  • Select the array level node - in the action pane there will be an option to  Disjoin Server from array
  • Follow the wizard
  • Next
  • Finish
  • WAIT, WAIT, WAIT and then WAIT some more

Join member to the EMS array
Once your server has been disjoined you can join it to your new array.  Your server will find its old configuration waiting for it in the new array, so everything should work perfectly.

  • Starting with the array member that is not the array manager.
  • Select the array level node - in the action pane there will be an option to join array
  • From the Wizard select  "Join an array managed by an EMS server"
  • Specify the EMS server's FQDN
  • It will check connectivity.
  • From the Join EMS array screen use the drop down box to select the array you created earlier
  • Next, Finish
  • Wait for join to complete
By this stage you should be able to see connectivity from the EMS console to the member you just added.

Join former array manager server to the EMS
Once all the array members (other than the array manager) have been moved over, you can join the array manager to the EMS array using the same method as the member server process above, with the exception that you do not need to disjoin it first.

Once this step is completed you should now be able to manage the array successfully from the EMS console. All servers in the array should show up as synced to the EMS configuration store.

Disclaimer: I have not been able to find any documentation on doing this, so this is my winging it procedure.  I have submitted this article to the MS TMG blog for verification.  The process worked flawlessly for me though.  Check the comments below for more info.

Installing TMG Enterprise Management Console

The TMG Enterprise Management Server (EMS) allows you to manage several arrays centrally.  It also allows you to apply enterprise-wide objects to multiple arrays, as opposed to having to do the configuration for every array.  This differs from simply having the management console installed on a non-array-member machine in that it is not only used for managing arrays but actually stores the array configurations (Configuration Storage Server).

Preparation Wizard
After the splash screen starts up, select the Preparation Wizard.  This will easily step you through the installation.  Just take note of the following screen.  Note that you CANNOT select to have a TMG firewall machine configured as an EMS server.  This is different from ISA 2006, where you could do this.

After the preparation finishes you can launch the installation wizard; again follow the wizard but take note of the following screens.

This is a general recommendation, you can choose to ignore it and have multiple TMG enterprise arrays.  But you have been warned.

Since TMG supports not being part of a domain for security reasons, you can choose to have a workgroup deployment.  The other reason for choosing this is if you have a deployment that spans multiple domains without a trust relationship.

Post Installation Tasks
Once the installation is done you will notice that the Console tree now contains additional nodes for 
  • Enterprise
  • Arrays

Once the installation is complete remember to install the TMG updates.

At the time of writing this the available updates were:

To successfully install the service pack and update you need to turn off User Account Control (UAC) - yes, really!  You also need to install them in sequence - SP1, then the Update for SP1.