Tuesday, 25 June 2013

Migrating public folders from Exchange 2007 / 2010 to another Exchange 2010 server

If you need to migrate to a new Exchange 2010 server, either from Exchange 2007 or from an existing Exchange 2010 installation (for example SBS 2011), you will need to move all the public folders to the new server.

In this example there are two servers:

new-server and old-server

The first task is to see what public folders are located on the old-server. To do that, open the Exchange Management Shell as Administrator and enter:

Get-PublicFolderStatistics | Sort-Object -Descending ItemCount |ft -AutoSize

This gives us an idea of how many public folders there are and their item counts. If you run the same command on the new-server it should return only one result, which is the NON_IPM_SUBTREE.

On the new-server, open the Exchange Management Shell as Administrator and enter:

cd $exscripts


.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\" -ServerToAdd new-server
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\NON_IPM_SUBTREE" -ServerToAdd new-server

This tells the old-server to add the new-server to its list of servers to replicate public folders to, for both system folders and user public folders.

Still on the new-server, run these commands to check that the new-server now appears in the replicas list for each folder:

Get-PublicFolder \ -Recurse | ft name,parentpath,replicas
Get-PublicFolder \NON_IPM_Subtree -Recurse | ft name,parentpath,replicas

Still on the new-server, we can now tell the old-server to move all of its replicas to the new-server via this PowerShell script:

.\MoveAllReplicas.ps1 -Server old-server -NewServer new-server

On the new-server we should be able to see the public folder item count increasing. This can take some time to complete.

Get-PublicFolderStatistics | Sort-Object -Descending ItemCount |ft -AutoSize
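
Once the item counts stop climbing, you can also point the same cmdlet at the old server to confirm the content has drained away (an extra check, not part of the original steps):

Get-PublicFolderStatistics -Server old-server | ft -AutoSize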

Because this environment was originally migrated from Exchange 2003 to 2010, the old 'Servers' container was still present in ADSIEDIT.MSC and needed removing to prevent the replication backfill from failing.
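
The leftover container normally sits under the old Exchange 2003 administrative group in the configuration partition, at a path roughly like the one below (the organisation and administrative group names will vary):

CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=<Organisation Name>,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=local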

Hyper-V Live Migration fails with "Live migration did not succeed at the source"

I built a Hyper-V 3.0 cluster for a customer moving away from SBS 2011. The cluster was created successfully but live migration was failing with these errors:

"Live migration did not succeed at the source"

"Failed to authenticate the remote node: The specified target is unknown or unreachable (0x80090303)"

The cluster validated without a problem. Everything I found online pointed to a mismatch between the Hyper-V virtual network and vSwitch names on the two nodes, but I knew this wasn't the issue as I had used the same PowerShell script to create the network team and vSwitch on each node.
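
A minimal sketch of that kind of per-node script (the NIC, team and vSwitch names here are placeholders; the point is that the team and vSwitch names must be identical on every node):

# Run identically on each cluster node; replace NIC1/NIC2 with your physical adapter names.
New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
# The vSwitch name must match exactly across nodes.
New-VMSwitch -Name "VM-Switch" -NetAdapterName "VM-Team"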

When I looked in the Event Viewer at the Hyper-V-VMMS log I could see that the SPNs could not be registered on the SBS 2011 domain controller.



When I used SETSPN -L Hyper-V-node I could see that the Hyper-V SPNs were missing. I checked the permissions on each node's computer object to make sure that 'SELF' had the "Validated write to service principal name" permission, but that was already set correctly.

I saw on forums that others with an SBS 2011 domain had the same issue. In the end, I worked around it by manually adding the required Hyper-V SPNs in AD.

Below are the correct SPNs for Hyper-V 3.0:

Microsoft Virtual Console Service/Hyper-V-node
Microsoft Virtual Console Service/Hyper-V-node.domain.local
Microsoft Virtual System Migration Service/Hyper-V-node
Microsoft Virtual System Migration Service/Hyper-V-node.domain.local
Hyper-V Replica Service/Hyper-V-node
Hyper-V Replica Service/Hyper-V-node.domain.local
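
If you'd rather script it than click through DSA.MSC, the same SPNs can be added with setspn from an elevated prompt; a hedged example for one node (substitute your own node and domain names, and repeat for the second node):

setspn -S "Microsoft Virtual Console Service/Hyper-V-node" Hyper-V-node
setspn -S "Microsoft Virtual Console Service/Hyper-V-node.domain.local" Hyper-V-node
setspn -S "Microsoft Virtual System Migration Service/Hyper-V-node" Hyper-V-node
setspn -S "Microsoft Virtual System Migration Service/Hyper-V-node.domain.local" Hyper-V-node
setspn -S "Hyper-V Replica Service/Hyper-V-node" Hyper-V-node
setspn -S "Hyper-V Replica Service/Hyper-V-node.domain.local" Hyper-V-node
setspn -L Hyper-V-node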

After manually adding the SPNs to each node's computer object in DSA.MSC, Live Migration started working :) I did see that the Event ID 14050 errors were still appearing :( Later in the project a new Windows 2012 DC was added and the 14050 events stopped.

I can only guess this is an issue with the bespoke permissions/policies that Microsoft has built into the SBS solution. This will keep cropping up as more companies move away from SBS now that M$ has killed it off.

http://social.technet.microsoft.com/Forums/windowsserver/en-US/2b80845a-94de-4fc4-8963-ac8e7b41fca6/server-2008-r2-hyperv-live-migration-did-not-succeed-at-the-source-vmname-failed-to-migrate 

Thursday, 20 June 2013

How to determine the current Active Directory or Exchange Server schema version

To check the forest schema version:

dsquery * cn=schema,cn=configuration,dc=domain,dc=local -scope base -attr objectVersion

The command will return the schema version (the objectVersion attribute):
56 = Windows Server 2012
47 = Windows Server 2008 R2
44 = Windows Server 2008
31 = Windows Server 2003 R2
30 = Windows Server 2003
13 = Windows 2000

To check the domain schema version:

dsquery * cn=ActiveDirectoryUpdate,cn=DomainUpdates,cn=System,dc=domain,dc=local -scope base -attr revision

The command will return the revision of the Active Directory update:
9 = Windows Server 2012
5 = Windows Server 2008 R2
3 = Windows Server 2008


To check the Exchange schema version:

dsquery * CN=ms-Exch-Schema-Version-Pt,cn=schema,cn=configuration,dc=domain,dc=local -scope base -attr rangeUpper
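
If you have the Active Directory PowerShell module (RSAT) available, the same three attributes can also be read with Get-ADObject; a rough equivalent of the dsquery commands above, not part of the original KB:

Import-Module ActiveDirectory
$rootDSE = Get-ADRootDSE
# Forest schema version (objectVersion)
(Get-ADObject $rootDSE.schemaNamingContext -Properties objectVersion).objectVersion
# Domain update revision
(Get-ADObject "CN=ActiveDirectoryUpdate,CN=DomainUpdates,CN=System,$($rootDSE.defaultNamingContext)" -Properties revision).revision
# Exchange schema version (rangeUpper)
(Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$($rootDSE.schemaNamingContext)" -Properties rangeUpper).rangeUpper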



Taken from http://support.microsoft.com/kb/556086

Tuesday, 18 June 2013

Error "Windows cannot access the installation sources. Verify that the installation sources are accessible, and restart the installation"

It's logical to assume that when you build Hyper-V virtual machines you can use the Windows media that came shipped with the hardware, right?

The VM boots from either the ISO or the DVD media fine, but then issues this error:

"Windows cannot access the installation sources. Verify that the installation sources are accessible, and restart the installation"




Luckily I also had a non-OEM copy of Windows Server 2012, which worked a treat!! It looks like you can't use Dell OEM media to build virtual machines.

If you don't have a copy, you will need to download one from the Volume Licensing Service Center https://www.microsoft.com/Licensing/servicecenter/default.aspx or apply for the Windows Server 2012 trial.

This can also be caused by a damaged or corrupt VHD/VHDX.


Tuesday, 11 June 2013

Creating a LAG between Dell PowerConnect 6224 and Cisco 2960-S

Creating a LAG between two switches for redundancy and bandwidth is normally not very complicated. I needed to create a LAG between a stack of Dell PowerConnect 6224s and a single Cisco 2960-S. I had already created a LAG between the 6224 stack and a 3750-X stack without any trouble.

I setup the LAG on the 6224 stack with:

interface range ethernet 1/g24,2/g24
channel-group 24 mode auto

And on the Cisco 2960-S:

Interface range gi1/0/47,gi1/0/48
channel-group 1 mode active

The LAG would be active for a few moments, then one of the 2960-S ports would turn orange with an interface status of 'err-disabled'.

I thought maybe spanning tree was blocking the ports, but this was not the case:

show spanning-tree blocked-ports

To troubleshoot further, I set both channel groups to a static LAG, thinking LACP might not be negotiating correctly:

 Dell 6224 stack:

interface range ethernet 1/g24,2/g24
channel-group 24 mode on

Cisco 2960-S:

Interface range gi1/0/47,gi1/0/48
channel-group 1 mode on

The LAG was still taking a dive :(

Next, EtherChannel debugging was enabled on the Cisco and the LAG ports were bounced:

Cisco 2960-S:

debug etherchannel 
Interface range gi1/0/47,gi1/0/48
shutdown
no shutdown

With debugging enabled we could see that the 2960-S was err-disabling the ports because PAgP could not be negotiated!! We hadn't asked for PAgP in the first place; PAgP is Cisco proprietary and not supported on other vendors' switches.

The solution was to set the channel-group mode on the Cisco to 'passive' (LACP passive) :)

Dell 6224 stack:

interface range ethernet 1/g24,2/g24
channel-group 24 mode auto

Cisco 2960-S:

Interface range gi1/0/47,gi1/0/48
channel-group 1 mode passive

You can check that the LAG is trunking the expected VLANs with this command on the Cisco 2960-S:

show interface trunk
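
It's also worth checking that the port-channel itself has bundled on the Cisco side (an extra verification step, not from the original troubleshooting):

show etherchannel summary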

Inter-vendor switch connectivity issues, eh!! This only seems to be the case with the 2960s, as the 3750s were fine. Thanks to Paul Walton for his troubleshooting.