Well, in the first three parts of this series, we installed the CCR cluster with one active and one passive node and configured the Transport Dumpster. We are pretty much home free, but first we need to do a few things. The first is validating the install and DB seeding:
DB seeding is the process of copying the active node's up-to-date database to the passive node's blank or out-of-date database. Once seeding is complete, log copying keeps the DBs up to date. Seeding happens automatically when the servers are initially installed, so you don't need to do anything but check that the process worked. You can do this by navigating to Server Configuration->Mailbox. You should see "Healthy" under the copy status:
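If you prefer the shell, you can run the same health check from EMS with the Get-StorageGroupCopyStatus cmdlet (the server and storage group names below are the ones used in this series; substitute your own):

Get-StorageGroupCopyStatus -Identity "NYMB01\First Storage Group"

The SummaryCopyStatus in the output should read Healthy.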
So when would you need to reseed? Well, let's say your active node blue screens. You fail over to the passive node with extremely limited interruption to your end users. The problem is that the blue screen has corrupted the Exchange DB on the failed node. You can simply reseed it from the working copy on the good node. We'll simulate this now.
First, let's assume our active node WAS NYCCRNODE2 and it has failed over to NYCCRNODE1. We need to replace NYCCRNODE2's copy of the DB with the working copy on NYCCRNODE1. First, we will need to stop the copying of the database. In EMS, run the following command:
Suspend-StorageGroupCopy -Identity "NYMB01\First Storage Group"
Now, NYCCRNODE2 is no longer receiving updates. Next, delete the files for the First Storage Group on NYCCRNODE2. You will need to stop the Exchange services on NYCCRNODE2 to unlock the files.
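If you'd rather script this than use the Services MMC, the standard Exchange 2007 service names can be stopped from PowerShell on NYCCRNODE2 (a sketch; MSExchangeIS is the Information Store and MSExchangeRepl is the Replication Service):

Stop-Service MSExchangeRepl
Stop-Service MSExchangeIS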
Now we need to create a new DB by seeding, or copying, the database from NYCCRNODE1 to NYCCRNODE2. Run the following command:
Update-StorageGroupCopy -Identity "NYMB01\First Storage Group"
You should receive the following message:
The green bar shows the progress of the database copy. Make sure you restart the Exchange services you stopped earlier. Now check the copy status of the DB; it should read Healthy again.
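If you stopped the services from the shell, restarting them and re-checking the copy status looks like this:

Start-Service MSExchangeRepl
Start-Service MSExchangeIS
Get-StorageGroupCopyStatus -Identity "NYMB01\First Storage Group"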
In SP1, you can also run the Suspend and Update commands by right-clicking the Storage Group in the EMC.
Testing the CCR Cluster:
We can test the cluster with the Move-ClusteredMailboxServer command or with the Exchange Management Console. Be advised, you never want to use Cluster Administrator to move the cluster; always use an Exchange tool for that. First, let's get the status of the cluster with the Get-ClusteredMailboxServerStatus command:
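For reference, the status check in EMS looks like this (NYMB01 being the clustered mailbox server from this series):

Get-ClusteredMailboxServerStatus -Identity NYMB01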
As we can see, the active node is currently NYCCRNODE1. Let's move it to NYCCRNODE2. Your command should look like this:
Move-ClusteredMailboxServer -Identity NYMB01 -TargetMachine NYCCRNODE2 -MoveComment "Testing CCR Move"
After you confirm, run Get-ClusteredMailboxServerStatus again and you should see NYCCRNODE2 as the active node:
Great! The cluster fails over without issue. If your end users are in Cached Mode, they won't even realize the cluster failed over!
Moving the Storage Group and Database Files:
This one is a little tricky to grasp. In all previous versions of Exchange, and in every 2007 configuration except a CCR cluster, you could simply tell the management console to move the Storage Group files or the DB file, and it would do so without issue. Try that with a CCR cluster, and you'll notice two things. First, when you select "Move the Storage Group Path", the browse button for a new location is grayed out:
Then, even if you try to move, you’ll receive the following error:
The error states that you cannot perform this on a remote server (not us) or on a clustered mailbox server (us!) in a CCR environment, and that we should use the -ConfigurationOnly option and move the files manually.
OK, so what just happened? Well, let's think about it. We have a cluster with two separate sets of Exchange files, but only one set of paths configured in Active Directory. Using ADSIEDIT, drill down to our storage group object located in:
Configuration -> Services -> Microsoft Exchange -> Organization Name -> Administrative Groups -> Exchange Administrative Group (FYDIBOHF23SPDLT) -> Servers -> NYMB01 -> InformationStore -> First Storage Group
Right click on the First Storage Group object and select Properties, then find the attribute named msExchESEParamSystemPath. This tells us where Active Directory thinks the files are:
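You don't have to break out ADSIEDIT to read this; the same Active Directory paths are surfaced in EMS by Get-StorageGroup:

Get-StorageGroup "NYMB01\First Storage Group" | Format-List Name,SystemFolderPath,LogFolderPath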
This is why the paths on each node HAVE TO BE THE SAME! So, what are we to do? We follow the error's suggestion and use the -ConfigurationOnly parameter. This updates Active Directory with the new path we INTEND to keep the files on. We will then have to move the files ourselves with good old-fashioned copy and paste. Just like all moves, this will make the DBs inaccessible. Unfortunately, having the cluster doesn't help here, as both nodes will have to go offline. That is why it is better to do this during the initial install, when there are no users on the server.
So, we want to do the following move:
Storage Group Files
"C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" -> "E:\Storage Group Files"
Database File
"C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group\Mailbox Database.edb" -> "E:\DB Files"
First, suspend replication of the DB:
Suspend-StorageGroupCopy -Identity "NYMB01\First Storage Group"
Next, dismount the database:
Dismount-Database -Identity "NYMB01\First Storage Group\Mailbox Database"
Now change the path of the files in AD. Remember, we need to use the -ConfigurationOnly option:
Move-StorageGroupPath -Identity "NYMB01\First Storage Group" -LogFolderPath "E:\Storage Group Files" -SystemFolderPath "E:\Storage Group Files" -ConfigurationOnly
The LogFolderPath is the location of the transaction logs, and the SystemFolderPath is the location of the checkpoint file.
Now, on both nodes, manually copy the files to their new locations. You are literally copying EVERY file and folder except the file with the .edb extension. If we check the same value in ADSIEDIT:
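If you'd rather not copy and paste by hand, robocopy (built into Server 2008 and available for Server 2003 in the Resource Kit) can do the copy while excluding the EDB; a sketch using this article's paths:

robocopy "C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "E:\Storage Group Files" /E /XF *.edb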
It's been changed. Now, do the same for the Exchange database file. You have to move the EDB file PRIOR to running this command, because you have to specify the file itself and the EMS checks for it on the active node.
Move-DatabasePath -Identity "NYMB01\First Storage Group\Mailbox Database" -EdbFilePath "E:\DB Files\Mailbox Database.edb" -ConfigurationOnly
Now that all the files are moved on BOTH nodes, we can remount the database and resume the copy.
To mount: Mount-Database -Identity "NYMB01\First Storage Group\Mailbox Database"
To resume the copy: Resume-StorageGroupCopy -Identity "NYMB01\First Storage Group"
And your files have now been moved successfully. As you can see, there is a little more work with a CCR cluster than with other Mailbox server role installs of 2007. Repeat the above steps for any remaining DBs and Storage Groups you have.
Updating the Cluster:
This one is actually really easy when you think about it. Let’s say it’s Patch Tuesday, and you have a whole slew of patches for the Base OS, as well as Exchange 2007 to install. There is a certain process you should follow with a cluster.
***Microsoft Update, the update client that downloads updates for ALL Microsoft products and not just Windows, will NOT recognize the nodes of a cluster as having Exchange 2007 installed. This means you need to download the Exchange 2007 updates manually and apply them.***
The process is simple: always update the passive node. Install all your updates on the passive node, finish all the reboots, and ensure everything is working. Next, move the CMS over to the freshly updated node with the Move-ClusteredMailboxServer command. Now, update the node you just moved the CMS off of: install all updates, finish the reboots, and ensure everything is working. See, easy!
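For example, with NYCCRNODE2 currently passive in our lab: patch and reboot NYCCRNODE2, verify it is healthy, then move the CMS onto it:

Move-ClusteredMailboxServer -Identity NYMB01 -TargetMachine NYCCRNODE2 -MoveComment "Patching NYCCRNODE1"

Then patch and reboot NYCCRNODE1, verify, and optionally move the CMS back when you are done.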
A word of advice: make sure that each node is at the same patch level for both the OS and Exchange.
So, it took us four articles to get here, but we made it. We have covered the theory behind a CCR cluster and how it can give a business Enterprise-level protection without a costly investment such as a SAN, the installation of the cluster and Exchange 2007, and the configuration and testing of your cluster.