Rise Foundation


Posts Tagged ‘Exchange 2010’

Office 365 – The Magic Behind The Hybrid Config Wizard (2010)

Configuring Exchange hybrid prior to the Hybrid Configuration Wizard (HCW) is just a distant memory at this point. What was a painfully long configuration process was greatly simplified with the release of the HCW in SP2 for Exchange 2010 back in December 2011.

As much as the HCW has made my job easier, I’m always a bit hesitant about black-box processes. From an early age, I’ve been one who needed to know how things work under the hood.

So what does the wizard do?

What does it change?

What is the impact?

If you’ve ever submitted a change control request stating that you’re going to “run the hybrid wizard”, you’ve probably been asked these same questions.

For those that are implementing Exchange hybrid on a regular basis, what the wizard does should not be a mystery at this point. If you’re new to Exchange hybrid, I’ve outlined below the individual commands run by the wizard and areas where there might be potential risk.

Exchange 2013 Site Mailboxes coexisting with Exchange 2010

With the introduction of Exchange 2013 and SharePoint 2013 came a new feature called Site Mailboxes, which brings documents and emails together for team collaboration in Outlook 2013 and SharePoint 2013.  Recently I’ve been working with a customer who had a specific business need for Site Mailboxes as part of a SharePoint 2013 document management project.

As part of the project, I introduced two multi-role Exchange 2013 servers into their existing Exchange 2010 organization and configured them to use a new namespace, separate from the one Exchange 2010 was using for accessing OWA, ActiveSync, etc.  After following guidance from Microsoft to configure Site Mailboxes and enabling the Site Mailbox app within the SharePoint 2013 site, I proceeded to set up the first Site Mailbox and was promptly presented with the following error:

“Site Mailboxes are one of the many new features offered in Microsoft Exchange Server 2013.  At the moment, the Site Mailbox app has been configured by your administrator to connect to an older version of Exchange Server.  Please contact your administrator if this message persists”.  Correlation ID:  b8e83c9c-4a6d-c06a05371026dba9b88018, Error Code 2.


The error message seemed a bit odd to me, as Exchange 2013 was configured with a separate namespace for client connectivity from the one used for Exchange 2010.  The SharePoint 2013 Site Mailbox configuration was done using the following command, where ExchangeAutodiscoverDomain was the URL being used for client connectivity to Exchange 2013:

.\Set-SiteMailboxConfig.ps1 -ExchangeSiteMailboxDomain -ExchangeAutodiscoverDomain -WebApplicationUrl
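The parameter values were omitted above; for illustration, a fully populated invocation might look like the sketch below. All of the domain names and URLs are hypothetical placeholders, so substitute the namespaces actually used for Exchange 2013 client connectivity and your SharePoint web application.

```powershell
# Run from the SharePoint 2013 Management Shell on a SharePoint server.
# All values below are hypothetical placeholders.
.\Set-SiteMailboxConfig.ps1 `
    -ExchangeSiteMailboxDomain contoso.com `
    -ExchangeAutodiscoverDomain mail2013.contoso.com `
    -WebApplicationUrl https://sharepoint.contoso.com
```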

Copying distribution groups to cloud for Outlook/OWA management

While directory sync provides a much-needed service for Office 365 tenants, one pain point that comes up pretty regularly is distribution group management once you’re in the cloud. Sure, the groups get synced to the cloud, but if you’ve been used to managing group memberships with Outlook while everyone’s mailbox was on-premise, once you move your mailbox to the cloud you won’t be able to do that anymore. This is because the object is synchronized from your local AD, so you must make changes to the group in Active Directory and let dirsync bring those changes to the cloud. If you have a hybrid server or a local Exchange environment, you could use it to manage the membership, but most likely you’re not going to allow users to access the EMC. You could also create your own application that allows users to edit groups in your local AD, but honestly, who wants to spend development time doing that?

So what other options are there? The only way is to recreate each group directly in the cloud. What if you have hundreds or thousands of groups, with thousands of members? I know, it doesn’t sound like this would be any fun at all, and it’s not. You can automate the process using PowerShell and maybe some simple Excel skills. I like keeping things organized, so I use Excel to prepare input files for my bulk PowerShell operations. For this particular task, I got a list of the existing distribution groups from my on-premise Exchange environment with a few attributes that let me bind to the AD object and leverage other attributes in my script; at a minimum, grab the displayName, mail, and mailNickname. In Excel, I then used this information to create the new displayName, mail, and mailNickname for the cloud-based distribution groups. To show you what I mean, here’s an example input file (CSV) for my script.

oldgroupDisplayname,oldgroupMail,oldgroupAlias,newgroupDisplayname,newgroupMail,newgroupAlias
DGroup1,dgroup1,dgroup1,Cloud Group1,cloudgroup1,cloudgroup1


Now for the very simple script to connect to the cloud, create the new group and populate the membership based on the existing group:
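A minimal sketch of such a script follows, assuming the input file above is saved as groups.csv, that the Office 365 session has been imported with the “Cloud” cmdlet prefix, and that the on-premise session has no prefix. The file name and column names are taken from the example CSV; everything else is an assumption about your environment.

```powershell
# Sketch only - assumes an Office 365 remote PowerShell session imported
# with -Prefix Cloud and an on-premise Exchange session with no prefix.
$groups = Import-Csv .\groups.csv

foreach ($g in $groups) {
    # Create the new cloud distribution group.
    New-CloudDistributionGroup -Name $g.newgroupDisplayname `
        -DisplayName $g.newgroupDisplayname `
        -Alias $g.newgroupAlias `
        -PrimarySmtpAddress $g.newgroupMail

    # Read the membership of the existing on-premise group and add each
    # member to the new cloud group by primary SMTP address.
    Get-DistributionGroupMember -Identity $g.oldgroupMail | ForEach-Object {
        Add-CloudDistributionGroupMember -Identity $g.newgroupAlias `
            -Member $_.PrimarySmtpAddress
    }
}
```

This could run slowly against large groups; batching or throttling may be needed in practice.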

Note the prefix (“Cloud”) that I used in the example. This simply means the cmdlets from that session are prefixed with “Cloud” (i.e. Get-CloudMailbox instead of Get-Mailbox). Using a prefix allows me to use multiple remote PowerShell sessions, one against the cloud and one against the on-premise Exchange environment, so I can keep track of which objects I’m updating. This script could easily be expanded to configure other settings on the new cloud distribution group and to duplicate other settings from the on-premise group, like the manager, proxyAddresses, group opt-in/opt-out settings, etc.
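For reference, setting up the two sessions with a cmdlet prefix might look like the following sketch; the credentials and the on-premise server name (EX01.contoso.local) are placeholders.

```powershell
# Office 365 session, imported with the "Cloud" prefix so its cmdlets
# become Get-CloudMailbox, New-CloudDistributionGroup, etc.
$cred  = Get-Credential
$cloud = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://ps.outlook.com/powershell/ `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $cloud -Prefix Cloud

# On-premise session, imported without a prefix (server name is a
# placeholder).
$onprem = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://EX01.contoso.local/PowerShell/
Import-PSSession $onprem
```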

I hope this proves useful for someone out there faced with the same challenge.

Lync 2010 integration with Exchange Online Hosted Voicemail

I recently came across an issue when one of my peers in the enterprise voice group helped integrate a customer’s existing on-premise Lync 2010 environment with hosted voicemail in the cloud. We followed the recommended steps outlined in a series of TechNet articles and other publications, and everything seemed to work fine with a few test accounts. However, during a small pilot we discovered that callers dialing our pilot users’ extensions were getting fast busy signals. Everything worked fine for the mail migration, and we were using an Exchange 2010 hybrid server. After some searching, we uncovered the root cause of our problem.

One of the two Lync commands we ran after someone’s UM-enabled mailbox was migrated to the cloud was this:


Set-CsUser -Identity “John Doe” -HostedVoiceMail $True


The AD attribute that gets updated is msExchUCVoiceMailSettings. When run from the Lync server, the value is set to CsHostedVoiceMail=1. Our scripts were setting this correctly, but each time dirsync ran, the value would get wiped out. It turns out this is one of the write-back attributes that gets enabled when you configure dirsync for rich coexistence using a hybrid server. The attribute is a one-way sync from the cloud to on-premise, so each dirsync run reset our value to null, because that was the value in the Metaverse on the dirsync server.

The other strange thing is that our test accounts worked, but their value for this AD attribute was ExchangeHostedVoiceMail=1, since the cloud set this value after we migrated the UM-enabled mailbox. Both values work fine and allow voicemails to be forwarded to the cloud mailbox. The problem is that there is a significant delay before this attribute actually gets stamped on the cloud account; I timed it during our next migration, and it took about 90 minutes for the cloud mailbox to have the attribute set. Needless to say, this was an unacceptable scenario for my client, since they heavily used Lync and voicemail.
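A quick way to watch what dirsync is doing to this attribute is to query it directly in AD. A sketch using the ActiveDirectory module follows; the sAMAccountName “jdoe” is a placeholder.

```powershell
# Requires the ActiveDirectory module (RSAT). "jdoe" is a placeholder.
Import-Module ActiveDirectory

Get-ADUser -Identity jdoe -Properties msExchUCVoiceMailSettings |
    Select-Object Name, msExchUCVoiceMailSettings
# Expect CsHostedVoiceMail=1 (set by Lync) or ExchangeHostedVoiceMail=1
# (stamped by the cloud); a null value here means dirsync has cleared it.
```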

The workaround was actually simple. Since dirsync was the one overwriting the local AD attributes and was taking so long that we couldn’t rely on the provisioning process to immediately update it, I decided to take dirsync out of the mix for that particular attribute. To do this I edited the SourceAD Management Agent’s properties and removed the attribute from the Attribute Flow section as shown in the image. After that I was able to use Lync to set the local AD attribute and forward voicemail immediately to the cloud mailbox without waiting for dirsync to catch up.

The downside, however, is that it does leave you in a technically unsupported dirsync scenario, and if you were to reinstall dirsync, you would need to reapply the change if you want this attribute set without delay. This minor surgery on dirsync is probably fine while you’re doing migrations, but if you can normally wait the 90 minutes or so for newly created cloud accounts (i.e. new hires), then it’s probably best to leave dirsync alone and let it do its job.

Note: Use this solution at your own discretion.


Using Exchange 2010 Native Data Protection

Recently I had the pleasure of working with a customer who decided to eliminate backups within their Exchange organization.  They were upgrading from Exchange 2007 to Exchange 2010 and wanted to take advantage of many of the new features Exchange 2010 had to offer, such as larger mailbox databases and cheaper storage.  The customer was increasing mailbox quotas for approximately 12,000 users from 120-200MB to 2-3GB, with a handful of users having unlimited mailbox size limits.  There was also a subset of approximately 19,000 users with 35MB mailbox quotas, which would be increased to 75MB in Exchange 2010.  Their email retention time was 180 days, but they were reviewing that with their legal department and possibly reducing it to 30 days.  While going through their design session, we came upon the topic of backups and what they planned to do going forward with Exchange 2010 while increasing the mailbox size limits.  After running through the Exchange 2010 Mailbox Server Role Requirements Calculator with the Messaging and Storage teams, the amount of storage they would have to purchase for their backup system was simply not going to be feasible from a budgetary standpoint.

After numerous discussions about Exchange 2010 backups, storage, and database copies, the customer decided to explore the idea of using Exchange Native Data Protection and eliminating backups completely from their environment.  With the guidance of the TechNet documentation on the subject, the choice was clear that by going to Exchange 2010, traditional backups would no longer be necessary.  We were able to meet many of the following considerations for using Exchange Native Data Protection, which allowed the customer to meet their business and technical requirements for upgrading to Exchange 2010.

1.  Your recovery time objective and recovery point objective goals should be clearly defined, and you should establish that using a combined set of built-in features in lieu of traditional backups enables you to meet these goals.

  • Resolution:  In working with the customer’s legal department, the decision was made to reduce email retention to 30 days for all users. We entered a 30-day retention time into the Exchange 2010 Mailbox Server Role Requirements Calculator, in combination with Single Item Recovery.
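A 30-day window with Single Item Recovery is applied per database and per mailbox; a sketch follows, where the database name DB01 and the mailbox “jdoe” are placeholders.

```powershell
# Keep deleted items recoverable for 30 days on the database (DB01 is a
# placeholder name).
Set-MailboxDatabase -Identity DB01 -DeletedItemRetention 30.00:00:00

# Enable Single Item Recovery for a mailbox so purged items are also
# retained for the full window ("jdoe" is a placeholder).
Set-Mailbox -Identity jdoe -SingleItemRecoveryEnabled $true `
    -RetainDeletedItemsFor 30.00:00:00
```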

2.  You should determine how many copies of each database are needed to cover the various failure scenarios against which your system is designed to protect.

  • Resolution:  The customer decided on 3 copies of each database spread across two datacenters in an Active/Active configuration. The customer had F5 GTM and LTM load balancers, a very high-speed connection between the datacenters, and multiple backup connections with different carriers in case the primary WAN link went down.
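Adding the additional copies is a per-server operation; a sketch follows, where DB01 and the MBX server names are placeholders and DB01 already has an active copy on MBX01.

```powershell
# Add two more copies of DB01, one in each datacenter (all names are
# placeholders). ActivationPreference orders which copy is favored
# during failover.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX02 -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX03 -ActivationPreference 3

# Verify copy health and replication status.
Get-MailboxDatabaseCopyStatus -Identity DB01
```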

3.  Can you afford to lose a point-in-time copy if the DAG member hosting the copy experiences a failure that affects the copy or the integrity of the copy?

  • Resolution:  Because there are 3 copies of each database and a very high-speed connection between Datacenters this will not be an issue in this specific deployment.

4.  Exchange 2010 allows you to deploy larger mailboxes, and the recommended maximum mailbox database size has been increased from 200 gigabytes (GB) in Exchange 2007 to 2 terabytes (when two or more highly available mailbox database copies are being used). Based on the larger mailboxes that most organizations are likely to deploy, what will your recovery point objective be if you have to replay a large number of log files when activating a database copy or a lagged database copy?

  • Resolution:  Again, for this customer this was a moot point because of their datacenter configuration.  However, we did deploy a single lagged copy with a 7-day replay lag for around 150 executive-level users, to cover the very rare event of logical corruption; see the following point.
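A lagged copy is just a database copy with a replay lag configured; a sketch follows, with the database and server names as placeholders.

```powershell
# Add a lagged copy of the executives' database with a 7-day replay lag
# (names are placeholders). Log files are held for 7 days before being
# replayed into this copy, giving a window to act on logical corruption.
Add-MailboxDatabaseCopy -Identity EXEC-DB01 -MailboxServer MBX03 `
    -ReplayLagTime 7.00:00:00 -ActivationPreference 3
```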

5.  How will you detect and prevent logical corruption in an active database copy from replicating to the passive copies of the database? What is your recovery plan for this situation? How frequently has this scenario occurred in the past? If logical corruption occurs frequently in your organization, we recommend that you factor that scenario into your design by using one or more lagged copies, with a sufficient replay lag window to allow you to detect and act on logical corruption when it occurs, but before that corruption is replicated to other database copies.

  • Resolution:  After speaking with a number of my colleagues as well as a few folks at Microsoft, logical corruption is very, very unlikely to happen given the changes in disk technology since the release of Exchange 5.5.  Ever since Exchange 2000/2003, Microsoft has built in a number of safeguards that help prevent logical corruption.

Every customer environment is different, and careful planning and understanding of both the business and technical requirements are crucial.  By moving to Exchange 2010, this customer was able to reduce costs and improve the performance of their messaging environment in a number of ways.  First, by using multi-role servers we were able to reduce the number of servers deployed in the environment from 15 down to 10.  Second, all of their storage was low-cost, high-capacity, direct attached storage.  And third, by implementing multiple database copies along with deleted item retention and single item recovery, traditional backups and the disk space needed to back up larger mailboxes were eliminated.

Token size affecting Free/Busy information between 2007 and 2010

I recently encountered an interesting issue with Token Sizes that I haven’t seen before with regards to Free/Busy information between Exchange 2007 and Exchange 2010 during an Intra-Org upgrade.

A little background on the environment: the customer I was working with was upgrading from Exchange 2007 to Exchange 2010 and had approximately 25,000 mailboxes.  There were also many nested security groups within the environment applied to user accounts, primarily within the IT department.  During the pilot phase with IT users, Exchange 2007 users reported that Free/Busy information was not working for users whose mailboxes had been moved to Exchange 2010.  What was strange was that we were unable to reproduce the issue with test accounts or find anything logged in the Event Logs on either Exchange 2007 or 2010, even after diagnostic logging was increased.

During further troubleshooting with the customer, we stumbled upon a KB article describing exactly the issue we were encountering.  After meeting with the customer’s Active Directory team, they stated that they had recently seen another application deployment where token size impacted that project as well.  Since there are many documented Free/Busy issues with Exchange, this specific KB article did not initially come up in any online searches.

After following the steps outlined in the Workaround section of the KB article, the Free/Busy issues were completely resolved.  A few notes on the KB article: we did not run the PowerShell scripts specified in the article, as there was no test environment to run them against before running them in production.  We also did not have to modify the registry keys on client workstations, because the customer had already increased MaxTokenSize due to an issue they had with another system.
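For reference, MaxTokenSize lives under the Kerberos parameters registry key. A hedged sketch of raising it on a machine follows; the value 48000 is an assumption based on a commonly recommended setting of that era, so check the KB article’s guidance before deploying.

```powershell
# Sketch only - raises the Kerberos MaxTokenSize on the local machine.
# Requires elevation; a reboot is needed for the change to take effect.
# The value 48000 is an assumed, commonly recommended setting.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters'
New-ItemProperty -Path $key -Name MaxTokenSize -PropertyType DWord `
    -Value 48000 -Force
```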

I’ll be curious to see if this specific Free/Busy issue still occurs in Exchange 2013 with large environments upgrading from either Exchange 2007 or Exchange 2010.

Exchange 2010 – Is The Upgrade Worth The Effort?

I have written several posts providing insight into some technical challenges that I have encountered while helping my customers upgrade to Exchange 2010. Upgrading any technology within a computing environment requires time and effort, and with some of the technical challenges I’ve already documented, some of you may be wondering if upgrading your Exchange environment is worth the effort. For this post, I thought I’d provide some insight into why my customers have been upgrading to Exchange 2010 rather than discussing technical challenges encountered during the upgrade process.

Electronic Messaging Requirements

Whenever I begin an Exchange 2010 Design Session with a customer, I always start by helping my customers define and understand their electronic messaging requirements. While it can sometimes be a challenging exercise, this is an essential part of the design process that cannot be overlooked because the requirements drive the overall design process. As I often tell my customers, once the messaging requirements are understood, the high level architecture practically designs itself. In many instances, I have found that my customers’ electronic messaging requirements have changed since their last deployment or upgrade.

For many of my customers, in years past, email was not considered a mission critical application. My how times have changed. I now find that the majority of my customers classify email as a mission critical application to their business. As a result, they are making investments in the underlying architecture to ensure that their email infrastructure meets their mission critical application standards. For many customers, this typically requires redundant hardware and redundant copies of email data. For my customers who have found themselves in a situation where their email requirements have changed, upgrading to Exchange 2010 has enabled them to deploy a new Exchange architecture to meet their organization’s evolving needs. For many of them, what was an adequate email solution 5 years ago no longer meets their organization’s needs. In several cases, re-evaluating messaging requirements and upgrading to Exchange 2010 were essential to mitigate risk to the organization.

Hardware Redundancy and End User Transparency

As mentioned above, many companies require redundant hardware for their mission critical applications; and many companies are now classifying email as a mission critical application. Microsoft has provided the capability to support redundant Exchange servers since Exchange 5.5, but with each new version of Exchange, the ability to support redundant servers and the reliability of such solutions has improved dramatically. As a consultant responsible for deploying highly available solutions, I take personal responsibility for the solutions that I deploy, and I can honestly say that, when designed and deployed correctly, I have watched customers reboot Exchange 2010 servers in production with the reboot completely transparent to end users.

Previous versions of Exchange provided automated and manual failover capabilities, but none of the previous solutions ever provided 100% transparency to end users when failovers occurred. Granted, the end user impact could be minimized to a popup notification in the end users’ System Tray or an informational Outlook dialogue box. Nonetheless, when an Exchange Server failover occurred, someone within the end user community would notice. For many of my customers, organizational requirements dictate 100% transparency to end users when performing automated or manual failovers within the email environment. Anything less than 100% transparency requires scheduled downtime sometimes weeks in advance. When designed and deployed correctly using CAS Arrays, load balancers, Database Availability Groups, and the latest version of Outlook, Exchange 2010 does provide 100% transparency to the end user community when servers go down expectedly or unexpectedly. This capability alone has been reason enough for some of my customers to upgrade to Exchange 2010.

Email Data Redundancy

While previous versions of Exchange supported clustering to provide hardware redundancy, all versions (with the exception of CCR and SCR in Exchange 2007) provided only a single copy of each Exchange database. Over the years I have had customers experience corruption within their Exchange databases, usually caused by a hardware failure or by an application or administrator removing Exchange log files incorrectly. Whatever the cause, database corruption can happen, and having a duplicate copy of a corrupted database can be a real life saver.

For years, some third party products have been able to duplicate Exchange Server databases from one server to another. Some could even replicate data between datacenters located in different geographic locations. This capability is now built into Exchange 2010 via Database Availability Groups. Having this capability built directly into the product is a tremendous benefit for a couple reasons. First and foremost, the solution uses native Microsoft technology to replicate data. As a result, data replication is fully documented and supported by Microsoft. Also, third party applications are no longer required to provide database redundancy. This creates a less complex and less expensive solution for organizations that require data redundancy. The ability to use native Microsoft technology to replicate Exchange databases to multiple datacenters has been another reason many of my customers have upgraded to Exchange 2010.


Virtualization Support

Even though it was never officially supported by Microsoft, I have had several customers run Exchange 2007 or even Exchange 2003 in virtualized environments. My customers did this at their own risk, and I did my best to support them when they ran into issues. It was a very happy day for many of my customers when Microsoft announced that Exchange 2010 would be supported on virtualized hardware. Simply being supported virtually was reason enough for some of my customers to upgrade their Exchange environments to 2010.


Since its release, I have helped numerous customers deploy or upgrade their Exchange environments to Exchange 2010. Each deployment has presented some challenges, but every deployment has been successful. If you’re considering an Exchange 2010 upgrade, hopefully you find the rationale behind some of my customers’ reasons for their upgrades helpful in your decision-making process.

Working With Exchange 2010 Recovery Databases


Brien Posey has an excellent post explaining the steps required to mount a recovery database in Exchange 2010; his post can be found here. While assisting one of my customers with restoring email data using a recovery database, I learned some lessons that I thought I’d share. In this instance, my customer needed to restore data from a backup created with NetBackup. The customer initially tried to use NetBackup’s granular restore technology to recover the data in question, but they were unsuccessful. I suggested restoring the entire database to an Exchange 2010 Recovery Database, and that’s when I realized that documenting the steps we followed might prove helpful to others.

Restore the Database

It’s important to note that since we couldn’t make use of NetBackup’s granular restore technology, our only remaining option was to restore the entire database file and then find a way to mount it as an Exchange 2010 Recovery Database. NetBackup’s high-level instructions for using an Exchange 2010 Recovery Database are similar to the following:

  1. Use Exchange 2010 Management Shell to create a Recovery Database
  2. Leave the Recovery Database dismounted
  3. Use NetBackup to perform the database restore

NetBackup’s instructions for performing restores using granular restore technology are much more thorough than their instructions for using Exchange 2010 Recovery Databases, so some trial and error was required in order to get to our data. Initially, I created a recovery database using the following PowerShell command:

New-MailboxDatabase -Recovery -Name "RecoveryDatabase" -Server MailServer1 -EdbFilePath "D:\RecoveryDatabase\Database\RecoveryDatabase.edb" -LogFolderPath "D:\RecoveryDatabase\Logs"

Then we used NetBackup to restore the database file. I then went to mount the recovery database and received the following message:

At least one of this store’s database files is missing. Mounting the store will force the creation of an empty database.

The dialogue box containing the error is below:

This seemed strange to me until I realized what was actually happening. The database we restored using NetBackup was named DB2.edb, yet the database I created with the PowerShell command noted above was RecoveryDatabase.edb. Exchange was behaving normally and was simply informing me that it was about to mount the empty database I had created. To validate my suspicion, I mounted the database and, sure enough, Exchange generated all the necessary log files and created an empty Exchange database called RecoveryDatabase.edb. So while I was able to get the recovery database mounted, this got us no closer to our goal of mounting a recovery database that actually contains data.

Lesson Learned: When using PowerShell to create your recovery database, make sure to create your database using the exact name of the database you need to restore. Also, make sure you complete this step before restoring the database file using NetBackup.

The following is the PowerShell command I should have used initially – notice the name of the .edb file generated using the –EdbFilePath switch:

New-MailboxDatabase -Recovery -Name "RecoveryDatabase" -Server MailServer1 -EdbFilePath "D:\RecoveryDatabase\Database\DB2.edb" -LogFolderPath "D:\RecoveryDatabase\Logs"

After the Recovery Database is successfully created using the exact name of the actual .edb file that you need to restore, use NetBackup to restore the database file into the Recovery Database location that you just created.

Mount the Database

After creating the recovery database with the proper –EdbFilePath switch, I still couldn’t mount the database because it was in a “dirty shutdown” state. I typed the following command to validate the database’s state:

eseutil /mh DB2.edb

The command above returns a lot of data; I snipped the relevant portion of the output and placed it below:

The recovery database won’t mount until its state is listed as ‘Clean Shutdown’. With no log files, performing a soft recovery was not possible, so I had to resort to our old friend, eseutil. I ran the following eseutil command to perform a hard repair of the restored database:

eseutil /p DB2.edb /t:D:\RecoveryDatabase\Database\tmpRepair.edb

Note that I always use the /t parameter to specify the location of the temporary database that gets created when using eseutil with the /p parameter. Remember that eseutil /p creates another copy of the database you are running the command against, so you will need adequate disk space for the command to complete successfully. For example, if your database is 100GB, you will need roughly another 110GB of free disk space. This is because eseutil sequentially writes data to a new database, deletes the old database, renames the new database with the name of the old one, and places it in the same file location. Also note that there is no space between /t: and the directory location; that’s not a typo. The /t parameter has worked that way since the first time I used eseutil on an Exchange 5.5 database.

One last thing about eseutil: if you fail to specify a temporary directory when using the /p parameter, eseutil will by default place the temporary database file on the C: drive of your mailbox server. This can have disastrous ramifications if you run out of disk space on the C: drive, so make sure you take disk space into consideration before using eseutil with the /p parameter.
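A quick pre-flight check before running eseutil /p might look like the following sketch; the drive letter and database path are placeholders for wherever your restored database and temp location live.

```powershell
# Sketch: make sure the temp location has more free space than the
# database itself before running eseutil /p (the drive letter and
# path are placeholders).
$db    = Get-Item 'D:\RecoveryDatabase\Database\DB2.edb'
$drive = Get-PSDrive -Name D

if ($drive.Free -lt ($db.Length * 1.1)) {
    Write-Warning "Not enough free space on D: for the eseutil /p temp database."
}
```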

My database was 150GB, so I was a little concerned about how long the repair process would take. Back in the day (Exchange 2000 and 2003), eseutil could process about 4GB/hour. I have seen published statistics stating that the version of eseutil built into Exchange 2010 can process approximately 45GB/hour. My repair completed in about 2.5 hours, so the eseutil statistics I’ve seen seem about right. Once complete, I ran the following command again to validate the state of the database:

eseutil /mh DB2.edb

This time the state of my database changed to ‘Clean Shutdown‘ as you can see below:

Now that my recovery database was in a clean shutdown state, I was able to proceed with mounting it.
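Mounting and confirming the recovery database is a one-liner each, using the database name created earlier:

```powershell
# Mount the recovery database and confirm its state. The Recovery
# property distinguishes it from production databases.
Mount-Database -Identity RecoveryDatabase
Get-MailboxDatabase -Identity RecoveryDatabase -Status |
    Select-Object Name, Mounted, Recovery
```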

Validate Database Contains Relevant Data

After mounting the recovery database, I then entered the following command to validate that the user account data I needed was there:

Get-MailboxStatistics -Database RecoveryDatabase

To refine your search when validating your database content, try filtering such as in the following:

Get-MailboxStatistics -Database RecoveryDatabase | Where {$_.DisplayName -eq "1234, Test"}

The above command returned the following data:

Restore Email Data

For this recovery, my customer wanted the contents of the restored data exported to a PST file. Alas, if only it were possible to export mailbox content from an Exchange 2010 Recovery Database directly to a PST file; sadly, it is not. Accomplishing this goal requires a two-step process. First, the mailbox data in the recovery database must be restored into a production mailbox. Then the data can be exported from that mailbox to a PST file. To accomplish the first step, I simply created a temporary mailbox for this purpose and executed the following command:

Restore-Mailbox -RecoveryMailbox "1234, test" -Identity ex2010 -RecoveryDatabase RecoveryDatabase -TargetFolder "RecoveredItems" -StartDate 1/1/2010 -EndDate 12/31/2010

A few things to note about the command above:

First off, the source and target mailboxes can get a little confusing. Think of the -RecoveryMailbox mailbox as the source, and the -Identity mailbox as the target. When running my command, the contents of the “1234, test” mailbox residing in the recovery database were restored to a subfolder called RecoveredItems in the mailbox called ex2010.

Second, the -RecoveryMailbox switch only accepts the Display Name of your source mailbox. Using the Exchange alias will not work. If you try to use anything other than the Display Name, you will receive an error similar to the following:

Mailbox “test1234” doesn’t exist on database “RecoveryDatabase”

Believe me, this will frustrate you, because Get-MailboxStatistics will show you that the mailbox definitely exists. Microsoft does not document very well that the -RecoveryMailbox switch only accepts the Display Name, so hopefully this will save you some time.

Finally, the -TargetFolder, -StartDate, and -EndDate parameters are all optional, so you do not need them in all cases.

After my data was restored to my test mailbox, I used the following command to export my data to a PST file:

New-MailboxExportRequest -Mailbox ex2010 -FilePath \\Server1\PSTData\Email.pst

Remember that in order for the export to complete successfully, you must write the PST to a UNC network share (a local path will be rejected), and the Exchange Trusted Subsystem group must have read/write permission on that share.
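Putting the whole two-step recovery together, here is a hedged end-to-end sketch; the server, path, database, and share names are all placeholders for your own environment:

```powershell
# 1. Create and mount a recovery database from the restored EDB and log files
New-MailboxDatabase -Recovery -Name RecoveryDatabase -Server MBX01 `
    -EdbFilePath "D:\Recovery\Database.edb" -LogFolderPath "D:\Recovery\Logs"
Mount-Database RecoveryDatabase

# 2. Restore the mailbox contents into a temporary production mailbox
Restore-Mailbox -RecoveryMailbox "1234, test" -Identity ex2010 `
    -RecoveryDatabase RecoveryDatabase -TargetFolder "RecoveredItems"

# 3. Export the production mailbox to a PST on a UNC share
New-MailboxExportRequest -Mailbox ex2010 -FilePath \\Server1\PSTData\Email.pst

# 4. Watch the export until it reports Completed
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
```

Note that the database must be in a clean-shutdown state (run ESEUTIL if necessary) before it will mount as a recovery database.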

Exchange 2010 Mailbox Moves Require NetBIOS Name Resolution

During a recent Exchange 2010 deployment, my customer was experiencing three different Exchange 2010 issues that were all solved with the same fix.

Issue #1:

Customer was unable to move mailboxes from one database to another on the same Mailbox Server or between Mailbox Servers. The following error was returned when attempting to move mailboxes:

MapiExceptionNetworkError: Unable to make connection to the server. (hr=0x80004005, ec=2423)

A screenshot of the error follows:

Issue #2:

Customer was seeing the following error in their Application Event Log on each Exchange 2010 CAS Server:

The Microsoft Exchange Mailbox Replication service was unable to process jobs in a mailbox database. Error: MapiExceptionNetworkError: Unable to make connection to the server. (hr=0x80004005, ec=2423)

A screenshot of the error follows:

Issue #3:

Unity Connection was not able to place voicemail messages into an Exchange 2010 mailbox. Unity Connection uses EWS (Exchange Web Services) to perform this function, and it was unable to issue the EWS Subscribe request required for this functionality to work correctly. To troubleshoot the issue, I downloaded a tool from Microsoft called EWSEditor, which can be found here. After using EWSEditor to connect to the Exchange 2010 CAS Servers, the following error was returned:

The mailbox database is temporarily unavailable. Microsoft.Exchange.WebServices.Data.ExchangeService.InternalSubscribeToPullNotification

A screenshot of the error follows:
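For what it's worth, the same pull-notification subscription that EWSEditor (and Unity Connection) attempts can be reproduced directly from the shell with the EWS Managed API. The DLL path, EWS URL, and version below are assumptions; adjust them to your environment:

```powershell
# Load the EWS Managed API (path/version are assumptions for your install)
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll"

$svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
$svc.UseDefaultCredentials = $true
$svc.Url = New-Object Uri("https://mail.contoso.com/EWS/Exchange.asmx")

# Attempt the same Subscribe request Unity Connection needs:
# watch the Inbox for NewMail events with a 30-minute timeout, no watermark
$inbox = New-Object Microsoft.Exchange.WebServices.Data.FolderId([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox)
$subscription = $svc.SubscribeToPullNotifications(@($inbox), 30, $null, [Microsoft.Exchange.WebServices.Data.EventType]::NewMail)
```

If the environment has the name-resolution problem described below, this call fails with the same "mailbox database is temporarily unavailable" error.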


After a good deal of research and testing, I discovered that CAS Servers must be able to resolve the names of Mailbox Servers via NetBIOS name resolution; otherwise, mailbox moves fail. In my customer's case, the CAS Servers were not able to resolve the NetBIOS names of the Mailbox Servers. For example, pinging MB01 from a CAS Server failed, but pinging the server's FQDN worked successfully. To resolve this issue, I added the Mailbox Servers' DNS domain as a DNS Search Suffix on the CAS Servers. See the screenshot below:

Once the proper DNS Search Suffix was added to all CAS Servers, I was able to ping all Mailbox Servers by NetBIOS name. This simple fix resolved not only all mailbox move issues, but also the EWS Subscribe problem that Unity Connection was experiencing.
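On Windows Server 2008/2008 R2, the suffix search list can also be set from PowerShell by writing the SearchList registry value instead of clicking through the adapter GUI. A sketch, with the domain names below standing in for your own suffixes:

```powershell
# Append the Mailbox Servers' DNS domain to the suffix search list so that
# short (NetBIOS-style) names resolve via DNS on the CAS Servers.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
Set-ItemProperty -Path $key -Name SearchList -Value 'corp.contoso.com,contoso.com'

# Verify: a short server name should now resolve
ping MB01
```

The SearchList value is a comma-separated list and replaces (rather than appends to) any per-adapter suffix configuration, so include every suffix you need.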

Unable to Move Mailboxes from Exchange 2003 to Exchange 2010


I recently struggled with the inability to move mailboxes from Exchange 2003 to Exchange 2010 while helping one of my customers. The environment consisted of a single Exchange Organization residing in a single Forest with multiple Domains. The Exchange 2003 Servers and the Exchange 2010 Servers resided in the same Forest, but in different Domains.

The challenging part about this issue was that 80% of my mailbox moves completed successfully while 20% failed with the following error:

Log Name: Application

Source: MSExchange Mailbox Replication

Event ID: 1100


Request ‘’ (13db0622-cd2b-4d79-8a12-6170d494cc66) failed.

Error code: -2147221246

MapiExceptionNoSupport: IExchangeFastTransferEx.TransferBuffer failed (hr=0x80040102, ec=-2147221246)

See a screenshot of the error below:

I did manage to find some blog posts and a Microsoft forum thread containing suggestions from others who had received a similar error. One such blog can be found here, and a helpful Microsoft forum discussion can be found here. I include them as references because the forum contained the suggestion that actually resolved my problem, even though I tried it last because I never thought it would work. Since this was such a strange issue, I'll share the one solution that worked in the end. Before I do, here is everything I tried that did not resolve the problem:

  1. Moved problematic mailbox to another Exchange 2003 database and retried mailbox move command
  2. Moved problematic mailbox to another Exchange 2003 database on a different Exchange 2003 Server and retried mailbox move command
  3. Moved problematic mailbox to a newly created, empty Exchange 2003 database and retried mailbox move command
  4. Dismounted newly created Exchange 2003 database, ran ISINTEG with the ‘repair all errors’ switch, remounted the database, and retried the mailbox move command
  5. Used PFDavAdmin to check and fix DACLs on problematic mailbox and retried the mailbox move command
  6. Validated all Active Directory and Exchange permissions on problematic mailbox (ensured that all AD and Exchange permissions were identical on a mailbox that failed and a mailbox that moved successfully)
  7. Deleted all search folders in the problematic mailbox and retried the mailbox move command
  8. Deleted all calendar permissions on the problematic mailbox and retried the mailbox move command
  9. Detached the problematic mailbox from its primary Active Directory account, reattached the mailbox to a different account, and retried the mailbox move command
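For reference, each retry above re-issued the same cross-version move. A typical Exchange 2010 move command (the mailbox and database names are hypothetical) looks like this; moves from Exchange 2003 are offline moves handled by the Mailbox Replication Service on the Exchange 2010 side:

```powershell
# Move a legacy Exchange 2003 mailbox to an Exchange 2010 database,
# tolerating a few corrupt items along the way
New-MoveRequest -Identity "test1234" -TargetDatabase "EX2010-DB01" -BadItemLimit 10

# Check progress and surface any failure details
Get-MoveRequest -Identity "test1234" | Get-MoveRequestStatistics | Format-List
```

A generous -BadItemLimit is worth trying for moves that fail on corrupt items, but it did not help with the MapiExceptionNoSupport error described here.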


If you read through the Microsoft forum that I mentioned earlier, you will find that some of the remedies I tried actually worked for other people experiencing a similar issue. Using PFDavAdmin seemed to do the trick for quite a few people, but it did not work for me. In the end, after exhausting every other possibility, I finally did the following:

  1. Downloaded Exchange 2010 RTM (I found it here on Microsoft’s website)
  2. Installed Exchange 2010 RTM on a virtual server with the Mailbox and CAS Roles
  3. Created a database on the Exchange 2010 RTM Server
  4. Used the Exchange 2010 RTM Management Console to move the problematic mailbox to the database residing on the Exchange 2010 RTM Server
  5. Moved the problematic mailbox from the database on the Exchange 2010 RTM Server to a database on an Exchange 2010 SP1 Server

A few things to note about my solution:

  • I had to use the Exchange 2010 RTM Server to move the mailbox from Exchange 2003 to Exchange 2010
  • I was able to use any Exchange 2010 Server (RTM or SP1) to move the mailbox from Exchange 2010 RTM to Exchange 2010 SP1
  • Initially, I could not get the Exchange 2010 RTM Server to move any mailboxes. The issue stemmed from the fact that my Exchange 2003 and Exchange 2010 Servers resided in different Domains, and I needed to add the fully qualified DNS Search Suffix for each Domain to each server. For example:
    • Here are the FQDNs of my servers:
    • I had to add DNS Search Suffixes in the following manner:
      • On the Exchange 2003 Server, make sure to add the Exchange 2010 Domain as a DNS Search Suffix
      • On the Exchange 2010 RTM Server, make sure to add the Exchange 2003 Domain as a DNS Search Suffix
      • The rationale is that my testing indicates the Mailbox Replication Service (the service responsible for moving mailboxes) performs Exchange Server name lookups using NetBIOS name resolution rather than FQDN resolution. Without the proper DNS Search Suffixes defined on your Exchange 2010 RTM Server, mailbox moves fail.
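A quick way to confirm that short-name resolution is in place before retrying a move (the server name below is a placeholder):

```powershell
# GetHostEntry walks the DNS suffix search list, so a short name only
# resolves once the right suffix is configured on the local server
[System.Net.Dns]::GetHostEntry("MB01").HostName

# A "No such host is known" exception here means the suffix is still missing
```

Run this from each server involved in the move, resolving the short name of the server in the other Domain.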


I have performed over a dozen Exchange 2010 SP1 deployments for my various customers, ranging in size from 1,000 to 30,000 seats, and until now I had never run into this problem. I don't like this solution because it requires adding an additional server to a customer's production Exchange environment. Not only that, it just seems wrong to install Exchange 2010 RTM (released in November 2009) into a production Exchange 2010 SP1 environment to resolve a mailbox migration issue. Nonetheless, it is the only workable solution I have found for my customer.

My Exchange 2010 SP1 Servers are running Update Rollup 3 (the version 3 re-release). That rollup is the only difference between my past Exchange 2010 deployments and this one. Given the challenges Microsoft had with Rollup 3, I wouldn't be surprised if something within it is causing this issue. That's just a theory on my part, so if you have had similar challenges moving mailboxes to Exchange 2010 SP1, please add a comment and let me know. This migration is ongoing, so if I learn anything new I will post an update.