Automating mass Exchange 2010 database creation with PowerShell.

A few weeks ago, I showed you my solution for creating a large number of disks with PowerShell and diskpart, for future use as Exchange 2010 database and log drives. I finally had time to go back and create the databases for this environment. In this environment, I had 4 Exchange 2010 servers with the Mailbox (MB) role on them, all part of the same Database Availability Group (DAG). I needed to create a total of 94 databases and 376 database copies in this DAG. To do this, I wrote the following script, which I called "dbcreate.ps1":


# Exchange 2010 Database Creation Script
# By Josh M. Bryant
# Last updated 10/18/2011 11:00 AM
#
#
# Specify database root path.
$dbroot = "E:\EXCH10"
#
# Specify log file root path.
$logroot = "L:\EXCH10"
#
# Specify CSV file containing database/log paths and database names.
$data = Import-CSV C:\Scripts\dbcreate\exdbs.csv
#
# Get list of mailbox servers that are members of a DAG.
$Servers = Get-MailboxServer | Where {$_.DatabaseAvailabilityGroup -ne $null}
#
# Specify Lagged Copy Server identifier.
$lci = "MBL"
#
# Specify ReplayLagTime in format Days.Hours:Minutes:Seconds
$ReplayLagTime = "14.0:00:00"
#
# Specify TruncationLagTime in format Days.Hours:Minutes:Seconds
$TruncationLagTime = "0.1:00:00"
#
# Specify RpcClientAccessServer name.
$RPC = "mail.domain.com"
#
#
#
# Create databases.
ForEach ($line in $data)
{
$dbpath = $line.DBPath
$dbname = $line.DBName
$logpath = $line.LogPath
New-MailboxDatabase -Name $dbname -Server $line.Server -EdbFilePath "$dbroot\$dbpath\$dbname.edb" -LogFolderPath "$logroot\$logpath"
}
#
# Mount all databases.
Get-MailboxDatabase | Mount-Database
Start-Sleep -s 60
#
# Create Database Copies.
ForEach ($line in $data)
{
ForEach ($Server in $Servers)
    {
    If ($Server.Name -ne $line.Server)
        {
        Add-MailboxDatabaseCopy -Identity $line.DBName -MailboxServer $Server
        }
    }
}
#
# Setup lagged copies.
ForEach ($Server in $Servers)
{
If ($Server.Name -like "*$lci*")
    {
    Get-MailboxDatabaseCopyStatus -Server $Server | Set-MailboxDatabaseCopy -ReplayLagTime $ReplayLagTime -TruncationLagTime $TruncationLagTime
    }
}
#
# Set RpcClientAccess Server and enable Circular Logging on all databases.
Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer $RPC -CircularLoggingEnabled $true


The exdbs.csv file referenced in the script contains these 4 columns: "Server,DBPath,DBName,LogPath".
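For reference, here is what a few rows of that CSV might look like (the server, database, and path names below are made up for illustration; use your own naming convention):

```
Server,DBPath,DBName,LogPath
MB01,DB01,DB01,DB01
MB01,DB02,DB02,DB02
MB02,DB48,DB48,DB48
```

Each DBPath and LogPath value gets appended to the $dbroot and $logroot paths defined at the top of the script, and DBName becomes both the database name and the .edb file name.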

The script first creates the 94 databases, 47 of them on one server and 47 on another. It then creates copies of all these databases across all servers in the DAG. 2 of the servers are for lagged copies, so the script configures those based on the server naming convention. The final step is to set the RpcClientAccessServer to the FQDN of the CAS array on all databases. UPDATE: The script now also enables circular logging on all databases at the very end.

This is still a work in progress, so use at your own risk.  As always please leave author information at the top of the script intact if you use it, and don't forget to link back to this site if you share it anywhere else.

The script worked great, despite a few "RPC endpoint mapper" errors. It got the databases about 95% set up. One server had a service that wasn't running, so the database copies on it were in the "Suspended" state. A quick run of the "Resume-MailboxDatabaseCopy" command corrected this issue. I also had to go back and specify the Activation Preference, which was easy to do based on the naming convention: I ran Get-MailboxDatabaseCopyStatus against each server and piped the results into Set-MailboxDatabaseCopy -ActivationPreference.
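The cleanup steps above can be sketched roughly like this (the server name and preference value here are hypothetical; run it once per server, adjusting both to match your own environment):

```powershell
# Resume any database copies left in the Suspended state on a server.
Get-MailboxDatabaseCopyStatus -Server MB01 | Where {$_.Status -eq "Suspended"} | Resume-MailboxDatabaseCopy
#
# Set the activation preference for all copies on a given server.
Get-MailboxDatabaseCopyStatus -Server MB01 | Set-MailboxDatabaseCopy -ActivationPreference 1
```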

UPDATE: I made some changes to the script, and no longer saw any errors during database creation.  I also fixed the syntax for the database and log paths so they get created in the correct location.  

The end result was 376 fully functional, ready-to-use database copies. Even with the minor cleanup after running the script, it made everything a lot easier. Creating this many database copies manually would have taken quite some time.