Lifting and shifting a database

If you’re migrating an application to the cloud and your application stack includes a SQL Server database, Azure offers the widest variety of migration options of any public cloud provider for that database.

If you have a fairly simple schema, you might be able to migrate straight to the PaaS (platform-as-a-service) option, Azure SQL Database, to offload some of the maintenance burden onto Microsoft and lower your total cost of ownership. This is something you should definitely look to do if you can. However, larger applications with a more complex data schema may have difficulty migrating straight into Azure SQL Database. The service is multi-tenant, so the T-SQL it allows is not the same as that of the on-premises SQL Server you might be used to.

If you cannot migrate straight to the PaaS option, fear not: you have not been abandoned. Your route to the cloud in this case is to lift and shift the database into an Azure SQL Server VM – the IaaS (infrastructure-as-a-service) option. Lifting and shifting means selecting your cloud hosting options with the aim of keeping additional development costs to a minimum. Because a SQL Server VM can run exactly the same version of SQL Server you run on premises, no part of the application needs to be redeveloped to get everything running in the cloud, at least in a data sense. You can always do the refactoring required to migrate to the PaaS option further down the line.

Configuring SQL Server VMs in the Azure Portal

The recommended approach to provisioning infrastructure in Azure is to explore the options available in the portal, experiment, and then automate the provisioning of that infrastructure with PowerShell, the Azure CLI or ARM templates. The portal looks nice and is a useful place to gain familiarity with the options available, but it is not as flexible as the more programmatic options and offers you little replay value in the event of a disaster.

In the portal, SQL Server VMs get an extra blade where a variety of SQL Server settings can be configured.

[Screenshot: the SQL Server settings blade for an Azure SQL Server VM in the portal]

Nobody would argue this isn’t useful. However, when you get to the stage where you want to automate your infrastructure provisioning, none of the above options are available to you. The standard VM provisioning options are generic and not specific to SQL Server – this is one of those cases where the portal makes things appear simpler than they actually are.

Configuring SQL Server with a custom script VM extension

One approach to performing non-standard VM configuration is to use an Azure VM extension. These extensions are small applications that perform post-deployment configuration and automation tasks on VMs. Some use cases for these include:

  • Setting up monitoring with a central service.
  • Performing configuration management with PowerShell DSC, Chef, etc.
  • Executing an arbitrary automation script.

There are a variety of extensions to choose from; perhaps the simplest is the one that executes a script of your choosing – the CustomScriptExtension. We can use it to execute a PowerShell script that performs some of the configuration the portal provides that special blade for.
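If you want to see what is published in your region before committing to an extension, the AzureRM module can query the publisher directly. This is a minimal sketch, assuming an authenticated session (Login-AzureRmAccount); 'UK South' is the region used in the provisioning script later in this post:

# List the published versions of the Windows custom script extension in a region.
Import-Module -Name 'AzureRM'
Get-AzureRmVMExtensionImage -Location 'UK South' -PublisherName 'Microsoft.Compute' -Type 'CustomScriptExtension' | `
    Select-Object -Property 'Version', 'Type', 'PublisherName'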

Here is an example PowerShell script:

Param (
    [Parameter(Mandatory=$true)] [string] $VmAdminUsername,
    [Parameter(Mandatory=$true)] [string] $VmAdminPassword,
    [Parameter(Mandatory=$true)] [string] $SqlLoginUsername,
    [Parameter(Mandatory=$true)] [string] $SqlLoginPassword,
    [Parameter(Mandatory=$true)] [string] $SqlClientIpAddress
)

Install-PackageProvider -Name 'NuGet' -RequiredVersion '2.8.5.201' -Force
Set-PSRepository -Name 'PSGallery' -InstallationPolicy 'Trusted'
Install-Module -Name 'SqlServer' -AllowClobber -Force

# This script will be executed by the custom script extension in the SYSTEM user context. We need to talk to
# SQL Server with the VM administrator account, which is the default SQL administrator in the marketplace image.
# This PSCredential will allow us to act as the VM administrator.
$secureVmAdminPassword = ConvertTo-SecureString -String $VmAdminPassword -AsPlainText -Force
$vmAdminCredential = New-Object -TypeName 'PSCredential' -ArgumentList "$env:ComputerName\$VmAdminUsername", $secureVmAdminPassword

# Enable mixed mode authentication (service restart required). By default the marketplace image is Windows
# authentication only. We need to do this in the execution context of the VM administrator.
Invoke-Command -ComputerName 'localhost' -Credential $vmAdminCredential -ArgumentList @($SqlLoginUsername, $SqlLoginPassword) -ScriptBlock {
    Param (
        [Parameter(Mandatory=$true)] [string] $SqlLoginUsername,
        [Parameter(Mandatory=$true)] [string] $SqlLoginPassword
    )
    Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'master' `
        -Query "EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'LoginMode', REG_DWORD, 2"
    Restart-Service -Name 'MSSQLServer' -Force  # -Force also restarts dependent services such as SQL Server Agent

    # Add SQL login
    $secureSqlLoginPassword = ConvertTo-SecureString -String $SqlLoginPassword -AsPlainText -Force
    $sqlLoginCredential = New-Object -TypeName 'PSCredential' -ArgumentList $SqlLoginUsername, $secureSqlLoginPassword
    Add-SqlLogin -ServerInstance 'localhost' -LoginName $SqlLoginUsername -LoginType 'SqlLogin' -Enable -GrantConnectSql `
        -LoginPSCredential $sqlLoginCredential
}

# Configure Windows firewall rules. These only allow traffic from the specified client IP address.
New-NetFirewallRule -DisplayName 'SQL Server' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 1433 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Admin Connection' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 1434 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Database Management' -Direction 'Inbound' -Protocol 'UDP' -LocalPort 1434 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Service Broker' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 4022 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Debugger/RPC' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 135 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Analysis Services' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 2383 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'
New-NetFirewallRule -DisplayName 'SQL Browser' -Direction 'Inbound' -Protocol 'TCP' -LocalPort 2382 `
    -RemoteAddress @($SqlClientIpAddress) -Action 'Allow'

NOTE: The complete code listing for this post is available on GitHub.

This script does the following:

  • Enables mixed-mode authentication so that it is possible to authenticate with SQL logins (a quick sanity check for this is sketched just after this list). By default, a SQL Server VM is configured for Windows authentication only.
  • Creates a SQL login for the username and password provided by the script’s caller. This allows the automation job that creates the VM to use those credentials to apply a database schema afterwards, for example.
  • Configures the Windows Firewall to allow SQL traffic only from a specific IP address. When I’m setting up a database with automation, I only allow traffic from the IP address that creates the SQL Server VM in the first place. This keeps everything as secure as possible.
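A quick way to sanity-check the authentication mode change is to query the relevant server property once the script has run. This is a minimal sketch, assumed to run in the same VM administrator context as the Invoke-Command block above:

# Returns 0 when mixed mode is enabled, 1 when the instance is Windows authentication only.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'master' `
    -Query "SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly"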

One thing to be aware of when writing a script to be executed by a VM extension is that these jobs run as the SYSTEM account on the VM, not as the administrator account you created the VM with. The above script requires the VM administrator credentials to be provided so that we can change the authentication mode and create the new SQL login inside the execution context of the VM administrator (which we do with Invoke-Command). The SYSTEM account cannot do anything with SQL Server out of the box.
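If you want to confirm that execution context for yourself, a simple logging line at the top of the configuration script will show it. The log path here is just a hypothetical example:

# Hypothetical debugging aid: record which account the configuration script runs as.
# When executed by the custom script extension this should be NT AUTHORITY\SYSTEM.
whoami | Out-File -FilePath "$env:windir\Temp\custom-script-context.log"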

Attaching the custom script extension to a VM

To set up the custom script extension, you upload the script to blob storage and attach the extension to the VM at creation time. Here is a script that creates a new SQL Server VM and attaches the above configuration script to be executed post-deployment.

Param (
    [Parameter(Mandatory=$false)] [string] $VmAdminUsername = 'SqlServerAdmin',
    [Parameter(Mandatory=$true)] [string] $VmAdminPassword,
    [Parameter(Mandatory=$true)] [string] $SqlLoginUsername,
    [Parameter(Mandatory=$true)] [string] $SqlLoginPassword,
    [Parameter(Mandatory=$false)] [string] $SqlClientIpAddress = (Invoke-RestMethod -Uri 'http://ipinfo.io/json' | Select-Object -ExpandProperty 'ip')
)

Import-Module -Name 'AzureRM'

# Create resource group
$resourceGroup = New-AzureRmResourceGroup -Name 'sqlserver' -Location 'UK South'

# Create network, subnet and public IP address
$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig -Name 'sqlserver-subnet' -AddressPrefix '192.168.1.0/24'
$vnet = $resourceGroup | New-AzureRmVirtualNetwork -Name 'sqlserver-vnet' -AddressPrefix '192.168.0.0/16' -Subnet $subnetConfig
$pip = $resourceGroup | New-AzureRmPublicIpAddress -AllocationMethod Static -IdleTimeoutInMinutes 4 -Name 'sqlserver-pip' `
    -DomainNameLabel 'anchorloopsqlvm'

# Create network security policy
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name 'AllowRdp' -Protocol 'Tcp' -Direction Inbound -Priority 1000 `
    -SourceAddressPrefix $SqlClientIpAddress -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 -Access 'Allow'
$sqlRule = New-AzureRmNetworkSecurityRuleConfig -Name 'AllowSql' -Protocol 'Tcp' -Direction Inbound -Priority 1001 `
    -SourceAddressPrefix $SqlClientIpAddress -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 1433 -Access 'Allow'
$nsg = $resourceGroup | New-AzureRmNetworkSecurityGroup -Name 'sqlserver-nsg' -SecurityRules @($rdpRule, $sqlRule)

# Create network interface
$nic = $resourceGroup | New-AzureRmNetworkInterface -Name 'sqlserver-nic' -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id `
   -NetworkSecurityGroupId $nsg.Id

# Create storage account and upload SQL Server configuration script to blob storage
$storageAccount = $resourceGroup | New-AzureRmStorageAccount -StorageAccountName 'anchorloopsqlvm' -SkuName 'Standard_LRS' -Kind 'Storage'
$storageAccountKey = ($storageAccount | Get-AzureRmStorageAccountKey).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName $storageAccount.StorageAccountName -StorageAccountKey $storageAccountKey
$storageContainer = $storageContext | New-AzureStorageContainer -Name 'sqlserver' -Permission 'Blob'
$customScript = 'Set-SqlServerConfig.ps1'
Set-AzureStorageBlobContent -File ".\$customScript" -Container $storageContainer.Name -Blob $customScript -Context $storageContext

# Create admin credential
$securePassword = ConvertTo-SecureString -String $VmAdminPassword -AsPlainText -Force
$credential = New-Object -TypeName 'System.Management.Automation.PSCredential' -ArgumentList $VmAdminUsername, $securePassword

# Create virtual machine configuration
$vmConfig = New-AzureRmVMConfig -VMName 'sqlserver' -VMSize 'Standard_D2_v2' | `
   Set-AzureRmVMOperatingSystem -Windows -ComputerName 'sqlserver' -Credential $credential -ProvisionVMAgent -EnableAutoUpdate | `
   Set-AzureRmVMSourceImage -PublisherName "MicrosoftSQLServer" -Offer "SQL2017-WS2016" -Skus "SQLDEV" -Version "latest" | `
   Add-AzureRmVMNetworkInterface -Id $nic.Id

# Create the virtual machine
$vm = $resourceGroup | New-AzureRmVM -VM $vmConfig

# Set VM custom script extension to configure SQL Server
$resourceGroup | `
    Set-AzureRmVMCustomScriptExtension -Name 'SetSqlServerConfig' -VMName 'sqlserver' `
        -StorageAccountName $storageAccount.StorageAccountName -ContainerName $storageContainer.Name -FileName $customScript `
        -Argument "-VmAdminUsername $VmAdminUsername -VmAdminPassword $VmAdminPassword -SqlLoginUsername $SqlLoginUsername -SqlLoginPassword $SqlLoginPassword -SqlClientIpAddress $SqlClientIpAddress"

NOTE: The complete code listing for this post is available on GitHub.

This code:

  • Creates a subnet, network, network interface and network security group that accepts SQL and RDP traffic only from the IP address of the machine executing the script – this is the default value of $SqlClientIpAddress.
  • Creates a new storage account for the VM and uploads the configuration script to blob storage for the VM to download and execute post-deployment.
  • Creates the VM using the SQL2017-WS2016 (SQL Server 2017 on Windows Server 2016) marketplace image, attaching it to the network and storage defined previously.
  • Sets a VM custom script extension, referencing the script uploaded to blob storage. Parameters are passed to the script using the Argument parameter on the Set-AzureRmVMCustomScriptExtension cmdlet.

If you were to execute this script with valid arguments, a new SQL Server VM would be created and the configuration performed in Set-SqlServerConfig.ps1 would be applied. From there you could apply a database schema, perform some sanity testing, etc. You could also extend the script to perform further configuration.

.\New-SqlServerVM.ps1 -VmAdminUsername SqlVmAdmin -VmAdminPassword SqlVmAdmin123! -SqlLoginUsername SqlLoginUser -SqlLoginPassword SqlLogin123!
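From there, applying a schema over the new SQL login is a one-liner with the SqlServer module. A minimal sketch, assuming the module is installed on the client machine and a hypothetical schema.sql file; the server name is the FQDN produced by the -DomainNameLabel in the provisioning script above:

# Apply a hypothetical schema file using the SQL login created by the custom script extension.
$serverFqdn = 'anchorloopsqlvm.uksouth.cloudapp.azure.com'
Invoke-Sqlcmd -ServerInstance $serverFqdn -Username 'SqlLoginUser' -Password 'SqlLogin123!' `
    -Database 'master' -InputFile '.\schema.sql'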

Tips for debugging custom script extensions

So you’ve deployed your new SQL Server and configured what you need to in a custom script extension. You fire up SSMS only to find that the new SQL login you created doesn’t work – there must have been a bug in the configuration script. This is confusing, because nothing in the deployment script suggested that the custom script had failed. Unfortunately, VM extensions are not forthcoming when things go wrong, so you need to learn where to look when things do not happen as you intended.

The first place I go, after RDPing into the new VM, is the location the custom script extension downloads the script to: C:\Packages\Plugins. There will be a subfolder for the custom script extension and the specific version of the plugin used. Underneath that folder you will find a download folder containing the script you uploaded to blob storage. From here I usually load the script into PowerShell ISE (whilst hoping for the day VS Code comes bundled with Windows), set a breakpoint and start debugging the script.

If you make changes, don’t just change the script on the machine. Fix the script at source and make sure the changes are included in the version uploaded to the storage account.

If you don’t see the downloaded script at all, that implies a problem with the plugin itself rather than with your script. A useful place to look is the VM extension log files, located under C:\WindowsAzure\Logs\Plugins. Find the log file for the custom script extension and it might give you a clue as to why the script could not be sourced.
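You can also check the extension’s reported status without RDPing in at all, straight from the Azure API. A sketch using the AzureRM module and the resource names from the provisioning script above; the exact shape of the status output may vary between plugin versions:

# Query the extension's provisioning state and status messages from outside the VM.
Get-AzureRmVMExtension -ResourceGroupName 'sqlserver' -VMName 'sqlserver' -Name 'SetSqlServerConfig' -Status | `
    Select-Object -Property 'ProvisioningState', 'Statuses'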

 

