Configure SQL Server with Azure VM Extensions

Lifting and shifting a database

If you’re migrating an application to the cloud and your application stack includes a SQL Server database, Azure provides the widest variety of options of any public cloud provider for migrating that database.

If you have a fairly simple schema you might be able to migrate straight to the PaaS (platform-as-a-service) option, Azure SQL Database, to offload some of the maintenance burden onto Microsoft and lower your total cost of ownership. This is something you should definitely look to do if you can. However, larger applications with a more complex data schema may have difficulty migrating straight into Azure SQL Database: the service is multi-tenant, so the T-SQL surface it allows is not the same as that of the on-premises SQL Server you might be used to.

Using VSTS Package Management as your PowerShell Module Repository

Many of us that write and use PowerShell regularly are familiar with the PowerShell Gallery (PSGallery) package feed and the PackageManagement/OneGet modules that provide access to it from the shell. But did you know you can use the Package Management extension of VSTS as a private PowerShell module feed?

You may want to do this if:

  • Your organization already uses VSTS and you want a convenient, OneGet-compatible way of distributing your PowerShell modules to users within the organization.
  • You want an automated release pipeline for PowerShell code, but the PowerShell Gallery is not an appropriate place to host the modules; it might be too public, for example.

In this blog post I’m going to show you how to use VSTS to host your PowerShell source code and define a build process that will publish that code to a private, OneGet-compatible package feed.

Requirements

  • VSTS with the Package Management extension
  • Git (version 2.15.1 used in this post)
  • A PowerShell module (including a well-formed module manifest)

Step 1: Use source control

Advanced PowerShell users understand that scripting and automation are subject to the same software development best practices that traditional developers have been following for many years. Chief amongst these best practices is the use of source control. The benefits of using proper source control for PowerShell are vast and beyond the scope of this post, but our first step is to commit a PowerShell module to a Git repository. We will then define a build pipeline that triggers on any future commit to this repository and publishes our module to the VSTS package feed for consumption elsewhere.

Since we’re using VSTS as the package feed I’m also going to use it to host the Git repository. But if you prefer GitHub, or any other repository that a VSTS build can trigger off, feel free to use that instead. If you’re already using source control, skip to Step 2 below.

I’m going to demonstrate everything with my PSPerceptron module from: Developing a neural network in PowerShell. For the purposes of this blog post, it doesn’t matter at all what the PowerShell module being published actually does, as long as it has a valid module manifest (.psd1 file). Just follow along with your own module, changing any module descriptions, etc., as needed.

From the VSTS homepage, click the New Project button and complete the name and description fields. We want to use Git for version control. Click Create when done.

vsts_create_project

On the next page we want to copy the repository’s URI to the clipboard so that we can clone the repository using our local Git client. Click the copy icon as shown below.

vsts_clone_repo

On your development machine, open a shell/terminal and clone the Git repository with the following command, replacing the working directory for your usual Git repo directory and the URI for your own VSTS repository:

cd ~/source/repos/
git clone https://anchorloop.visualstudio.com/_git/PSPerceptron

You will probably be asked for your credentials as part of the clone operation. If you’re new to Git, note that this creates a new directory with the same name as your VSTS repository; that directory is your local copy of the source code repository.

Our first job is just to get our module into source control, so copy your module directory into the cloned repository. The directory structure you’re aiming for is as follows:

~/source/repos/PSPerceptron
|_ PSPerceptron
  |_ PSPerceptron.psd1
  |_ PSPerceptron.psm1

This is the typical directory structure that the Publish-Module cmdlet expects when we come to publish the module to a package feed.
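If your module doesn’t have a manifest yet, the New-ModuleManifest cmdlet will generate a well-formed one for you. A minimal sketch, using the PSPerceptron names from above (the version, author and description values are illustrative placeholders; swap in your own):

# Generate a manifest inside the module directory. Publish-Module
# expects the manifest to carry at least an author and a description.
New-ModuleManifest -Path ~/source/repos/PSPerceptron/PSPerceptron/PSPerceptron.psd1 `
    -RootModule 'PSPerceptron.psm1' `
    -ModuleVersion '1.0.0' `
    -Author 'Your Name' `
    -Description 'A single-layer perceptron implemented in PowerShell.'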

Finally, commit your module and push it back to VSTS:

git add *
git commit -m "Initial commit"
git push

You should now be able to browse the module code in VSTS.

vsts_git_pushed

Step 2: Define a continuous publishing process

Now that we have our module in source control, we need to define a build process which will automatically publish the module to the VSTS package feed whenever we push code to our source control repository.

The best way to define a CI pipeline for PowerShell modules in VSTS is with YAML configuration files. The reason I prefer these to conventional UI-driven build definitions in VSTS is that the YAML file is easy to branch to many PowerShell modules that you might want to build and publish this way. If you had a hundred PowerShell modules that you wanted to host in a package feed, would you want to define a build pipeline for each individually via the UI? That’s a lot of work. I would rather VSTS derive the build process dynamically from a YAML file I can branch to each module as needed.

First, enable YAML build configurations in your VSTS instance. Click your user icon, select Preview features, choose ‘for this account’ and toggle Build YAML definitions on.

vsts_build_yaml_definitions

Now, create a new file in the local Git repository for your module called .vsts-ci.yml (note that the filename begins with a dot). This is where we’re going to define the publishing process. Once created, open it in an editor of your choice and paste in the following:

queue: Hosted VS2017

trigger:
- master

steps:
  - task: NuGetToolInstaller@0
    displayName: "Install nuget.exe and add to PATH"
    inputs:
        versionSpec: 4.6.*

  - powershell: |
        Write-Host "PowerShell Module publishing code goes here."
    displayName: "Publish PowerShell module"
    failOnStderr: true

Let’s go through this element-by-element so that we understand what everything means.

  • queue: Describes which build queue will handle the process. Here we’re using the Visual Studio 2017 build machines hosted by Microsoft. As far as hosted choices go, it’s usually a choice between Visual Studio on Windows, Linux or Mac. We want a Windows machine for access to PowerShell 5+ (with the PackageManagement modules included), rather than for Visual Studio in particular, so this queue is the best choice.
  • trigger: Defines a list of branches that should trigger the build process when a change is pushed to them. We only want to republish the module for every checkin to the master branch, so ours is a list with one entry.
  • steps: Defines a list of sequential build steps that make up the build process.
  • task: Describes a VSTS build task in the format Name@MajorVersionNumber. It’s important to include the major version number because different major versions of the same task are not usually backwards compatible. The format is mandatory even if only one major version of a task exists (as with NuGetToolInstaller, which is still at v0). For further details see: https://github.com/Microsoft/vsts-agent/blob/master/docs/preview/yamlgettingstarted-tasks.md
  • displayName: A string to display in the UI for each build step.
  • inputs: A list of parameters for each VSTS build task. Each task expects different parameters; you can check what the valid inputs for a task are by viewing its task.json definition on GitHub, e.g. https://github.com/Microsoft/vsts-tasks/blob/master/Tasks/NuGetToolInstallerV0/task.json
  • powershell: Defines a PowerShell script that should run as part of the build.
  • failOnStderr: A boolean that determines whether the powershell step should fail if anything is emitted on stderr during execution. Otherwise the pass/fail state will be based on $LASTEXITCODE.

In a nutshell, the steps install a specific version of nuget.exe onto %PATH%, then execute some PowerShell, which at the moment is just a Write-Host. The NuGetToolInstaller task is mainly there to ensure nuget.exe is available without us having to hunt around for it on a hosted build machine, whose layout could change periodically.

Why is easy access to nuget.exe even needed? The Publish-Module cmdlet that we are going to use to publish our PowerShell module to the VSTS package feed depends on nuget.exe being available and configured correctly for publishing to VSTS. OneGet doesn’t do a particularly good job of wrapping nuget.exe: it needs to be available and pre-configured to work with VSTS or Publish-Module will just barf.

Save the YAML file but don’t push it to master yet. Let’s flesh out the PowerShell step to do the publishing first.

Step 3: Publishing to the VSTS package feed

Here’s the PowerShell that we’re going to integrate into our build pipeline:

# Build a PSCredential from a VSTS personal access token (PAT).
$patUser = 'kirk@anchorloop.com'
$patToken = 'INSERT PAT TOKEN HERE'
$securePat = ConvertTo-SecureString -String $patToken -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($patUser, $securePat)

# Register the VSTS package feed as a trusted OneGet/PowerShell repository.
Register-PSRepository -Name 'VSTS' -SourceLocation 'https://anchorloop.pkgs.visualstudio.com/_packaging/Packages/nuget/v2/' -PublishLocation 'https://anchorloop.pkgs.visualstudio.com/_packaging/Packages/nuget/v2/' -InstallationPolicy Trusted -Credential $credential -Verbose

# Register the same feed as a NuGet source; Publish-Module depends on this.
nuget.exe sources add -name 'VSTS' -Source 'https://anchorloop.pkgs.visualstudio.com/_packaging/Packages/nuget/v2/' -UserName $patUser -Password $patToken

# Publish the module directory (named after the Git repo) to the feed.
Publish-Module -Path ".\$env:BUILD_REPOSITORY_NAME" -Repository 'VSTS' -Credential $credential -NuGetApiKey 'VSTS' -Verbose

This code does the following:

  • Constructs a PSCredential from a VSTS personal access token (PAT), which will be used to register VSTS as a repository and publish the module. Think of a PAT as a scoped, revocable password: it’s safer than embedding your user credentials in scripts because you can lock each PAT down such that it only has access to the specific thing you want to use it for. Another user would not be able to use this PAT to log in as you. We’ll create a PAT for this script in the next section.
  • We register VSTS as both a PowerShell repository with OneGet and a NuGet source separately. As I mentioned earlier, OneGet doesn’t wrap NuGet very well (even though it depends on it) and it won’t register the repository as a NuGet source for us, unfortunately. It would be nice if it did in future. Note that OneGet uses the v2 endpoints of the package feed. If you’re hunting around NuGet.config files for your endpoint URI, you’re likely to find the v3 endpoint. Just replace ‘v3/…’ and everything after it in the URI with ‘v2/’.
  • Finally, we publish the module to VSTS with the Publish-Module cmdlet. Note that we need to supply the PSCredential and a NuGetApiKey, which for VSTS can be any string (it’s pretty much ignored, but is a mandatory parameter of the cmdlet). The Path parameter is the path to the module directory in the Git repo. For simplicity I tend to always make the module name the same as the Git repo name, so I can just refer to this build variable in the script and it works when I branch the YAML file into other module repos. But you could do this any way you like.

Once you’ve modified the above for your VSTS user account, package feed URIs, etc., paste it into your YAML file in place of the existing Write-Host placeholder. For reference, the finished step should look roughly like the sketch below.
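Abridged, with the middle of the script elided:

  - powershell: |
        $patUser = 'kirk@anchorloop.com'
        $patToken = 'INSERT PAT TOKEN HERE'
        # ...the rest of the publishing script from above...
        Publish-Module -Path ".\$env:BUILD_REPOSITORY_NAME" -Repository 'VSTS' -Credential $credential -NuGetApiKey 'VSTS' -Verbose
    displayName: "Publish PowerShell module"
    failOnStderr: true

With that in place, all that’s left is to generate a PAT for the script.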

Step 4: Generating a PAT for the publishing script

In VSTS, click your user icon along the top bar then Security. You’ll be taken to your Personal access tokens page. Click Add.

Name it sensibly, give it an expiry date and select the Packaging (read and write) scope only. This locks the PAT down such that it can only be used to publish packages and nothing else.

vsts_create_packaging_pat

Click Create Token. VSTS will display the PAT to you this one time only, so make sure you copy it and paste it into the PowerShell script as the value of the $patToken variable.

Now we’re ready to go for a test drive.

Step 5: Pushing the CI definition

Push the YAML build definition with the following commands:

git add .vsts-ci.yml
git commit -m "Add .vsts-ci.yml"
git push

Now in VSTS, if you browse to Build and Release for your project you will see that a new build definition has been created and the first build should be in progress. All being well, your build job should pass.

vsts_build_succeeded

vsts_package_feed

Any future extensions you might want to make to the build process, for example running some Pester tests before publishing, can be done by extending the YAML build definition that is now alongside your module in source control.

Advice for consuming packages from the VSTS feed

I thought I would close with some advice for consuming PowerShell modules from your private VSTS package feed. First, you need to register VSTS as a trusted PSRepository on any machine you wish to consume packages on. This is the same line of code as in the publishing script:

Register-PSRepository -Name 'VSTS' -SourceLocation 'https://anchorloop.pkgs.visualstudio.com/_packaging/Packages/nuget/v2/' -PublishLocation 'https://anchorloop.pkgs.visualstudio.com/_packaging/Packages/nuget/v2/' -InstallationPolicy Trusted -Credential $credential -Verbose

You’ll need to assemble a PSCredential again to do this; every interaction with VSTS needs to be authenticated. I suggest that rather than reusing the PAT we generated for publishing packages, you generate a separate one with only the Packaging (read) scope to lock consumers down even further. You don’t need the write permission to use packages from the feed.
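As a sketch, the consumer-side credential is assembled exactly like the publishing one, just with the read-scoped PAT (the values below are placeholders):

# Use a PAT created with only the Packaging (read) scope.
$patUser = 'kirk@anchorloop.com'
$patToken = 'INSERT READ-ONLY PAT HERE'
$securePat = ConvertTo-SecureString -String $patToken -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($patUser, $securePat)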

To consume packages from the feed, just use Install-Module:

Install-Module -Name PSPerceptron -Repository VSTS -Credential $credential

Note that if you have multiple repositories registered (and you likely will, since you’ll still have access to the PSGallery), Install-Module will expect you to clarify which repository you intend to use with the Repository parameter; it becomes mandatory once multiple repositories are registered.
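You can also check what’s available in the feed before installing anything; for example:

# List the modules currently published to the private VSTS feed.
Find-Module -Repository VSTS -Credential $credential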

Developing a neural network in PowerShell

Motivations

For about a year or so I’ve had a desire to add some data science and machine learning skills to my software development skillset. Demand for these skills is at an all-time high, and machine learning algorithms are finding their way into system architectures everywhere. My interest is partly motivated by fear of being left behind on these technologies, but also because I’ve seen some really cool applications in the DevOps space, e.g. automated triage of failing tests/logging in a continuous delivery setup.

Network as code: Building a site-to-site VPN to the cloud with Azure Building Blocks and PowerShell DSC

Configuring infrastructure in the cloud era

How many of us that regularly manage IT infrastructure know, in the most minute detail, exactly how everything hangs together in our datacenters? Every configuration setting, every dependency, every protocol in play. Anyone? Or put another way, if a bomb went off in your datacenter tomorrow, would you know the exact steps to configure your new kit, such that it was identical to what you had pre-bombing?

I can tell you that the number of operators who could claim the above is small, because the majority of infrastructure in the world is managed by multiple people applying untraceable tweak on top of untraceable tweak. The truth is that you cannot easily reproduce or scale on-demand the output of a human directly interacting with infrastructure, which is why this modus operandi has no place in the cloud era. At least, not if you want the gains offered by the cloud.

Okay, so direct interaction with infrastructure is bad; does that mean scripting is the answer? It’s a step in the right direction, in that scripts can at least be stored securely in source control and replayed if necessary. The problem with imperative scripts, though, is that they tend to assume a starting state and define strict steps to get you to a specific end state. What does a script do if a machine is already in a partially configured state? Fail with a message saying the first dependency is already installed, most likely. It is possible to write idempotent scripts, but it’s not easy. This is the reason for the rise of desired-state configuration management systems like Ansible, Puppet and Chef. They allow you to define a declarative (not imperative) configuration that will be applied consistently no matter what state a particular endpoint is in, as the sketch below illustrates.
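To make the contrast concrete, here is a minimal PowerShell sketch, using the Routing feature (which we’ll meet again later) purely as an example; the DSC resource is shown as it would appear inside a configuration block:

# Imperative: the script must check the current state before acting.
if (-not (Get-WindowsFeature -Name 'Routing').Installed) {
    Install-WindowsFeature -Name 'Routing'
}

# Declarative (DSC): state what should be true and let the engine
# work out what, if anything, needs to change.
WindowsFeature Routing
{
    Name   = 'Routing'
    Ensure = 'Present'
}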

Azure VPN as code

Infrastructure-as-code is not just for installing and configuring software packages on VMs; it can also drive something as fundamental as your network connectivity to the cloud. In this post, I’m going to demonstrate the deployment and configuration of a VPN between an Azure Virtual Network and my on-premises lab using some of my favorite configuration management tools: Azure Building Blocks for the cloud-side configuration, and PowerShell DSC on a local Windows Server 2012 R2 machine for the lab side.

Requirements

  • Azure CLI 2.0.21 (Microsoft): a dependency of Azure Building Blocks.
  • Azure Building Blocks 2.0.4 (GitHub).
  • PowerShell 4+: for the local Windows Server 2012 R2 machine. PowerShell DSC requires at least PowerShell 4.
  • xRemoteAccess (GitHub): the DSC resource for configuring the RemoteAccess service.

Stage 1: Deploying the Virtual Network

Fire up your shell of choice. Because Azure CLI and Azure Building Blocks are cross-platform, I’m using Bash on Ubuntu on Windows (with the Windows Subsystem for Linux) just to take the cross-platformness to the extreme. Anything that runs az and azbb on your system will do though.

Virtual Network JSON

Azure Building Blocks takes a JSON parameters file containing the desired state of one or more Azure resources, then using a combination of compiled AzureRM JSON templates and the Azure CLI, deploys the required infrastructure to Azure.

Why not just go straight to the AzureRM templates? The Azure Building Blocks parameters files are much less verbose and easier to hand-edit, which is not a pleasant task with raw ARM JSON. The compiler also has a lot of Azure infrastructure best practices built in: if you don’t specify an optional property, it will make a sensible choice for you, which also keeps the verbosity of the JSON down. If you’ve never seen Building Blocks JSON before, you’ll be surprised how much easier it is to work with than normal ARM templates.

To get a feel for Azure Building Blocks, let’s first create a JSON file that defines a simple Virtual Network with no VPN. Create a new file and enter the following:

{
  "$schema": "https://raw.githubusercontent.com/mspnp/template-building-blocks/master/schemas/buildingBlocks.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "buildingBlocks": {
      "value": [
        {
          "type": "VirtualNetwork",
          "settings": [
            {
              "name": "Azure-VNet",
              "addressPrefixes": [
                "172.16.0.0/16"
              ],
              "subnets": [
                {
                  "name": "GatewaySubnet",
                  "addressPrefix": "172.16.0.0/24"
                },
                {
                  "name": "Subnet-001",
                  "addressPrefix": "172.16.1.0/24"
                }
              ]
            }
          ]
        }
      ]
    }
  }
}

This JSON defines a single Virtual Network, named Azure-VNet, with an address range and two subnets, named GatewaySubnet and Subnet-001, each with their own address ranges also. Pretty simple, right? GatewaySubnet is going to be used exclusively by our VPN infrastructure later on and Subnet-001 is a normal subnet for any VMs we want to connect to over the VPN.

Interesting parts of this JSON definition are:

  • “$schema”: a schema definition for a Building Blocks parameters file.
  • “value”: an array containing a series of object definitions representing Azure resources.
  • “type”: the type of Azure resource required.
  • “settings”: type-specific settings for the resource; these can be required or optional.

Save the above as vnet.json. Let’s try running this through the Azure Building Blocks CLI tool, azbb:

azbb --deploy --parameters-file vnet.json --resource-group Azure-VNet --subscription-id e8ef4b09-2eaf-40be-8639-752e7d4af30d --location uksouth

Note: The above subscription ID is just a random GUID. To find your own, run az account list and copy the id property of the subscription you want to use.
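If you’re signed in to several subscriptions, a JMESPath query keeps the output down to just the fields you need; a convenience sketch:

az account list --query '[].{name:name, id:id}' --output table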

We’ve told Azure Building Blocks to deploy the virtual network into a resource group named Azure-VNet (the same name as the network object itself) in a specific subscription in the uksouth region (feel free to substitute a region closer to your location). You should see output similar to the below:

  parameters written to /home/Kirk/azure-building-blocks/vnet-output.json

{
  "id": "/subscriptions/e8ef4b09-2eaf-40be-8639-752e7d4af30d/resourceGroups/Azure-VNet",
  "location": "uksouth",
  "managedBy": null,
  "name": "Azure-VNet",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}
{
  "id": "/subscriptions/e8ef4b09-2eaf-40be-8639-752e7d4af30d/resourceGroups/Azure-VNet/providers/Microsoft.Resources/deployments/bb-01-vnet",
  "name": "bb-01-vnet",
  "properties": {
    "correlationId": "REMOVED",
    "debugSetting": null,
    "dependencies": [
      {
        "dependsOn": [
          {
            "id": "/subscriptions/e8ef4b09-2eaf-40be-8639-752e7d4af30d/resourceGroups/Azure-VNet/providers/Microsoft.Resources/deployments/bb-01-vnet-vnet-0-k7q3ykincijwc",
            "resourceGroup": "Azure-VNet",
            "resourceName": "bb-01-vnet-vnet-0-k7q3ykincijwc",
            "resourceType": "Microsoft.Resources/deployments"
          }
        ],
        "id": "/subscriptions/e8ef4b09-2eaf-40be-8639-752e7d4af30d/resourceGroups/workaround/providers/Microsoft.Resources/deployments/bb-01-vnet-vnetPeering-0-k7q3ykincijwc",
        "resourceGroup": "workaround",
        "resourceName": "bb-01-vnet-vnetPeering-0-k7q3ykincijwc",
        "resourceType": "Microsoft.Resources/deployments"
      }
    ],
    "mode": "Incremental",
    "outputs": {},
    "parameters": {
      "deploymentContext": {
        "type": "Object",
        "value": {
          "parentTemplateUniqueString": "bb-01-vnet",
          "sasToken": ""
        }
      },
      "virtualNetworkPeerings": {
        "type": "Array",
        "value": []
      },
      "virtualNetworks": {
        "type": "Array",
        "value": [
          {
            "location": "UKSouth",
            "name": "Azure-VNet",
            "properties": {
              "addressSpace": {
                "addressPrefixes": [
                  "172.16.0.0/16"
                ]
              },
              "dhcpOptions": {
                "dnsServers": []
              },
              "subnets": [
                {
                  "name": "GatewaySubnet",
                  "properties": {
                    "addressPrefix": "172.16.0.0/24"
                  }
                },
                {
                  "name": "Subnet-001",
                  "properties": {
                    "addressPrefix": "172.16.1.0/24"
                  }
                }
              ]
            },
            "resourceGroupName": "Azure-VNet",
            "subscriptionId": "e8ef4b09-2eaf-40be-8639-752e7d4af30d",
            "tags": {}
          }
        ]
      }
    },
    "parametersLink": null,
    "providers": [
      {
        "id": null,
        "namespace": "Microsoft.Resources",
        "registrationState": null,
        "resourceTypes": [
          {
            "aliases": null,
            "apiVersions": null,
            "locations": [
              null
            ],
            "properties": null,
            "resourceType": "deployments"
          }
        ]
      }
    ],
    "provisioningState": "Succeeded",
    "template": null,
    "templateLink": {
      "contentVersion": "1.0.0.0",
      "uri": "https://raw.githubusercontent.com/mspnp/template-building-blocks/v2.0.0/templates/buildingBlocks/virtualNetworks/virtualNetworks.json"
    },
    "timestamp": "2017-12-05T09:10:28.595858+00:00"
  },
  "resourceGroup": "Azure-VNet"
}

You should see “provisioningState”: “Succeeded” in the output, which indicates success. Take a look in the Azure portal to confirm that the resource group, the virtual network and the specified subnets all exist.

Stage 2: Deploying a Virtual Network Gateway

Now that we know how to use an Azure Building Blocks parameters file, let’s extend it to define a VPN gateway that we can use to connect our on-premises lab to Azure. First, let’s create a copy of vnet.json and work on that copy.

cp vnet.json vnet-and-vpn-gateway.json

Open vnet-and-vpn-gateway.json in your editor of choice and extend the value array of the buildingBlocks object as follows:

{
  "$schema": "https://raw.githubusercontent.com/mspnp/template-building-blocks/master/schemas/buildingBlocks.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "buildingBlocks": {
      "value": [
        {
          "type": "VirtualNetwork",
          "settings": [
            {
              "name": "Azure-VNet",
              "addressPrefixes": [
                "172.16.0.0/16"
              ],
              "subnets": [
                {
                  "name": "GatewaySubnet",
                  "addressPrefix": "172.16.0.0/24"
                },
                {
                  "name": "Subnet-001",
                  "addressPrefix": "172.16.1.0/24"
                }
              ]
            }
          ]
        },
        {
          "type": "VirtualNetworkGateway",
          "settings": [
            {
              "name": "Azure-VNet-Gateway",
              "gatewayType": "Vpn",
              "vpnType": "RouteBased",
              "sku": "Basic",
              "isPublic": true,
              "virtualNetwork": {
                "name": "Azure-VNet"
              }
            }
          ]
        }
      ]
    }
  }
}

That was easy, wasn’t it? Only 15 lines of simple JSON added to define a VPN gateway in the cloud.

Run the following in the shell to deploy the resources into the same resource group as previously:

azbb --deploy --parameters-file vnet-and-vpn-gateway.json --resource-group Azure-VNet --subscription-id e8ef4b09-2eaf-40be-8639-752e7d4af30d --location uksouth

Because our parameters file gets compiled to an AzureRM template, and the virtual network portion of that template already exists from our previous run, azbb will leave the virtual network untouched (it is already in the desired state) and only add the new gateway to the resource group. This makes everything idempotent, a very important characteristic of infrastructure-as-code systems.

This deployment operation could take a while, as provisioning a virtual network gateway can take up to about 45 minutes. Go make yourself a coffee and come back; on completion you should see that the VPN gateway was provisioned successfully.

Note: You might be wondering why we only had to specify the virtual network in the gateway’s settings and not a specific subnet within that network. This is because Virtual Network Gateways always operate on a subnet with the name GatewaySubnet (this cannot be overridden), which is why the first subnet in our Virtual Network definition has that name.

Stage 3: Configuring a Local Network Gateway

We’re mostly done with the cloud-side of our VPN, now to turn our attention to the on-premises component.

In the enterprise, you could have any number of appliances handling VPN connectivity for you (e.g. a Cisco ASA), hopefully configured in an infrastructure-as-code style. Choose whatever is appropriate for your environment, but to make this guide as accessible as possible I’m going to use something most of us have access to – a Windows Server 2012 R2 virtual machine.

We’re going to configure the VM using PowerShell DSC. The configuration uses only one DSC resource, the xRemoteAccess resource from GitHub. Unfortunately, despite being written by a Microsoft employee, the resource does not seem to be available on PowerShell Gallery at the moment. It requires a manual download and installation.

Note: PowerShell DSC is a declarative server configuration language based on PowerShell. It is similar to the DSLs of other popular configuration management tools, such as Ansible, Puppet and Chef. Check out our introduction to PowerShell DSC if you are unfamiliar with the language.

The following is my DSC configuration for a local site-to-site VPN gateway:

Configuration LocalS2SGateway {

    Import-DscResource -ModuleName PSDesiredStateConfiguration, xRemoteAccess

    Node $AllNodes.Where{$_.Role -contains "LocalS2SGateway"}.NodeName {

        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true
        }

        WindowsFeature Routing
        {
            Name = 'Routing'
            Ensure = 'Present'
        }

        WindowsFeature RemoteAccessPowerShell
        {
            Name = 'RSAT-RemoteAccess-PowerShell'
            Ensure = 'Present'
            DependsOn = '[WindowsFeature]Routing'
        }

        Service RemoteAccess
        {
            Name = 'RemoteAccess'
            StartupType = 'Automatic'
            State = 'Running'
            DependsOn = '[WindowsFeature]Routing'
        }

        RemoteAccess VpnS2S
        {
            VpnType = 'VpnS2S'
            Ensure = 'Present'
            DependsOn = '[Service]RemoteAccess'
        }

        VpnS2SInterface IKEv2
        {
            Name = $ConfigurationData.AzureVNet.IP
            Destination = $ConfigurationData.AzureVNet.IP
            IPv4Subnet = $ConfigurationData.AzureVNet.Subnet
            SharedSecret = $ConfigurationData.AzureVNet.SharedSecret
            Protocol = 'IKEv2'
            AuthenticationMethod = 'PSKOnly'
            ResponderAuthenticationMethod = 'PSKOnly'
            NumberOfTries = 3
            InitiateConfigPayload = $false
            Ensure = 'Present'
            DependsOn = "[RemoteAccess]VpnS2S"
        }

        VpnServerIPsecConfiguration Encryption
        {
            EncryptionType = 'MaximumEncryption'
            DependsOn = '[VpnS2SInterface]IKEv2'
        }
    }
}

This configuration ensures:

  • The Routing feature of Windows Server is installed.
  • The Remote Server Administration Tools for PowerShell are installed.
  • The RemoteAccess service is running and set to start automatically.
  • Site-to-site VPN is installed using IPsec, IKEv2 and pre-shared key authentication.

The core of the VPN configuration is defined in a ConfigurationData data structure and consumed here as variables to keep the configuration reasonably generic, with minimal instance-specific data. Here is the setup script used to apply the configuration to the VM:

$ErrorActionPreference = 'Stop'

# Download and unzip xRemoteAccess from GitHub, as it is not currently available
# in PowerShell Gallery.
$modulePath = "$env:ProgramFiles\WindowsPowerShell\Modules"
Invoke-WebRequest -UseBasicParsing -Uri https://github.com/mgreenegit/xRemoteAccess/archive/master.zip `
    -OutFile "$modulePath\xRemoteAccess.zip"
Add-Type -Assembly "System.IO.Compression.FileSystem"
[IO.Compression.ZipFile]::ExtractToDirectory("$modulePath\xRemoteAccess.zip", "$modulePath\xRemoteAccess")
Remove-Item -Path "$modulePath\xRemoteAccess.zip"

# WORKAROUND: PowerShell 4 does not like version folders in the module directory of DSC resources.
# So we need to remove everything out of the version directory and up a level for the xRemoteAccess
# resources to work.
if ($PSVersionTable.PSVersion -like "4*") {
    Copy-Item -Path "$modulePath\xRemoteAccess\xRemoteAccess-master\*" -Destination "$modulePath\xRemoteAccess" -Force -Recurse
    Remove-Item -Path "$modulePath\xRemoteAccess\xRemoteAccess-master" -Force -Recurse
}

# Source configuration
. C:\LocalS2SGateway.ps1

# Configuration data, separated from the DSC configuration itself to make it more reusable.
$configData = @{
    AllNodes = @(
        @{
            NodeName = $env:ComputerName
            Role = "LocalS2SGateway"
        }
    );

    AzureVNet = @{
        IP = "51.132.183.53"
        Subnet = "172.16.1.0/24:100"
        SharedSecret = "UseARealKey123!"
    }
}

# Compile our DSC configuration into a MOF file. Apply our config data.
LocalS2SGateway -OutputPath ".\MOF" -ConfigurationData $configData

# Apply the configuration to the machine.
Start-DscConfiguration -Path ".\MOF" -Wait

# Restart RemoteAccess service to finalize any configuration changes
Restart-Service -Name RemoteAccess

The AzureVNet part of $configData specifies the public IP of our Azure virtual network gateway (go find this in the portal if you don’t know it yet), the IP range of the Subnet-001 subnet and a shared secret, which we will share with the Azure gateway next.
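If you’d rather stay in the shell than hunt through the portal, the Azure CLI (already installed as an azbb dependency) can list the gateway’s public IP for you. A quick sketch, assuming the Azure-VNet resource group from earlier:

# The virtual network gateway's public IP lives in the same resource group.
az network public-ip list --resource-group Azure-VNet --query '[].[name, ipAddress]' --output table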

Note: The above DSC is written for WMF 4, the version that comes with Windows Server 2012 R2 by default. If you’re running WMF 5 your DSC is likely to look different as various parts of the language have changed. I wouldn’t expect the above to work on WMF 5 without changing a few things.

Stage 4: Establishing the VPN connection

Azure-side

Finally, we need to create the VPN connection between our gateways. This you can do with Azure Building Blocks, so let’s create a copy of our latest parameters file and extend it again.

cp vnet-and-vpn-gateway.json vnet-and-vpn-connection.json

Open vnet-and-vpn-connection.json and extend it to match the below:

{
  "$schema": "https://raw.githubusercontent.com/mspnp/template-building-blocks/master/schemas/buildingBlocks.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "buildingBlocks": {
      "value": [
        {
          "type": "VirtualNetwork",
          "settings": [
            {
              "name": "Azure-VNet",
              "addressPrefixes": [
                "172.16.0.0/16"
              ],
              "subnets": [
                {
                  "name": "GatewaySubnet",
                  "addressPrefix": "172.16.0.0/24"
                },
                {
                  "name": "Subnet-001",
                  "addressPrefix": "172.16.1.0/24"
                }
              ]
            }
          ]
        },
        {
          "type": "VirtualNetworkGateway",
          "settings": [
            {
              "name": "Azure-VNet-Gateway",
              "gatewayType": "Vpn",
              "vpnType": "RouteBased",
              "sku": "Basic",
              "isPublic": true,
              "virtualNetwork": {
                "name": "Azure-VNet"
              }
            }
          ]
        },
        {
          "type": "Connection",
          "settings": [
            {
              "name": "Azure-VPN-Connection",
              "connectionType": "IPsec",
              "routingWeight": 10,
              "vpnType": "RouteBased",
              "sharedKey": "UseARealKey123!",
              "virtualNetworkGateway": {
                "name": "Azure-VNet-Gateway"
              },
              "localNetworkGateway": {
                "name": "Azure-Local-Gateway",
                "ipAddress": "2.219.71.91",
                "addressPrefixes": [
                  "192.168.0.0/24"
                ]
              }
            }
          ]
        }
      ]
    }
  }
}

Then redeploy with:

azbb --deploy --parameters-file vnet-and-vpn-connection.json --resource-group Azure-VNet --subscription-id e8ef4b09-2eaf-40be-8639-752e7d4af30d --location uksouth

The above JSON blob with type Connection defines the cloud side of the site-to-site VPN connection, i.e. it configures IPsec and the pre-shared authentication key, and specifies the virtual network gateway and local network gateway that will form the connection. Understandably, Azure needs more information about your localNetworkGateway than about the cloud-side gateway. Its properties are:

  • name: specifies a name for your local gateway configuration in Azure.
  • ipAddress: the public IP address of your on-premises local gateway.
  • addressPrefixes: specifies the IP ranges whose traffic should be sent over the private connection to your local gateway.

Once the redeployment has succeeded, we can establish the connection on our local side.

Local-side

To establish the local-side of our VPN connection we can simply extend our PowerShell DSC configuration. Add the following DSC resource definition to the end of the previous configuration:

VpnS2SInterfaceConnection Connect
{
    Ensure = 'Present'
    Name = $ConfigurationData.AzureVNet.IP
    DependsOn = '[VpnServerIPsecConfiguration]Encryption'
}

Recompile the MOF and apply the configuration with Start-DscConfiguration, repeating some of the steps from the setup script above; a recap follows below. Because all the other resources are already in the correct state, the only change on the machine should be the establishment of the VPN connection.
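Concretely, the recap looks like this (assuming $configData is still defined from the setup script session):

# Re-source the updated configuration, recompile the MOF and apply it.
. C:\LocalS2SGateway.ps1
LocalS2SGateway -OutputPath ".\MOF" -ConfigurationData $configData
Start-DscConfiguration -Path ".\MOF" -Wait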

Use the Get-VpnS2SInterface cmdlet to check the status of the connection on completion.
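Running it with no parameters should list the configured site-to-site interface along with its connection state, as in the screenshot below:

Get-VpnS2SInterface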

connection_state

Stage 5: Testing the VPN

The moment of truth, can we connect to a machine in Azure over our new VPN?

If you have VMs connected to the Azure virtual network already, try to connect to one over Remote Desktop Connection (assuming you allow RDP traffic) using its private IP address.

If you don’t, quickly run through the new VM wizard in Azure to create a test machine; I chose a Server 2016 machine with a small disk. Be sure to attach it to the Azure-VNet virtual network and the Subnet-001 subnet, and set the Public IP option to None. Once provisioned, find the private IP address in the Networking blade of the VM in the portal. Then open Remote Desktop Connection on your local gateway VM and attempt to connect to that private IP address; you should be successful.
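If you want to verify the tunnel before reaching for the RDP client, a quick TCP port test from the local gateway VM does the job. A sketch, where the private IP is an illustrative placeholder for your test VM’s address:

# Checks TCP reachability of the RDP port across the VPN tunnel.
Test-NetConnection -ComputerName 172.16.1.4 -Port 3389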

rdp_over_private_ip

Closing thoughts

Hopefully, you can see that if we lost our cloud network or on-premises connectivity in a disaster, we now have the exact specification required (as code) to re-establish that core infrastructure with automation, leaving very little scope for misconfiguring the VPN or suffering extended downtime as a result. I would commit the above files to source control so that any future edits to the configuration can be vetted and reviewed by the appropriate people/process.

It’s also important to remember that infrastructure-as-code technologies are not just for configuring software packages on VMs. We should be building these deployable, verifiable specifications for all the infrastructure our critical services need to run.

Block Heads: Mastering block storage with the Azure Blob Service

The importance of practice

In software development, I find that you never know how to use a tool properly until you have used it at least once before. That’s why, when you take a wrong turn and need to start over with something, you can usually get back to where you were in half the time it took you originally.

It’s important to realize

Automating Office 365 with Microsoft Graph API

An API gateway to business productivity

What would you build if you could process the data generated by your business operations in real time? You could, for example:

  • See trending/abandoned documents and usage patterns.
  • Scan calendars to suggest optimum meeting times.
  • Map collaboration points between departments.
  • Automate a change management/approval process.
  • Manage a backlog of work.

And that’s just for starters! I would be willing to bet that a very large slice of the business performed in the world today is driven by the Microsoft Office apps, so imagine the potential gains from automating some of that. They’ve got to be huge.

Authenticating with Azure AD and OpenID Connect

Identity in the cloud

Identity management in the cloud is a totally different ball game to when everything was installed and accessed on the corporate network. Users in the enterprise authenticated with an on-premises directory service (e.g. Active Directory Domain Services) and this determined the apps and data they had access to. Occasionally, cross-forest federations were established to allow users belonging to one corporate domain to access resources in another.

Nowadays, with the proliferation of apps and services available in the cloud and the speed and ease with which we consume them

The fastest way to bulk insert to SQL Server in .NET

Love Entity Framework and ORMs

I love Entity Framework and other object-relational mappers (ORMs) like it. These frameworks allow us to code against an object-oriented data model and map any changes seamlessly (usually) to the underlying relational model that the data is persisted in. They reduce how much we need to worry about the database.

Where I have seen the most value

Security IQ: Public Key Cryptography

The importance of data security

I’m going to tell you something that you already know: security is important. No matter how much we might rather be looking at shiny new JavaScript frameworks, data integrity is one of those few issues that can destroy the reputation of our products and companies overnight. What UI framework BigBank’s web app uses will never make the front page news, but a visible security breach definitely will. There’s a lot at stake in getting it right.

Considering its importance, it’s clear