
Azure NAT Gateway - Implementation and Testing

· 8 min read

With most cloud resources being accessible over the internet, each publicly accessible resource has its own public IP address. This makes it much more challenging to administer the security and access rules for reaching third-party services.

Think along the lines of: you or your organisation might use a software-as-a-service CRM product that is only accessible from your organisation's IP for compliance/security reasons; you might access that CRM product from various Azure Virtual Desktop hosts, each with its own public IP or a random Microsoft Azure datacenter IP; or you might want to control multifactor authentication/Conditional Access policies for users using Azure services.

The administration of this, particularly in scenarios where other people or teams can create and manage resources, can be complex. Sure, you can use Standard Load Balancers, which would help, but you have to manage and pay for them, which is sometimes overkill.

Tunnelling outbound traffic from Azure resources (both IaaS and PaaS) in the same Virtual Network through a specific, known and controllable IP address or addresses is where the Azure NAT Gateway comes in, allowing you to easily control which IPs your traffic comes from. The NAT Gateway replaces the default Internet destination in the virtual network's routing table for the subnets it is associated with.

"The Azure NAT gateway is a fully managed, highly resilient service built into the Azure fabric, which can be associated with one or more subnets in the same Virtual Network, that ensures that all outbound Internet-facing traffic will be routed through the gateway. As a result, the NAT gateway gives you a predictable public IP for outbound Internet-facing traffic. It also significantly increases the available SNAT ports in scenarios where you have a high number of concurrent connections to the same public address/port combination."

My Testing

Now let's get testing the Azure NAT Gateway! To test the gateway, I created:

  • Virtual Network
  • NAT Gateway
  • Public IP address prefix
  • 1 Windows VM (Windows Server 2019) with Public IP
  • 1 Linux (Ubuntu 18.04) VM with Public IP
  • 1 Windows VM (Windows Server 2019) as a backend pool for an Azure Load Balancer
  • Virtual Machine Scale Set with four instances (each with Windows Server 2019)

Note: Each VM has RDP opened to allow inbound traffic from my network using the Public IP and a NAT rule allowing RDP traffic on the Load Balancer. There is no point-to-site or site-to-site VPN; RDP connections are directly over the internet to Australia East, from New Zealand.
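The test environment above can be sketched with Az PowerShell. This is a hedged sketch, not the exact commands I ran: the resource names, region and address space are examples, and it assumes the Az.Network module, an existing resource group and an existing VNet with a subnet named 'default'.

```powershell
# Example names only - adjust to your environment
$rg  = 'natgw-test-rg'
$loc = 'australiaeast'

# Public IP prefix (/31 = 2 addresses); it must be Standard SKU
$prefix = New-AzPublicIpPrefix -ResourceGroupName $rg -Name 'natgw-prefix' `
    -Location $loc -Sku Standard -PrefixLength 31

# NAT Gateway with the default 4-minute idle timeout
$natGw = New-AzNatGateway -ResourceGroupName $rg -Name 'aznatgw' `
    -Location $loc -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpPrefix $prefix

# Associate the gateway with an existing subnet so its outbound traffic is NATed
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name 'natgw-vnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -NatGateway $natGw | Set-AzVirtualNetwork
```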

NAT Gateway - Test

Once the Azure resources were created, I then connected to each machine using RDP/SSH on their Public IP address and tested:

Linux Machine with Public IP for SSH

  • Inbound Public IP:
  • Outbound IP:

Linux Azure NAT Gateway

As you can see, I connected to the Linux VM's public IP via SSH and ran curl against an IP-echo service to grab my public IP. The outbound public IP of my Linux box was from my NAT Gateway Public IP prefix!
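The same check works from PowerShell on the Windows VMs; ifconfig.me is just one example of an IP-echo service (any similar service works):

```powershell
# Ask an IP-echo service which address our outbound traffic appears to come from
$outboundIp = (Invoke-RestMethod -Uri 'https://ifconfig.me/ip').ToString().Trim()
Write-Host "Outbound public IP: $outboundIp"
```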

Windows Machine with Public IP for RDP

  • Inbound Public IP:
  • Outbound IP:

Windows Azure NAT Gateway

Using RDP to the public IP of the Windows Server, I navigated to an IP-checking website. As you can see, my outbound public IP address was from my NAT Gateway Public IP prefix!

Windows Machine behind an Azure Load Balancer

  • Inbound Public IP:
  • Outbound IP:

Windows Machine behind Azure Load Balancer NAT Gateway

This was the last of the three test machines I stood up. Using RDP to the public IP of the Azure Load Balancer, I navigated to the same IP-checking website. As you can see, my outbound public IP address was from my NAT Gateway Public IP prefix; however, this time it was the second IP address available in my /31 Public IP prefix.

Windows Machine behind a VM Scale Set

Although not in the diagram, I decided to add a VM Scale Set of four Virtual Machines to my testing (to save on cost, they are just Standard_B2ms machines, which are more than enough for my testing).

Azure NAT Gateway - VM Scale Set

As you can see from the mess that is my screenshot above, all machines had completely different inbound Public IP addresses. Still, the outbound public IP addresses came from the NAT Gateway as expected.

Findings and Observations

  • The outbound public IP did seem to change between the workloads; if I refreshed 'whatismyip' and 'ifconfig', the public IP changed between 184 and 185, although no loss of connectivity occurred to the Virtual Machines. This was linked to the 4-minute idle timeout configured on the NAT Gateway; I saw no reason to change the default timeout value. If I were that worried about keeping the same IP address, I would have chosen a single Public IP rather than a Public IP prefix on the NAT Gateway.
  • Any Public IP used on the same subnet as a NAT Gateway needs to be Standard.
  • If I had both a Public IP address and a Public IP prefix on my NAT gateway, the Prefix seemed to take precedence.
  • You cannot use a Public IP prefix that is in use by the NAT Gateway for any other workload (i.e. any inbound Public IPs); you would need another Public IP prefix resource.
  • A single NAT gateway resource supports from 64,000 up to 1 million concurrent flows. Each IP address provides 64,000 SNAT ports to the available inventory, and you can use up to 16 IP addresses per NAT gateway resource. The SNAT mechanism is described here in more detail.
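The SNAT numbers above are simple multiplication; a quick PowerShell sanity check of the inventory maths:

```powershell
# SNAT port inventory scales linearly with the number of attached public IPs
$portsPerIp = 64000
$ipCount    = 16           # maximum IPs per NAT gateway resource
$portsPerIp * $ipCount     # total SNAT ports available across the gateway
```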

Create a NAT Gateway

To create my NAT Gateway, I used the ARM Quickstart template, located here:

Then I created the additional Virtual Machines and Load Balancers and added them to the same VNET created as part of the NAT Gateway.

To create a NAT Gateway using the Azure Portal

  1. Log in to the Azure Portal and navigate to Create a resource, NAT Gateway (this link will get you there: Create-NATGateway).
  2. Select your Subscription
  3. Enter your NAT Gateway name
  4. Enter your Region
  5. Enter your availability zone
  6. Set your idle timeout (I suggest leaving this at 4 minutes, you can change it later if it presents issues)
  7. Create Azure NAT Gateway
  8. Click Next: Outbound IP
  9. We are just going to create a new Public IP address (it has to be Standard and Static; the Azure Portal automatically selects this for you). Although you can create your Public IP prefix here for scalability, you don't need both.
  10. Create Azure NAT Gateway
  11. Click Next: Subnet
  12. Create or link your existing Virtual Network and subnets and click Next: Tags
  13. Enter in any tags that may be relevant (Creator, Created on, Created for, Support Team etc.)
  14. Click Next: Review + Create
  15. Verify everything looks ok then click Create

Congratulations, you have now created your NAT Gateway!
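To double-check the result from Az PowerShell, a small hedged sketch (the resource group and gateway names below are examples):

```powershell
# Inspect the gateway we just created
$natGw = Get-AzNatGateway -ResourceGroupName 'natgw-test-rg' -Name 'aznatgw'
$natGw.IdleTimeoutInMinutes   # the default is 4
$natGw.Subnets.Id             # subnets whose outbound traffic now uses the gateway
```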

To create a NAT Gateway using Azure Bicep

Just a quick Bicep snippet I created to create the NAT Gateway resource only:


//Target Scope is: Resource Group
targetScope = 'resourceGroup'

//Set Variables and Parameters
param environment string = 'Prod'
param location string = resourceGroup().location
param dateTime string = utcNow('d')
param resourceTags object = {
  Application: 'Azure NAT Gateway/Azure Network Management'
  CostCenter: 'Operational'
  CreationDate: dateTime
  Environment: environment
}

//// Resource Creation

/// Create - NAT Gateway
resource NATGW 'Microsoft.Network/natGateways@2021-03-01' = {
  name: 'aznatgw'
  tags: resourceTags
  location: location
  sku: {
    name: 'Standard'
  }
  properties: {
    idleTimeoutInMinutes: 4
  }
}
It can be deployed by opening PowerShell (after Bicep is installed using the PowerShell method), logging in to Azure and running the following (replace RGNAME with the name of the Resource Group you will be deploying to). The NAT Gateway will be created in the same location as the Resource Group by default.

When you are actually ready to deploy, remove the -WhatIf at the end. Then you can go into the resource and add the Public IP/prefix.

New-AzResourceGroupDeployment -Name NatGwDeployment -ResourceGroupName RGNAME -TemplateFile .\Create_NATGateway.bicep -WhatIf
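Adding a Public IP prefix to the deployed gateway afterwards can be sketched with Az PowerShell; the names and region below are examples, and the parameters are assumed from the Az.Network module:

```powershell
# Create a Standard-SKU /31 prefix (two addresses) in the same region
$prefix = New-AzPublicIpPrefix -ResourceGroupName 'RGNAME' -Name 'natgw-prefix' `
    -Location 'australiaeast' -Sku Standard -PrefixLength 31

# Attach the prefix to the existing NAT Gateway
Set-AzNatGateway -ResourceGroupName 'RGNAME' -Name 'aznatgw' -PublicIpPrefix $prefix
```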

Additional Resources

Benefits to using the Microsoft Azure Cloud to host your Infrastructure

· 5 min read

Cloud computing offers many benefits over traditional on-premises infrastructure; ecosystems such as Microsoft Azure have an underlying fabric built for today's 'software as a service' or 'software defined' world.

The shift from managing on-premises Exchange environments for mail to consuming Microsoft 365 services has given IT and businesses more time to adopt, consume and continuously improve their technology, to get the most use of it and remain competitive in this challenging world.

Below is a high-level list of what I consider some of the benefits of using the Microsoft Azure ecosystem:

  • Each Azure datacentre 'region' has 3 Availability Zones; each zone acts as a separate datacentre with redundant power and networking, quickly allowing you to separate your services across different fault domains and zones for better resiliency, while keeping them logically and physically close together.
  • Geo-redundant replication of backups for Virtual Machines and PaaS/File Shares, and the ability to do cross-region restores (i.e., between paired Australian regions).
  • A multitude of hosts (supporting both AMD and Intel workloads) that are continually patched, maintained and tuned for virtualisation performance, stability and security. No longer do we need to spend hours patching, maintaining and licensing on-premises hypervisors (ever more important as these systems are targeted for vulnerabilities), or architecting how many physical hosts we may need to support a system.
  • Consistent, up-to-date hardware: no need to worry about lead times for new hardware, purchasing new hardware every three years, or the procurement and implementation costs, allowing you to spend the time improving the business and tuning your services (scaling up and down, trying new technologies, turning off devices etc.)
  • For those that like to hoard every file that ever existed, the Azure platform allows scale (in and out to suit your file sizes) along with cost-saving opportunities and tweaks with Automation and migrating files between cool/hot tiers.
  • No need to pay datacentre hosting costs
  • No need to worry about redundant switching
  • With multiple hosts, there is no risk from air-conditioning leaks or hardware failure; you don't need to worry about these unfortunate events occurring.
  • No need to pay electricity costs to host your workloads.
  • Reduced IT labour costs and time to implement and maintain systems
  • On-demand resources: you can easily stand up separate networks, unattached to your production network, for testing or other uses, without working through VLANs or complex switching and firewalls.
  • Azure networks have standard DDoS protection enabled by default.
  • Backups are secure by default; they are offline and managed by Microsoft, so if a ransomware attack occurs, it won't be able to touch your backups.
  • Constant security recommendations and improvements built into the platform.
  • Azure Files is geo-redundant, spread across multiple storage arrays, and encrypted at rest.
  • Windows/SQL licensing is all covered as part of the costings, so no need to worry about not adhering to MS licensing; Azure helps simplify what can sometimes be confusing and complex licensing.
  • Extended security updates for out-of-support Server OSes such as Windows Server 2008 R2 and Windows Server 2012 R2, without having to pay for extended update support.
  • Ability to leverage modern and remote desktop and application technologies such as Windows 365 and Azure Virtual Desktop, by accessing services hosted in Azure.
  • Having your workloads in Azure gives you a step towards removing the need for traditional domain controllers and migrating to Microsoft Entra ID joined devices.
  • Azure AutoManage functionality is built in to automatically patch Linux (and Windows of course!), without having to manage separate patching technologies for cross-platform infrastructure.
  • Azure has huge support for automation via PowerShell, CLI and API, allowing you to standardise, maintain, tear down and create infrastructure, services, monitoring and self-service for users on an as-needed basis.
  • Azure datacentres are sustainable and run on renewable energy where they can; Microsoft has commitments to be fully renewable.
  • No need for NAS or Local Backups, the backups are all built into Azure.
  • Compliant datacentre across various global security standards -
  • Ability to migrate or expand your resources from Australia to ‘NZ North’ or other new or existing data centres! Azure is global and gives you the ability to scale your infrastructure to a global market easily or bring your resources closer to home if a data centre becomes available.
  • We all know that despite the best of intentions, we rarely test, develop and improve disaster recovery scenarios; sometimes this is because of the complexity of the applications and backup infrastructure. Azure Site Recovery, geo-redundant backup, Load Balancers and automation help make this a lot easier.
  • Ability to better utilise cloud security tools (such as the Azure Security Center) across cloud and on-premises workloads consistently, using Azure Arc and Azure policies.
  • And finally, more visibility into the true cost and value of your IT infrastructure. The total cost of your IT infrastructure is hidden behind electricity costs, outages and incidents that would not have impacted cloud resources, slow time to deployment or market, outdated and insecure technologies, and most likely services you are running that you don't need to run!

#ProTip - Resources such as the Azure Total Cost of Ownership (TCO) calculator can help you calculate the true cost of your workloads.

Always on VPN - Error 809 The network connection between your computer and the VPN server could not be established

· 3 min read

I ran into a weird issue troubleshooting an 'Always On VPN' installation running off Windows Server 2019; the clients were getting the following error in the Application event log:

Error 809 The network connection between your computer and the VPN server could not be established

In my case, the issue wasn't due to IKEv2 fragmentation or anything to do with NAT allowing the origin IP to flow to the Always On VPN server. It was due to the number of IKEv2 ports being limited to 2. I found an old post regarding Windows Server 2008 R2:

"If more than two clients try to connect to the server at the same time, the Routing and Remote Access service rejects the IKEv2 connection requests. Additionally, the following message is logged in the Rastapi.log file:"

This matched my issue; I had never seen more than 2 connections at once.

Increase Ports

  1. Open Routing and Remote Access
  2. Click on your Routing and Remote Access server
  3. Right-click on Ports
  4. Click on: WAN Miniport (IKEv2)
  5. Click Configure
  6. Ensure that: To enable remote access, select Remote access connections (inbound only) is checked.
  7. Change Maximum ports from 2 (as an example) to a number that matches how many connections you want - I went with 128
  8. Click Ok
  9. Click Apply
  10. Restart the Routing and Remote Access service. You should now see more ports listed as 'inactive' until a new session comes along and uses one.
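The restart in the final step can also be done from an elevated PowerShell prompt:

```powershell
# Restart the RRAS service so the new port count takes effect
Restart-Service -Name RemoteAccess -Force

# Confirm the service is running again
Get-Service -Name RemoteAccess | Select-Object Status, StartType
```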

Routing and Remote Access


Enable TLS 1.1

Although this wasn't my initial fix, I had a Microsoft Support call opened regarding this issue; after analysing the logs, they recommended enabling TLS 1.1 (which was disabled by default on a Windows Server 2019 server). I would only do this as a last resort - if required.

Run the PowerShell script below (as Administrator) to Enable; you can always rerun the Disable script to remove the changes.

Enable TLS 1.1

function Enable-TLS11 {
    New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Force
    New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Force
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Name 'Enabled' -Value '1' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Name 'Enabled' -Value '1' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD'
    Write-Host 'Enabling TLSv1.1'
}
Enable-TLS11

Disable TLS 1.1

function Disable-TLS11 {
    New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Force
    New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Force
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Name 'Enabled' -Value '0' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server' -Name 'DisabledByDefault' -Value '1' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Name 'Enabled' -Value '0' -PropertyType 'DWORD'
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client' -Name 'DisabledByDefault' -Value '1' -PropertyType 'DWORD'
    Write-Host 'Disabling TLSv1.1'
}
Disable-TLS11

Update-AdmPwdAdSchema - The requested attribute does not exist

· One min read

Are you attempting to update the Active Directory Schema for LAPS (Local Administrator Password Solution) and keep getting the error below?

Update-AdmPwdAdSchema: The requested attribute does not exist

Here are a few things you can check:

  • Make sure you are a Schema Admin
  • Run PowerShell as Administrator
  • Run the PowerShell to update the schema directly from the Schema Master

You can use the snippet below to check which Domain Controller the Schema Master role is running from:

Get-ADDomainController -Filter * | Select-Object Name, Domain, Forest, OperationMasterRoles | Where-Object {$_.OperationMasterRoles -contains 'SchemaMaster'}
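If you only need the Schema Master specifically, the forest object exposes it directly (assumes the RSAT ActiveDirectory module is available):

```powershell
# Import-Module ActiveDirectory   # if not already loaded
(Get-ADForest).SchemaMaster      # FQDN of the DC holding the Schema Master role
```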

AVD-Collect - Azure Virtual Desktop Diagnostics and Logging

· 7 min read

AVD-Collect is a handy PowerShell script created by Microsoft Customer Support Services to assist with troubleshooting and resolving issues with Azure Virtual Desktop (and Windows 365). It captures logs for analysis (which can then be passed to Microsoft, or let you delve deeper yourself) and runs basic diagnostics against some common known issues.

You can download this script from:

There is no publicly available GitHub repository for it currently; Microsoft will retain the latest version of the script at this link.

This script was NOT created by me and comes 'as-is'; this article is merely intended to share the script to assist others in their AVD troubleshooting.

This script is intended to help Microsoft Customer Support assist customers, but it was made publicly accessible to help with MS Support cases and Azure Virtual Desktop diagnostics. No data is automatically uploaded to Microsoft.

Please be aware that the script may change and include new functionality not part of this article, please review the Changelog and Readme of the script directly.

A lot of the information below is contained in the script readme (including a list of the extensive diagnostics and log locations) and changelog; however, I am supplying this article for reference and to help share this nifty tool.

Script pre-requisites

  1. The script must be run with elevated permissions to collect all required data.

  2. All collected data will be archived into a .zip file located in the same folder as the script itself.

  3. As needed, run the script on AVD host VMs and/or Windows-based devices from where you connect to the AVD hosts.

  4. When launched, the script will present the Microsoft Diagnostic Tools End User License Agreement (EULA). You need to accept the EULA before you can continue using the script.

  5. If the script does not start, complaining about execution restrictions, then in an elevated PowerShell console run:

    	Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force -Scope Process

Acceptance of the EULA will be stored in the registry under HKCU\Software\Microsoft\CESDiagnosticTools, and you will not be prompted again to accept it as long as the registry key is in place. You can also use the "-AcceptEula" command line parameter to accept the EULA silently. This is a per-user setting, so each user running the script will accept the EULA once.
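You can confirm whether that acceptance marker is already present before running the script; a small sketch using the registry path quoted above:

```powershell
# True if the EULA acceptance key already exists for the current user
Test-Path 'HKCU:\Software\Microsoft\CESDiagnosticTools'
```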

Script scenarios

Core - suitable for troubleshooting issues that do not involve Profiles or Teams or MSIX App Attach

  • Collects core troubleshooting data without including Profiles/FSLogix/OneDrive or Teams or MSIXAA related data
  • Runs Diagnostics.

Core + Profiles - suitable for troubleshooting Profiles issues

  • Collects all Core data
  • Collects Profiles/FSLogix/OneDrive related information, as available
  • Runs Diagnostics.

Core + Teams - suitable for troubleshooting Teams issues

  • Collects all Core data
  • Collects Teams related information, as available
  • Runs Diagnostics.

Core + MSIX App Attach - suitable for troubleshooting MSIX App Attach issues

  • Collects all Core data
  • Collects MSIX App Attach related information, as available
  • Runs Diagnostics.

Core + MSRA - suitable for troubleshooting Remote Assistance issues

  • Collects all Core data
  • Collects Remote Assistance related information, as available
  • Runs Diagnostics.

Extended (all) - suitable for troubleshooting most issues, including Profiles/FSLogix/OneDrive, Teams and MSIX App Attach

  • Collects all Core data
  • Collects Profiles/FSLogix/OneDrive related information, as available
  • Collects Microsoft Teams related information, as available
  • Collects MSIX App Attach related information, as available
  • Runs Diagnostics.


DiagOnly - suitable for running Diagnostics only

  • Skips all Core/Extended data collection and runs Diagnostics only (regardless of any other parameters that have been specified).

The default scenario is "Core".

Available command line parameters (to preselect the desired scenario)

  • Core - Collects Core data + Runs Diagnostics
  • Extended - Collects all Core data + Extended (Profiles/FSLogix/OneDrive, Teams, MSIX App Attach) data + Runs Diagnostics
  • Profiles - Collects all Core data + Profiles/FSLogix/OneDrive data + Runs Diagnostics
  • Teams - Collects all Core data + Teams data + Runs Diagnostics
  • MSIXAA - Collects all Core data + MSIX App Attach data + Runs Diagnostics
  • MSRA - Collects all Core data + Remote Assistance data + Runs Diagnostics
  • DiagOnly - The script will skip all data collection and will only run the diagnostics part (even if other parameters have been included).
  • AcceptEula - Silently accepts the Microsoft Diagnostic Tools End User License Agreement.

Usage example with parameters

To collect only Core data (excluding Profiles/FSLogix/OneDrive, Teams, MSIX App Attach):

	.\AVD-Collect.ps1 -Core

To collect Core + Extended data (incl. Profiles/FSLogix/OneDrive, Teams, MSIX App Attach):

	.\AVD-Collect.ps1 -Extended

To collect Core + Profiles + MSIX App Attach data

	.\AVD-Collect.ps1 -Profiles -MSIXAA

To collect Core + Profiles data

	.\AVD-Collect.ps1 -Profiles

If you are missing any of the data that the script should normally collect, check the content of the "__AVD-Collect-Log.txt" and "__AVD-Collect-Errors.txt" files for more information. Some data may not be present during data collection and thus not picked up by the script.

Execute the script

  1. Download the AVD-Collect script to the session host you need to collect the logs from, if you haven't already.

  2. Extract the script to a folder (i.e. C:\Users\%username%\Downloads\AVD-Collect)

  3. Right-click on: AVD-Collect.ps1, select Properties

  4. Because this file has been downloaded from the internet, it may be in a protected/blocked status - select Unblock and click Apply

  5. Open Windows PowerShell as Administrator

  6. Now we need to change to the directory where the script is located; in my example, the command I use is:

    cd 'C:\Users\Luke\Downloads\AVD-Collect'
  7. By default, the script will run as 'Core', and I want to include everything (profiles, Teams etc.), so I run Extended:

    .\AVD-Collect.ps1 -Extended -AcceptEula
  8. Read the notice from the Microsoft Customer Support centre and press 'Y' if you accept, to move on to the next steps.

  9. The script will now run:

  AVD-Collect Script Running

  10. You will start to see new folders created in the directory the script is running from, containing the extracted log files. The script will take a few minutes to complete as it extracts the logs and then zips them.

  11. Once the script has run, there will be a ZIP file of all the logs collected by the script. In my example, the logs consisted of:

  • Certificates
  • Recent Event Logs
  • FSLogix logs
  • Networking
  • Registry keys
  • Teams information
  • System information
  • Networking and Firewall information

  AVD-Collect Logs

  12. If needed, you can now send or upload the ZIP file to Microsoft support. If you are troubleshooting yourself, you can navigate to the folders to look at the specific logs you want, all in one place!

  13. To look at diagnostic information, open the AVD-Diag.html file.

  14. You can now see a list of common issues, what the script is looking for, and whether the host has passed or failed these checks (this can be very useful for Azure Virtual Desktop hosts, to make sure all the standard configuration is done or being applied, including that the session host has access to all the external resources it needs):

  AVD-Collect Diagnostics