
Create a Site to Site VPN to Azure with a Ubiquiti Dream Machine Pro

· 7 min read

The Ubiquiti Dream Machine Pro has a lot of functionality built-in, including IPsec site-to-site VPN (Virtual Private Network) support.

I recently installed and configured a UDM-PRO at home, so now it's time to set up a site-to-site VPN to my Microsoft Azure network.

I will create the Virtual Network and Gateway resources using Azure Bicep; feel free to skip ahead if you already have these in place.

My address ranges are as follows (make sure you adjust to match your setup and IP ranges):

| On-premises | Azure |
| --- | --- |
| 192.168.1.0/24 | 10.0.0.0/16 |

Prerequisites

  • The latest Azure PowerShell modules and Azure Bicep/Azure CLI for local editing
  • An Azure subscription that you have at least contributor rights to
  • Permissions to the UDM Pro to set up a new network connection

I will be using PowerShell splatting as it's easier to edit and display. You can easily take the scripts here and make them your own.

Deploy - Azure Network and Virtual Network Gateway

I will assume that you have both Azure Bicep and the Azure PowerShell modules installed and the know-how to connect to Microsoft Azure.

Azure Bicep deployments (like ARM template deployments) support a 'TemplateParameterObject' parameter on the deployment cmdlets. 'TemplateParameterObject' allows Azure Bicep to accept parameters from PowerShell directly, which can be pretty powerful when used with a self-service portal or pipeline.

I will first make an Azure Resource Group using PowerShell for my Azure Virtual Network, then use the New-AzResourceGroupDeployment cmdlet to deploy my Virtual Network and subnets from my Bicep file.

Along with the Virtual Network, we will also create 2 other Azure resources needed for a Site to Site VPN: a Local Network Gateway (this will represent your on-premises subnet and external IP to assist with routing) and a Virtual Network Gateway (which is used to send encrypted traffic over the internet between your on-premises site(s) and Azure).

Update the parameters of the PowerShell script below to match your own needs; you may also need to edit the Bicep file itself to add/remove subnets and change the IP address space to match your standards.

The shared key will be used between the UDM Pro and your Azure network; make sure this is unique.
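If you need to generate a unique shared key, PowerShell can create one for you. This is a minimal sketch using the .NET cryptographic random number generator; any sufficiently long random string works:

```powershell
# Generates a random 40-character hexadecimal pre-shared key
$bytes = New-Object byte[] 20
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
$sharedkey = ($bytes | ForEach-Object { $_.ToString('x2') }) -join ''
$sharedkey
```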

#Connects to Azure
Connect-AzAccount
#Resource Group Name
$resourcegrpname = 'network_rg'
#Creates a resource group for the networking resources
New-AzResourceGroup -Name $resourcegrpname -Location 'AustraliaEast'
# Parameters splat, for Azure Bicep
# Parameter options for the Azure Bicep Template, this is where your Azure Bicep parameters go
$paramObject = @{
'sitecode' = 'luke'
'environment' = 'prod'
'contactEmail' = '[email protected]'
'sharedkey' = '18d5b51a17c68a42d493651bed88b73234bbaad0'
'onpremisesgwip' = '203.0.113.10' #Your on-premises public IP
'onpremisesaddress' = '192.168.1.0/24'
}
# Parameters for the New-AzResourceGroupDeployment cmdlet
$parameters = @{
'Name' = 'AzureNetwork-S2S'
'ResourceGroupName' = $resourcegrpname
'TemplateFile' = 'c:\temp\Deploy-AzVNETS2S.bicep'
'TemplateParameterObject' = $paramObject
'Verbose' = $true
}
#Deploys the Azure Bicep template
New-AzResourceGroupDeployment @parameters -WhatIf

Note: The '-WhatIf' parameter has been added as a safeguard; once you know the changes are suitable, remove it and rerun.

The Virtual Network Gateway can take 20+ minutes to deploy. Leave the Terminal/PowerShell window open; you can also check the deployment in the Azure Portal (under the Deployments panel in the Resource Group).

Azure Portal - Resource Group Deployments
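You can also poll the deployment status from the same PowerShell session (assuming the deployment name and resource group used above):

```powershell
# Checks the provisioning state of the Bicep deployment
Get-AzResourceGroupDeployment -ResourceGroupName 'network_rg' -Name 'AzureNetwork-S2S' |
    Select-Object DeploymentName, ProvisioningState, Timestamp
```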

The Azure Bicep file is located here:

Deploy-AzVNETS2S.bicep
targetScope = 'resourceGroup'

///Parameter and Variable Setting

@minLength(3)
@maxLength(6)
param sitecode string = ''

param environment string = ''
param contactEmail string = ''

param resourceTags object = {
Application: 'Azure Infrastructure Management'
CostCenter: 'Operational'
CreationDate: dateTime
Environment: environment
CreatedBy: contactEmail
Notes: 'Created on behalf of: ${sitecode} for their Site to Site VPN.'
}

param dateTime string = utcNow('d')
param location string = resourceGroup().location

param sharedkey string = ''
param onpremisesaddress string = ''
param onpremisesgwip string = ''

//Resource Naming Parameters
param virtualNetworks_vnet_name string = '${sitecode}-vnet'
param connections_S2S_Connection_Home_name string = 'S2S_Connection_Home'
param publicIPAddresses_virtualngw_prod_name string = '${sitecode}-pip-vngw-${environment}'
param localNetworkGateways_localngw_prod_name string = '${sitecode}-localngw-${environment}'
param virtualNetworkGateways_virtualngw_prod_name string = '${sitecode}-virtualngw-${environment}'

resource localNetworkGateways_localngw_prod_name_resource 'Microsoft.Network/localNetworkGateways@2020-11-01' = {
name: localNetworkGateways_localngw_prod_name

location: location
properties: {
localNetworkAddressSpace: {
addressPrefixes: [
onpremisesaddress
]
}
gatewayIpAddress: onpremisesgwip
}
}

resource publicIPAddresses_virtualngw_prod_name_resource 'Microsoft.Network/publicIPAddresses@2020-11-01' = {
name: publicIPAddresses_virtualngw_prod_name
tags: resourceTags
location: location
sku: {
name: 'Standard'
tier: 'Regional'
}
properties: {
publicIPAddressVersion: 'IPv4'
publicIPAllocationMethod: 'Static'
idleTimeoutInMinutes: 4
ipTags: []
}
}

resource virtualNetworks_vnet_name_resource 'Microsoft.Network/virtualNetworks@2020-11-01' = {
name: virtualNetworks_vnet_name
location: location
tags: resourceTags
properties: {
addressSpace: {
addressPrefixes: [
'10.0.0.0/16'
]
}
subnets: [
{
name: 'GatewaySubnet'
properties: {
addressPrefix: '10.0.0.0/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'AzureBastionSubnet'
properties: {
addressPrefix: '10.0.0.64/27'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'AzureFirewallSubnet'
properties: {
addressPrefix: '10.0.0.128/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'appservers'
properties: {
addressPrefix: '10.0.2.0/24'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
]
virtualNetworkPeerings: []
enableDdosProtection: false
}
}

resource virtualNetworks_vnet_name_appservers 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'appservers'
properties: {
addressPrefix: '10.0.2.0/24'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_AzureBastionSubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'AzureBastionSubnet'
properties: {
addressPrefix: '10.0.0.64/27'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_AzureFirewallSubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'AzureFirewallSubnet'
properties: {
addressPrefix: '10.0.0.128/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_GatewaySubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'GatewaySubnet'
properties: {
addressPrefix: '10.0.0.0/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource connections_S2S_Connection_Home_name_resource 'Microsoft.Network/connections@2020-11-01' = {
name: connections_S2S_Connection_Home_name
location: location
properties: {
virtualNetworkGateway1: {
id: virtualNetworkGateways_virtualngw_prod_name_resource.id
}
localNetworkGateway2: {
id: localNetworkGateways_localngw_prod_name_resource.id
}
connectionType: 'IPsec'
connectionProtocol: 'IKEv2'
routingWeight: 0
sharedKey: sharedkey
enableBgp: false
useLocalAzureIpAddress: false
usePolicyBasedTrafficSelectors: false
ipsecPolicies: []
trafficSelectorPolicies: []
expressRouteGatewayBypass: false
dpdTimeoutSeconds: 0
connectionMode: 'Default'
}
}

resource virtualNetworkGateways_virtualngw_prod_name_resource 'Microsoft.Network/virtualNetworkGateways@2020-11-01' = {
name: virtualNetworkGateways_virtualngw_prod_name
location: location
properties: {
enablePrivateIpAddress: false
ipConfigurations: [
{
name: 'default'
properties: {
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIPAddresses_virtualngw_prod_name_resource.id
}
subnet: {
id: virtualNetworks_vnet_name_GatewaySubnet.id
}
}
}
]
sku: {
name: 'VpnGw2'
tier: 'VpnGw2'
}
gatewayType: 'Vpn'
vpnType: 'RouteBased'
enableBgp: false
activeActive: false
bgpSettings: {
asn: 65515
bgpPeeringAddress: '10.0.0.62'
peerWeight: 0
}
vpnGatewayGeneration: 'Generation2'
}
}

Once deployed, run the following command to capture and copy the Gateway Public IP:

Get-AzPublicIPAddress | Select-Object Name, IpAddress 

Copy the Public IP; we will need this for configuring the UDM Pro. The address was allocated by Azure during the deployment.
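If you have multiple public IPs in your subscription, you can narrow the query down to the gateway's resource group (a sketch, assuming the resource group and naming prefix used earlier):

```powershell
# Retrieves only the Virtual Network Gateway public IP created by the Bicep template
Get-AzPublicIPAddress -ResourceGroupName 'network_rg' |
    Where-Object { $_.Name -like '*pip-vngw*' } |
    Select-Object Name, IpAddress
```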

Configure - Ubiquiti Dream Machine Pro

  1. Login to the UDM-Pro (Unifi OS)

  2. Click on Network (under the Applications heading)

  3. Click Settings (gear icon)

  Unifi OS - Network

  4. Click VPN

  UDM Pro Unifi OS - VPN

  5. Scroll down and click + Create Site-to-Site VPN

  6. Fill in the following information:

    • Network Name (i.e. Azure - SYD)
    • VPN Protocol (select Manual IPsec)
    • Pre-shared Key (enter the SAME key that was used by Azure Bicep to create the Connection; if you have lost it, it can be updated in Azure, under Shared key on the connection attached to the Virtual network gateway, but changing it will stop any other VPN connections using the old key)
    • Server Address (make sure you select the interface for your WAN/External IP)
    • Remote Gateway/Subnets (enter the Address Prefix of your Azure virtual network or subnets, remember to add any peered virtual networks, and press Enter)
    • Remote IP Address (enter the Public IP of the Virtual Network gateway, the same IP retrieved by the Get-AzPublicIPAddress cmdlet)

  UDM Pro - Azure S2S VPN

  7. Select Manual

  8. Select IPSec Profile, and select Azure Dynamic Routing

  9. Click Apply Changes

After a few minutes, the VPN should connect, and you should be able to reach devices on the Azure network using their private IP addresses.

If you have problems, make sure that the Gateway IPs line up and are correct, along with the pre-shared key. You can also Pause the Network from the UDM-Pro and Resume it to reinitiate the connection.

You can also troubleshoot the VPN connection from the Azure Portal, by navigating to the Virtual network gateway and selecting VPN Troubleshoot.

Azure Portal - VPN Troubleshoot
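You can also check the tunnel state from PowerShell (assuming the connection name and resource group created by the Bicep template earlier):

```powershell
# 'Connected' indicates the IPsec tunnel is up; 'Connecting'/'NotConnected' indicate issues
Get-AzVirtualNetworkGatewayConnection -Name 'S2S_Connection_Home' -ResourceGroupName 'network_rg' |
    Select-Object Name, ConnectionStatus, EgressBytesTransferred, IngressBytesTransferred
```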

Azure Optimization Engine

· 21 min read

This post is a part of Azure Spring Clean, which is a community event focused on Azure management topics from March 14-18, 2022.

Thanks to Joe Carlyle and Thomas Thornton for putting in the time and organising this event.

This article, along with others of its kind (articles, videos etc.), covers Azure management topics such as Azure Monitor, Azure Cost Management, Azure Policy, Azure Security Principles or Azure Foundations!

Today I will be covering the Azure Optimization Engine.

#AzureSpringClean - Azure Optimization Engine

Overview

The Azure Optimization Engine (AOE) is an extensible solution designed to generate optimization recommendations for your Azure environment, like a fully customizable Azure Advisor.

The first custom recommendations use-case covered by this tool was augmenting Azure Advisor Cost recommendations, particularly Virtual Machine right-sizing, with a fit score based on VM (Virtual Machine) metrics and properties.

The Azure Optimization Engine can…

  • Enable new custom recommendation types
  • Augment Azure Advisor recommendations with richer details that better drive action
  • Add a fit score to recommendations
  • Add historical perspective to recommendations (the older the recommendation, the higher the chances to remediate it)
  • Drive continuous automated optimisation

The Azure Optimisation Engine combines multiple data sources to give you better data-driven decisions and recommendations beyond those usually provided by the inbuilt Azure Advisor. Example use-cases and data sources can be seen below:

  • Azure Resource Graph (Virtual Machine and Managed Disks properties)
  • Azure Monitor Logs (Virtual Machine performance metrics)
  • Azure Consumption (consumption/billing usage details events)
  • Extracts data periodically to build a recommendations history
  • Joins and queries data in an analytics-optimised repository (Log Analytics)
  • Virtual Machine performance metrics collected with Log Analytics agent
  • Can leverage existing customer setup
  • Requires only a few metrics collected with a frequency >= 60 seconds

Besides collecting all Azure Advisor recommendations, AOE includes other custom recommendations that you can tailor to your needs:

  • Cost
    • Augmented Advisor Cost VM right-size recommendations, with fit score based on Virtual Machine guest OS metrics (collected by Log Analytics agents) and Azure properties
    • Underutilized VM Scale Sets
    • Unattached disks
    • Standard Load Balancers without backend pool
    • Application Gateways without backend pool
    • VMs deallocated a long time ago (forgotten VMs)
    • Orphaned Public IPs
  • High Availability
    • Virtual Machine high availability (availability zones count, availability set, managed disks, storage account distribution when using unmanaged disks)
    • VM Scale Set high availability (availability zones count, managed disks)
    • Availability Sets structure (fault/update domains count)
  • Performance
    • VM Scale Sets constrained by lack of compute resources
  • Security
    • Service Principal credentials/certificates without expiration date
    • NSG rules referring to empty or non-existing subnets
    • NSG rules referring to orphan or removed NICs
    • NSG rules referring to orphan or removed Public IPs
  • Operational Excellence
    • Load Balancers without backend pool
    • Service Principal credentials/certificates expired or about to expire
    • Subscriptions close to the maximum limit of RBAC (Role Based Access Control) assignments
    • Management Groups close to the maximum limit of RBAC assignments
    • Subscriptions close to the maximum limit of resource groups
    • Subnets with low free IP space
    • Subnets with too much IP space wasted
    • Empty subnets
    • Orphaned NICs

Feel free to skip to the Workbook and PowerBI sections to look at some of the out-of-the-box data and recommendations.

The Azure Optimisation Engine is battle-tested

  • Providing custom recommendations since Nov 2019
  • Serving Azure customers worldwide
  • From smaller customers with 50-500 VMs to larger ones with more than 5K VMs
  • Several customer-specific developments (custom collectors and recommendation algorithms)
  • Flexible options, including multi-subscription and multi-tenant capability
  • Based on cheap services (Azure Automation, Storage, small SQL Database)

A few hours after setting up the engine, you will get access to a Power BI dashboard and Log Analytics Workbooks with all Azure optimisation opportunities, coming from both Azure Advisor and the tailored recommendations included in the engine.

These recommendations are then updated every seven days.

It is worth noting that the Azure Optimisation Engine is NOT an official Microsoft product and, as such, is under no official support. It was created and is maintained by Hélder Pinto, a Senior Customer Engineer at Microsoft. I would like to take the opportunity to thank Hélder for the amazing work he is doing with this product on a continuous basis, and for giving me his blessing to write this article; he has already done an amazing job documenting it on GitHub.

Architecture

Azure Optimization Engine Architecture

Azure Optimization Engine runs on top of Azure Automation (with Runbooks for each data source) and Log Analytics. It is supplemented by a storage account to store JSON exports and an Azure SQL database to help control ingestion (last processed blob and lines processed).

Install

Prerequisites

Taken directly from the Git repository readme, the prerequisites for the Azure Optimization Engine are:

  • A supported Azure subscription (see the FAQs on GitHub)
  • Azure PowerShell 6.6.0+ (Azure Bicep support is not currently available but is being worked on)
  • Microsoft.Graph.Authentication and Microsoft.Graph.Identity.DirectoryManagement PowerShell modules
  • A user account with Owner permissions over the chosen subscription, so that the Automation Managed Identity is granted the required privileges over the subscription (Reader) and deployment resource group (Contributor)
  • (Optional) A user account with at least Privileged Role Administrator permissions over the Azure AD tenant, so that the Managed Identity is granted the required privileges over Azure AD (Global Reader)

During deployment, you'll be asked several questions. It would be best if you planned for the following:

  • Whether you're going to reuse an existing Log Analytics Workspace or create a new one. IMPORTANT: you should ideally reuse a workspace where you have VMs onboarded and already sending performance metrics (Perf table); otherwise, you will not fully leverage the augmented right-size recommendations capability. If this is not possible/desired for some reason, you can still manage to use multiple workspaces (see Configuring Log Analytics workspaces).
  • An Azure subscription to deploy the solution (if you're reusing a Log Analytics workspace, you must deploy into the same subscription the workspace is in).
  • A unique name prefix for the Azure resources being created (if you have specific naming requirements, you can also choose resource names during deployment)
  • Azure region

If the deployment fails for some reason, you can repeat it, as it is idempotent (i.e. it can be applied multiple times without changing the result). The same process is used to upgrade a previous deployment with the latest version. You have to keep the same deployment options, so make sure you document them.

We will now go through and install the prerequisites from scratch; in this article, I will be deploying the Azure Optimization Engine from my local workstation.

You can also install from the Azure Cloud Shell.

Install Azure PowerShell & Microsoft Graph modules
  1. Open Windows PowerShell

  2. Type in:

    Install-Module -Name Az,Microsoft.Graph.Authentication,Microsoft.Graph.Identity.DirectoryManagement -Scope CurrentUser -Repository PSGallery -Force

Install

Now that we have the prerequisites installed, let's set up the Azure Optimization Engine!

  1. In your favourite web browser, navigate to the AzureOptimizationEngine GitHub repository.

  2. Select Code, Download Zip

  3. Azure Optimization Engine - GitHub

  4. Download and extract the ZIP file to a location you can easily navigate to in PowerShell (I have extracted it to C:\temp\AzureOptimizationEngine-master\AzureOptimizationEngine-master)

  5. Open PowerShell (or Windows Terminal)

  6. Because the scripts were downloaded from the internet, we will need to unblock them so that we can run them. Open PowerShell and run the command below (changing the path to the location you extracted the files to):

    Get-ChildItem -r 'C:\temp\AzureOptimizationEngine-master\AzureOptimizationEngine-master' | Unblock-File
  7. Now that the script and associated files have been unblocked, change the directory to the location of the Deploy-AzureOptimizationEngine.ps1 file.

  8. Run: .\Deploy-AzureOptimizationEngine.ps1

  9. Windows Terminal -\Deploy-AzureOptimizationEngine.ps1

  10. A browser window will then pop up; authenticate to Azure (connect to the Azure tenant that has access to the Azure subscription you wish to set up the Azure Optimization Engine on).

  11. Once authenticated, you will need to confirm the Azure subscription to which you want to deploy the Azure Optimization Engine.

  12. Azure Optimization Engine - Select Subscription

  13. Once your subscription is selected, it's time to choose a naming prefix for your resources (if you choose Enter, you can manually name each resource); in my case, my prefix will be: aoegeek. Because Azure Optimization Engine will be creating resources that are globally available, make sure you select a prefix that suits your organisation/use-case as you may run into issues with the name already being used.

  14. Azure Optimization Engine - Select Region

  15. If you have an existing Log Analytics workspace that your Virtual Machines and resources are connected to, you can specify 'Y' here to select your existing resource; I am creating this fresh, so I will choose 'N'.

  16. Azure Log Analytics

  17. The Azure Optimization Engine will now check that the names and resources are available to be deployed to your subscriptions and resources (nothing is deployed during this stage - if there is an error, you can fix the issue and go back).

  18. Once validation has passed, select the region that Azure Optimization will be deployed to; I will deploy to australiaeast, so I choose 1.

  19. Azure Optimization Engine now requires the SQL Admin username for the SQL server and database it will create; I will go with: sqladmin

  20. Azure Optimization Engine - Region

  21. Now enter the password for the sqladmin account and press Enter

  22. Verify that everything is correct, then press Y to deploy Azure Optimization Engine!

  23. Windows Terminal - Deploy Azure Optimization Engine

  24. Deployment could take 10-25 minutes... (mine took 22 minutes and 51 seconds)

  25. While leaving the PowerShell window open, log into the Azure Portal; you should now have a new Resource Group, and your resources will start being created... you can click on Deployments (under the Settings navigation bar) in the Resource Group to review the deployment status.

  26. Azure Portal - Deployments

  27. If you notice a failure in the Deployment tab for 'PolicyDeployment', you can ignore this; it may have failed because the SQL Server hadn't been provisioned yet. Once it has been provisioned, you can navigate back to the failed deployment and click 'Redeploy' to deploy the SQL Security Alert policy.

Note: The Azure SQL server firewall will allow the Public IP of the location the script was deployed from; you may need to adjust this depending on your requirements.
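If your public IP changes, or you need to run the engine's scripts from another location, you can add a firewall rule to the Azure SQL server. A sketch; the server name 'aoegeek-sql' and the client IP are assumptions based on the naming prefix chosen earlier:

```powershell
# Allows an additional client IP through the Azure SQL server firewall
New-AzSqlServerFirewallRule -ResourceGroupName 'aoegeek-rg' -ServerName 'aoegeek-sql' `
    -FirewallRuleName 'HomeOffice' -StartIpAddress '203.0.113.50' -EndIpAddress '203.0.113.50'
```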

Configure

Onboard Azure VMs to Log Analytics using Azure Policy and PowerShell

Now that the Azure Optimization Engine has been installed, let's onboard our current and future Azure Virtual Machines to it, using Azure Policy. This is required if you want to get Azure Advisor Virtual Machine right-size recommendations augmented with guest OS metrics. If you don't collect metrics from the Virtual Machines, you will still have a fully functional Optimisation Engine with many recommendations, but the Advisor Virtual Machine right-size ones will be served as is.

  1. Open PowerShell and login to Azure using: Connect-AzAccount

  2. Connect to your Azure subscription that contains the Virtual Machines you want to onboard to Log Analytics

  3. Type:

    # Register the resource provider if it's not already registered
    Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
  4. The PowerShell script below will create a user-assigned Managed Identity, grant it the Log Analytics Contributor role, assign the built-in Log Analytics extension policy, and create a remediation task.

  5. Just update the variables to match your setup:

    #requires -Version 1.0
    # Variables
    #Enter your subscription name
    $subscriptionName = 'luke.geek.nz'
    #Enter the display name for the policy assignment
    $policyDisplayName = 'Deploy - Log Analytics' #Can't exceed 24 characters
    $location = 'australiaeast'
    $resourceGroup = 'aoegeek-rg'
    $UsrIdentityName = 'AOE_ManagedIdentityUsr'
    $param = @{
    logAnalytics = 'aoegeek-la'
    }
    # Get a reference to the subscription that will be the scope of the assignment
    $sub = Get-AzSubscription -SubscriptionName $subscriptionName
    $subid = $sub.Id
    #Creates User Managed identity
    $AzManagedIdentity = New-AzUserAssignedIdentity -ResourceGroupName $resourceGroup -Name $UsrIdentityName
    #Waits 10 seconds to allow Azure AD to replicate and recognise that the Managed Identity has been created
    Start-Sleep -Seconds '10'
    #Assigns the Log Analytics Contributor role to the Managed Identity at the subscription scope
    New-AzRoleAssignment -Objectid $AzManagedIdentity.PrincipalId -scope ('/subscriptions/' + $subid ) -RoleDefinitionName 'Log Analytics Contributor'
    # Get a reference to the built-in policy definition that will be assigned
    $definition = Get-AzPolicyDefinition | Where-Object -FilterScript {
    $_.Properties.DisplayName -eq 'Deploy - Configure Log Analytics extension to be enabled on Windows virtual machines'
    }
    # Create the policy assignment with the built-in definition against your subscription
    New-AzPolicyAssignment -Name $policyDisplayName -DisplayName $policyDisplayName -Scope ('/subscriptions/' + $subid ) -PolicyDefinition $definition -IdentityType 'UserAssigned' -IdentityId $AzManagedIdentity.id -location $location -PolicyParameterObject $param
    #Creates a remediation task to deploy the Log Analytics extension to the VMs
    $policyAssignmentID = Get-AzPolicyAssignment -Name $policyDisplayName | Select-Object -Property PolicyAssignmentId
    Start-AzPolicyRemediation -Name 'Deploy - LA Agent' -PolicyAssignmentId $policyAssignmentID.PolicyAssignmentId -ResourceDiscoveryMode ReEvaluateCompliance

Note: The default 'Deploy - Configure Log Analytics extension to be enabled on Windows virtual machines' policy doesn't currently support Gen 2 or Windows Server 2022 Virtual Machines. If you have these, you can copy the Azure Policy definition and make your own with the new imageSKUs, although this policy may be replaced by the Configure Windows virtual machines to run Azure Monitor Agent policy. Although I haven't tested it yet, the script above can be modified to suit.
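Once the remediation task has been created, you can track its progress from the same PowerShell session:

```powershell
# Lists remediation tasks and their state at the current subscription scope
Get-AzPolicyRemediation |
    Select-Object Name, ProvisioningState, CreatedOn
```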

Onboard Azure VMs to Log Analytics using the Azure Portal

If you do not want to onboard VMs with Policy, you can do it manually via the Azure Portal.

  1. Open Azure Portal
  2. Navigate to Log Analytic Workspaces
  3. Click on the Log Analytic workspace that was provisioned for Azure Optimization Engine
  4. Navigate to Virtual Machines (under Workspace Data Sources)
  5. Click on the Virtual Machine you want to link up to the Log Analytics workspace, and click Connect - this will trigger the Log Analytics extension and agent to be installed. Repeat for any further Virtual Machines.
  6. Log Analytics - Connect VM
Setup Log Analytic Performance Counters

Now that we have Virtual Machines reporting to our Log Analytics instance, it's time to make sure we are collecting as much data as we need to give suitable recommendations. Luckily, a script called 'Setup-LogAnalyticsWorkspaces.ps1' is already included in the Azure Optimisation Engine repository to configure the performance counters.

  1. Open PowerShell (or Windows Terminal)

  2. Change the directory to the location of the Setup-LogAnalyticsWorkspaces.ps1, in the root folder of the repository extracted earlier

  3. Run the following PowerShell commands to download the required PowerShell Modules:

    Install-Module -Name Az.ResourceGraph
    Install-Module -Name Az.OperationalInsights
  4. Then run: .\Setup-LogAnalyticsWorkspaces.ps1

  5. The script will then go through all Log Analytic workspaces that you have access to and check for performance counters.

  6. Windows PowerShell - \Setup-LogAnalyticsWorkspaces.ps1

  7. If any are missing from the Log Analytics workspace, you can run:

    ./Setup-LogAnalyticsWorkspaces.ps1 -AutoFix

or

 #Fix specific workspaces configuration, using a custom counter collection frequency
./Setup-LogAnalyticsWorkspaces.ps1 -AutoFix -WorkspaceIds "d69e840a-2890-4451-b63c-bcfc5580b90f","961550b2-2c4a-481a-9559-ddf53de4b455" -IntervalSeconds 30
Setup Azure AD-based recommendations by granting permissions to Managed Identity.

The Azure Optimization Engine can make recommendations based on Microsoft Entra ID roles and permissions, but in order to do that, the System Assigned Identity of the Azure Optimization Engine Automation account needs to be given 'Global Reader' rights. As part of the deployment, you may have gotten the following error:

Cannot bind argument to parameter 'DirectoryRoleId' because it is an empty string.

Could not grant role. If you want Azure AD-based recommendations, please grant the Global Reader role manually to the aoegeek-auto managed identity or, for previous versions of AOE, to the Run As Account principal.

We are going to grant the Azure Automation account 'Global Reader' rights manually in the Azure Portal.

  1. Open Azure Portal
  2. Navigate to Automation Accounts
  3. Open your Azure Optimisation Engine automation account
  4. Navigate down the navigation bar to the Account Settings section and select: Identity
  5. Azure Automation - Identity
  6. Copy the object ID
  7. Now navigate to Microsoft Entra ID
  8. Click on Roles and Administrators
  9. Search for: Global Reader
  10. Select Global Reader and select + Add assignments
  11. Paste in the object ID earlier, and click Ok to grant Global Reader rights to the Azure Automation identity.
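The portal steps above can also be scripted with the Microsoft Graph PowerShell modules installed earlier. A sketch; replace the placeholder object ID with your Automation account's identity, and note the Global Reader role must already be activated in your tenant:

```powershell
# Connects to Microsoft Graph with the permission needed to manage directory roles
Connect-MgGraph -Scopes 'RoleManagement.ReadWrite.Directory'
# Finds the Global Reader directory role (must already be activated in the tenant)
$role = Get-MgDirectoryRole -Filter "displayName eq 'Global Reader'"
# The object ID of the Automation account's System Assigned Identity, copied earlier
$objectId = '00000000-0000-0000-0000-000000000000'
# Adds the managed identity to the Global Reader role
New-MgDirectoryRoleMemberByRef -DirectoryRoleId $role.Id `
    -BodyParameter @{ '@odata.id' = "https://graph.microsoft.com/v1.0/directoryObjects/$objectId" }
```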
Azure Automation - Runbooks & Automation

Azure Automation and its Runbooks are the wind beneath the Azure Optimization Engine's wings; at the time I deployed this, I had 1 Azure Automation account and 33 runbooks!

Looking at the runbooks deployed, you can get a sense of what Azure Optimization Engine is doing...

| Name | Type |
| --- | --- |
| aoegeek-auto | Automation Account |
| Export-AADObjectsToBlobStorage (aoegeek-auto/Export-AADObjectsToBlobStorage) | Runbook |
| Export-AdvisorRecommendationsToBlobStorage (aoegeek-auto/Export-AdvisorRecommendationsToBlobStorage) | Runbook |
| Export-ARGAppGatewayPropertiesToBlobStorage (aoegeek-auto/Export-ARGAppGatewayPropertiesToBlobStorage) | Runbook |
| Export-ARGAvailabilitySetPropertiesToBlobStorage (aoegeek-auto/Export-ARGAvailabilitySetPropertiesToBlobStorage) | Runbook |
| Export-ARGLoadBalancerPropertiesToBlobStorage (aoegeek-auto/Export-ARGLoadBalancerPropertiesToBlobStorage) | Runbook |
| Export-ARGManagedDisksPropertiesToBlobStorage (aoegeek-auto/Export-ARGManagedDisksPropertiesToBlobStorage) | Runbook |
| Export-ARGNICPropertiesToBlobStorage (aoegeek-auto/Export-ARGNICPropertiesToBlobStorage) | Runbook |
| Export-ARGNSGPropertiesToBlobStorage (aoegeek-auto/Export-ARGNSGPropertiesToBlobStorage) | Runbook |
| Export-ARGPublicIpPropertiesToBlobStorage (aoegeek-auto/Export-ARGPublicIpPropertiesToBlobStorage) | Runbook |
| Export-ARGResourceContainersPropertiesToBlobStorage (aoegeek-auto/Export-ARGResourceContainersPropertiesToBlobStorage) | Runbook |
| Export-ARGUnmanagedDisksPropertiesToBlobStorage (aoegeek-auto/Export-ARGUnmanagedDisksPropertiesToBlobStorage) | Runbook |
| Export-ARGVirtualMachinesPropertiesToBlobStorage (aoegeek-auto/Export-ARGVirtualMachinesPropertiesToBlobStorage) | Runbook |
| Export-ARGVMSSPropertiesToBlobStorage (aoegeek-auto/Export-ARGVMSSPropertiesToBlobStorage) | Runbook |
| Export-ARGVNetPropertiesToBlobStorage (aoegeek-auto/Export-ARGVNetPropertiesToBlobStorage) | Runbook |
| Export-AzMonitorMetricsToBlobStorage (aoegeek-auto/Export-AzMonitorMetricsToBlobStorage) | Runbook |
| Export-ConsumptionToBlobStorage (aoegeek-auto/Export-ConsumptionToBlobStorage) | Runbook |
| Export-RBACAssignmentsToBlobStorage (aoegeek-auto/Export-RBACAssignmentsToBlobStorage) | Runbook |
| Ingest-OptimizationCSVExportsToLogAnalytics (aoegeek-auto/Ingest-OptimizationCSVExportsToLogAnalytics) | Runbook |
| Ingest-RecommendationsToSQLServer (aoegeek-auto/Ingest-RecommendationsToSQLServer) | Runbook |
| Recommend-AADExpiringCredentialsToBlobStorage (aoegeek-auto/Recommend-AADExpiringCredentialsToBlobStorage) | Runbook |
| Recommend-AdvisorAsIsToBlobStorage (aoegeek-auto/Recommend-AdvisorAsIsToBlobStorage) | Runbook |
| Recommend-AdvisorCostAugmentedToBlobStorage (aoegeek-auto/Recommend-AdvisorCostAugmentedToBlobStorage) | Runbook |
| Recommend-ARMOptimizationsToBlobStorage (aoegeek-auto/Recommend-ARMOptimizationsToBlobStorage) | Runbook |
| Recommend-LongDeallocatedVmsToBlobStorage (aoegeek-auto/Recommend-LongDeallocatedVmsToBlobStorage) | Runbook |
| Recommend-UnattachedDisksToBlobStorage (aoegeek-auto/Recommend-UnattachedDisksToBlobStorage) | Runbook |
| Recommend-UnusedAppGWsToBlobStorage (aoegeek-auto/Recommend-UnusedAppGWsToBlobStorage) | Runbook |
| Recommend-UnusedLoadBalancersToBlobStorage (aoegeek-auto/Recommend-UnusedLoadBalancersToBlobStorage) | Runbook |
| Recommend-VMsHighAvailabilityToBlobStorage (aoegeek-auto/Recommend-VMsHighAvailabilityToBlobStorage) | Runbook |
| Recommend-VMSSOptimizationsToBlobStorage (aoegeek-auto/Recommend-VMSSOptimizationsToBlobStorage) | Runbook |
| Recommend-VNetOptimizationsToBlobStorage (aoegeek-auto/Recommend-VNetOptimizationsToBlobStorage) | Runbook |
| Remediate-AdvisorRightSizeFiltered (aoegeek-auto/Remediate-AdvisorRightSizeFiltered) | Runbook |
| Remediate-LongDeallocatedVMsFiltered (aoegeek-auto/Remediate-LongDeallocatedVMsFiltered) | Runbook |
| Remediate-UnattachedDisksFiltered (aoegeek-auto/Remediate-UnattachedDisksFiltered) | Runbook |

Many of the runbooks link up to Azure Automation variables, such as the Log Analytics workspace ID or the period (in days) to look back for Advisor recommendations. By default, the look-back period is '7', but you can change the variable to suit your organisation's needs.
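To see which variables your deployment exposes, you can list them with the Az.Automation module. This is only a sketch: the resource group name below is an assumption from this walkthrough, so adjust it (and the Automation account name) to match your deployment.

```powershell
# List the Azure Optimization Engine Automation variables and pick out the
# Advisor-related ones (resource group name is an assumption - adjust to yours).
Get-AzAutomationVariable -ResourceGroupName 'aoegeek-rg' -AutomationAccountName 'aoegeek-auto' |
    Where-Object { $_.Name -like '*Advisor*' } |
    Select-Object Name, Value
```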

Azure Automation - Runbooks & Automation

Azure Automation - Schedules

Along with the variables and configuration used by the Runbooks, the Automation account also contains the schedules for ingesting data into the storage account and SQL databases. Most of these are daily, but some, such as ingesting from Azure Advisor, are weekly. By default, these times are in UTC.

Azure Automation - Schedules

When making changes to these schedules (or moving the Runbooks to be run from a Hybrid worker), it is recommended to use the Reset-AutomationSchedules.ps1 script. These times need to be in UTC.
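Because the schedules are stored in UTC, it pays to convert your intended local run time first. A quick sketch:

```powershell
# Convert a local start time to UTC before entering it into Reset-AutomationSchedules.ps1
$localStart = Get-Date '2022-06-01 08:00'
$localStart.ToUniversalTime().ToString('u')
```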

Terminal - Reset-AutomationSchedules.ps1

Azure Automation - Credentials

When we set up the Azure SQL database earlier, as part of the Azure Optimization Engine setup, we configured the SQL admin account and password. These credentials are stored in the Azure Automation Credentials pane and used by the Runbooks.

Azure Automation - Credentials

View Recommendations

It's worth noting that because Azure Optimization Engine stores its data in Log Analytics and SQL, you can run KQL queries directly against the Log Analytics workspace to pull out any information you might need and build integrations into other toolsets.
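As a sketch of what that looks like, you can run a query against the workspace from PowerShell with the Az.OperationalInsights module. The table name below is an assumption (AOE ingests custom logs, which get a `_CL` suffix), so check your workspace for the exact table names.

```powershell
# Query AOE data in Log Analytics from PowerShell.
# The table name is an assumption - list the tables in your workspace to confirm.
$query = @"
AzureOptimizationRecommendationsV1_CL
| where TimeGenerated > ago(7d)
| summarize count() by Category
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query |
    Select-Object -ExpandProperty Results
```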

Workbooks

There are three Azure Log Analytics workbooks included with the Azure Optimization Engine, as follows:

| NAME | TYPE |
| --- | --- |
| Resources Inventory | Azure Workbook |
| Identities and Roles | Azure Workbook |
| Costs Growing | Azure Workbook |

They can be easily accessed in the Azure Portal.

  1. Log in to the Azure Portal
  2. Navigate to Log Analytics Workspaces
  3. Click on the Log Analytics workspace you set up for Azure Optimization Engine earlier and click on Workbooks (under General).
  4. Click on the Workbooks filter at the top to display the three Azure Optimization Engine workbooks.
  5. Log Analytics - Workbooks
  6. After a few days of collecting data, you should be able to see data like below.
Resource Inventory - General

Resource Inventory - Virtual Machines

Resource Inventory - Virtual Machine ScaleSets

Resource Inventory - Virtual Machine ScaleSets Disks

Resource Inventory - Virtual Networks

Identities and Roles - Overview

Power BI

The true power of the Azure Optimization Engine is the data stored in the SQL database; using Power BI, you can pull that data, and the recommendations, into dashboards and make it more meaningful.

The Optimization Engine already includes a starter Power BI file, which pulls data from the database.

Install PowerBI Desktop
  1. Open the Microsoft Store and search for: Power BI Desktop
  2. Click Get
  3. Power BI Desktop
  4. Once downloaded, click Open
Obtain Azure SQL Information

In order to connect PowerBI to the Azure SQL database, we need to know the URL of the database and make sure our IP has been allowed through the Azure SQL firewall.

  1. Open the Azure Portal
  2. Navigate to SQL servers
  3. Click on the SQL server created earlier; under the Security heading, click on Firewalls and virtual networks
  4. Under Client IP address, make sure your public IP is added and click Save
  5. Azure SQL - Virtual Network
  6. Now that we have verified/added our client IP, we need to get the SQL database (not server) URL
  7. Click on Overview
  8. Click on the aoeoptimization database (under Available resources, at the bottom)
  9. Click on Copy to Clipboard next to the server Name/URL
  10. Azure SQL - Database URL
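If you prefer PowerShell over the portal, the same two steps (allowing your IP through the firewall and grabbing the server URL) can be sketched like this. The resource group and server names are assumptions from this walkthrough, and the IP lookup uses the third-party ipify service, so adjust as needed.

```powershell
# Allow your current public IP through the Azure SQL firewall, then get the server URL.
# Resource group and server names are assumptions - adjust to your deployment.
$myIp = Invoke-RestMethod -Uri 'https://api.ipify.org'
New-AzSqlServerFirewallRule -ResourceGroupName 'aoegeek-rg' -ServerName 'aoegeek-sql' `
    -FirewallRuleName 'workstation' -StartIpAddress $myIp -EndIpAddress $myIp
(Get-AzSqlServer -ResourceGroupName 'aoegeek-rg' -ServerName 'aoegeek-sql').FullyQualifiedDomainName
```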
Open PowerBI Desktop File

Now that we have PowerBI Desktop installed, it's time to open: AzureOptimizationEngine.pbix. This PowerBI file is located in the Views folder of the Azure Optimization Engine repository.

  1. Open: AzureOptimizationEngine.pbix in PowerBI Desktop
  2. On the Home page ribbon, click on Transform Data
  3. Click Data source settings
  4. Click Change Source
  5. Change the default SQL server of aoedevgithub-sql.database.windows.net to your SQL database, copied earlier.
  6. Click Ok
  7. Click Ok and press Apply Changes
  8. When it prompts for credentials, click on Database
  9. Enter the SQL admin details you set as part of the Azure Optimization Engine setup
  10. Click Connect

After PowerBI updates its database and queries, your PowerBI report should now be populated with data like below.

PowerBI - Overview

PowerBI - Cost

PowerBI - High Availability

PowerBI - Security

PowerBI - Operational Excellence

Congratulations! You have now successfully stood up and configured Azure Optimization Engine!

Setup Azure Cloud Shell

· 3 min read

The Azure Cloud Shell allows connectivity to Microsoft Azure using an authenticated, browser-based shell experience that’s hosted in the cloud and accessible from virtually anywhere as long as you have access to your favourite browser!

Azure Cloud Shell is assigned per unique user account and automatically authenticated with each session.

Get a modern command-line experience from multiple access points, including the Azure portal, shell.azure.com, the Azure mobile app, the Azure docs (e.g. Azure CLI, Azure PowerShell), and the VS Code Azure Account extension.

Both Bash and PowerShell experiences are available.

Microsoft routinely maintains and updates Cloud Shell, which comes equipped with commonly used CLI tools including Linux shell interpreters, PowerShell modules, Azure tools, text editors, source control, build tools, container tools, database tools, and more. Cloud Shell also includes language support for several popular programming languages such as Node.js, .NET, and Python.

Along with native tools such as Azure PowerShell, it also contains Terraform, allowing you to implement and test Infrastructure as Code without needing to touch your local machine, and it is always up-to-date; a full list of tools can be found here.

Just some noticeable things to be aware of regarding the Azure Cloud Shell:

  • Cloud Shell runs on a temporary host provided on a per-session, per-user basis
  • Cloud Shell times out after 20 minutes without interactive activity
  • Cloud Shell requires an Azure file share to be mounted
  • Cloud Shell uses the same Azure file share for both Bash and PowerShell
  • Cloud Shell is assigned one machine per user account
  • Cloud Shell persists $HOME using a 5-GB image held in your file share
  • Permissions are set as a regular Linux user in Bash

The Azure Cloud Shell is very easy to set up and get going, but in this article, I will show you the additional configuration options you have available, such as selecting your own storage account, region and resource group to conform to any naming policies and preferences you may have.

By default, Cloud Shell creates a new Resource Group, Storage account, and Fileshare in the Southeast Asia region.

To set up Azure Cloud Shell

  1. Navigate to the Azure Portal
  2. Click on the little Terminal/PowerShell icon in the upper navigation bar
  3. Azure Portal - Cloud Shell
  4. You should get notified: "You have no storage mounted". Click on Show advanced settings
  5. Azure Portal - Cloud Shell
  6. Here you can easily create your Cloud Shell storage account with your own preferences:
  • The subscription
  • Region
  • Resource Group (new or existing)
  • Storage account (new or existing)
  • Fileshare (new or existing)
  7. Azure Portal - Cloud Shell Storage Account
  8. Click on Create storage when you are ready to start verification and deployment (verification happens after you click Create storage; don't worry, as long as you have the window open you can still make additional changes).
  9. Azure Portal - Cloud Shell

Using this method is handy when you have more than one person administering the subscription: each person can have their own file share, which can then be backed up using Azure Backup and easily removed later when no longer needed.
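Once your session is up, you can confirm which storage account and file share back it from within Cloud Shell itself; the `Get-CloudDrive` cmdlet is available in the Cloud Shell PowerShell experience.

```powershell
# Run inside Azure Cloud Shell (PowerShell) to see the storage account,
# file share, and resource group backing your $HOME/clouddrive mount.
Get-CloudDrive
```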

Native RDP Client & Azure Bastion

· 4 min read

In early February 2022, Azure Bastion Preview support for the native Windows SSH and RDP clients came out. This means we no longer have to rely on the Azure Portal and the limitations of a web browser - the support also includes file transfer through the clipboard, by copying and pasting into the RDP session!

Azure Bastion is a fully managed service that provides more secure and seamless Remote Desktop Protocol (RDP) and Secure Shell Protocol (SSH) access to virtual machines (VMs) without any exposure through public IP addresses. Provision the service directly in your local or peered virtual network to get support for all the VMs within it.

Let’s test the native RDP client through a secure connection using Azure Bastion!

Prerequisites

  • This configuration requires the Standard tier for Azure Bastion.
  • A Virtual Machine(s) to connect to
  • The latest Azure CLI
  • Reader role on the Virtual Machine
  • Reader role on the Network Interface Card of the Virtual Machine
  • Reader role on the Azure Bastion resource
  • Virtual Machine Administrator (or User) Login role, if using Microsoft Entra ID authentication

Create Azure Bastion

If you have a Virtual Machine but haven't set up Azure Bastion, run through the below to set it up:

  1. Log in to the Azure Portal
  2. Click on Create a resource
  3. Search for: Bastion
  4. Azure - Bastion
  5. Click Create
  6. This is a Networking resource, so I will place it in the same Resource Group as my Virtual Network.
  7. Type in a Name for the Bastion instance; I will call mine: Bastion
  8. Select the Region that matches the Virtual Network region
  9. Select the Standard Tier
  10. Select the Virtual Network
  11. It now warns you about creating an AzureBastionSubnet with a prefix of at least /27, so we need to create one; click on Manage Subnet Configuration.
  12. Click + Subnet
  13. For the Name, type in: AzureBastionSubnet
  14. For the Subnet address range: 10.0.1.0/27. If you get an error that the address overlaps with another subnet, it may be because the address space is only a /24; click Cancel, click on Address Space in the Virtual Network, and change the /24 to /16 to increase the address range.
  15. Click Save to create the subnet
  16. Azure - Bastion
  17. Up the top, click Create a Bastion to go back to the Bastion setup; your subnet should be selected automatically.
  18. You do need a Public IP for Bastion, so confirm the name is appropriate, then click Next: Tags.
  19. Azure Bastion
  20. Add in appropriate tags, then click Next: Advanced
  21. Check the box next to Native client support (Preview)
  22. Azure Bastion
  23. Click Next: Review + Create
  24. Click on Create to create your Bastion instance!

Note: Bastion may take 10-20 minutes to provision.
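If you would rather script the whole thing, the portal steps above can be sketched with the Azure CLI. The resource, VNet, and public IP names below are assumptions, so adjust them to your environment.

```powershell
# Sketch of creating Bastion from the CLI - resource names are assumptions.
# (The az CLI may prompt to install its 'bastion' extension on first use.)
az network public-ip create --resource-group 'network-rg' --name 'Bastion-pip' --sku Standard
az network bastion create --resource-group 'network-rg' --name 'Bastion' `
    --public-ip-address 'Bastion-pip' --vnet-name 'network-vnet' --sku Standard
```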

Check Bastion SKU

If you already have an Azure Bastion instance, let's check the SKU and, if needed, change it to Standard. Just a note:

Downgrading from a Standard SKU to a Basic SKU is not supported. To downgrade, you must delete and recreate Azure Bastion.

  1. Log in to the Azure Portal
  2. Navigate to your Bastion resource
  3. Click on: Configuration
  4. Change Tier to Standard
  5. Check: Native client support (Preview)
  6. Click Apply
  7. Azure Bastion

Connect to VM using Native RDP Support

  1. Open command prompt or Terminal

  2. Type: az login

  3. Login to your Azure subscription

  4. We need the resource ID of the VM we want to connect to; type in: az vm show --resource-group 'appserver-rg' --name 'APP-P01' --show-details

  5. Change the resource group and VM name above to match your VM

  6. Copy the id of the Virtual Machine you want to connect to

  7. Because I am running the Azure CLI from a PowerShell terminal, I am going to use the following variables:

    # Variables for the Bastion connection - adjust to your environment
    $BastionName = 'Bastion'
    $BastionRG = 'network-rg'
    $VMResourceID = '/subscriptions/000000-0000-0000-0000000/resourceGroups/appserver-rg/providers/Microsoft.Compute/virtualMachines/APP-P01'
    az network bastion rdp --name $BastionName --resource-group $BastionRG --target-resource-id $VMResourceID
  8. Run the command; your Remote Desktop window should open, and the tunnel is established. If you close the Azure CLI window, your RDP session will be dropped!

  9. Azure Bastion

As you can most likely tell, there are none of the options, such as drive passthrough, that you would usually find when connecting over Remote Desktop, but copying files/text through the clipboard does work!
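If you need more than RDP - say, an arbitrary TCP port to the VM - the same Bastion instance can also open a plain tunnel. A sketch, reusing the variables from the previous step (the port numbers here are just examples):

```powershell
# Open a raw tunnel through Bastion: local port 50022 forwards to port 3389 on the VM.
az network bastion tunnel --name $BastionName --resource-group $BastionRG `
    --target-resource-id $VMResourceID --resource-port 3389 --port 50022
```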

AzureFeeds

· 2 min read

Over the past few months, I have been busy working on a new project...

A news aggregator for Azure news and updates. I tested the desire for this using AzureFeeds on LinkedIn as a platform (which will continue)...

I didn't want my LinkedIn connections to be spammed by Azure updates every day, when some connections had connected with me for other, non-Azure-related reasons, so I created a separate account that interested people could subscribe to: AzureFeeds.

I was lucky enough to acquire both https://azureupdates.com and https://azurefeeds.com/ and wanted to do something a bit more substantial than just forwarding to the relevant Microsoft pages.

Inspired by PlanetPowerShell, I wanted an Azure specific service, so I created the AzureFeeds website.

It originally started with official Microsoft feeds only; with the massive amount of ongoing updates and changes, I didn't want to make the feeds too busy. However, with feedback from the community through a Twitter poll, I added an author section and community-based content to supplement the official updates, mimicking PlanetPowerShell's community-driven approach a lot more.

You can subscribe to a single feed by clicking 'Download Feed' in your favourite RSS reader (even Outlook) and get official and community updates delivered straight to you. This lets you easily keep up with Azure updates and Azure community-driven content, or you can just browse the webpage and filter by Author or Category for the content you are after.

Check out https://azurefeeds.com/

Azure Feeds