
Deallocate 'Stopped' Virtual Machines using Azure Automation

· 13 min read

Virtual Machines in Microsoft Azure have different power states, and the state a Virtual Machine is in determines whether you get billed for its Compute (storage and network adapters are billed regardless of state).

| Power state | Description | Billing |
| --- | --- | --- |
| Starting | Virtual Machine is powering up. | Billed |
| Running | Virtual Machine is fully up. This is the standard working state. | Billed |
| Stopping | This is a transitional state between running and stopped. | Billed |
| Stopped | The Virtual Machine is allocated on a host but not running. Also called PoweredOff state or Stopped (Allocated). This can be the result of invoking the PowerOff API operation or invoking shutdown from within the guest OS. The Stopped state may also be observed briefly during VM creation or while starting a VM from the Deallocated state. | Billed |
| Deallocating | This is the transitional state between running and deallocated. | Not billed |
| Deallocated | The Virtual Machine has released the lease on the underlying hardware and is completely powered off. This state is also referred to as Stopped (Deallocated). | Not billed |
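You can check which power state a Virtual Machine is currently in with the Az PowerShell module; a quick sketch (the resource group and VM names are placeholders):

```powershell
# The -Status switch returns the instance view; the PowerState entry in
# Statuses holds values such as PowerState/running or PowerState/deallocated.
$vm = Get-AzVM -ResourceGroupName 'vm-dev-rg' -Name 'VM-D01' -Status
$vm.Statuses | Where-Object { $_.Code -like 'PowerState/*' } | Select-Object Code, DisplayStatus
```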

If a Virtual Machine is not being used, turning it off from the Microsoft Azure Portal (or programmatically via PowerShell/Azure CLI) is recommended to ensure that the Virtual Machine is deallocated and its affinity on the host has been released.

Microsoft Azure - Virtual Machine Power States

However, you need to know this distinction exists; those new to Microsoft Azure, or users who don't have Virtual Machine Administrator rights to deallocate a Virtual Machine, may simply shut down the operating system, leaving the Virtual Machine in a 'Stopped' state, still tied to an underlying Azure host and incurring cost.

Our solution can help; by triggering an Alert when a Virtual Machine becomes unavailable due to a user-initiated shutdown, we can then start an Azure Automation runbook to deallocate the Virtual Machine.

Overview

Today, we are going to set up an Azure Automation runbook, triggered by a Resource Health alert that will go through the following steps:

  1. A user shuts down the Virtual Machine from within the operating system
  2. The Virtual Machine enters an unavailable state
  3. A Resource Health alert is triggered when the Virtual Machine becomes unavailable (after being available) due to a user-initiated event
  4. The alert triggers a webhook to an Azure Automation runbook
  5. Using permissions assigned through its system-assigned managed identity, the Azure Automation account connects to Microsoft Azure and checks the VM state; if the Virtual Machine is still 'Stopped', the runbook deallocates it
  6. Finally, the runbook resolves the triggered alert

To do this, we need a few resources.

  • Azure Automation Account
  • Az.AlertsManagement module in the Azure Automation account
  • Az.Accounts module (updated in the Azure Automation account)
  • Azure Automation runbook (I will supply this below)
  • Resource Health Alert
  • Webhook (to trigger to the runbook and pass the JSON from the alert)

And, of course, you will need 'Contributor' rights to the Microsoft Azure subscription to provision the resources and alerts and to set up the system-assigned managed identity.

We will set this up from scratch using the Azure Portal and an already-created PowerShell Azure Automation runbook.

Deploy Deallocate Solution

Setup Azure Automation Account

Create Azure Automation Account

First, we need an Azure Automation resource.

  1. Log into the Microsoft Azure Portal.
  2. Click + Create a resource.
  3. Type in automation
  4. Select Create under Automation, and select Automation.
  5. Create Azure Automation Account
  6. Select your subscription
  7. Select your Resource Group or Create one if you don't already have one (I recommend placing your automation resources in an Azure Management or Automation resource group, this will also contain your Runbooks)
  8. Select your region
  9. Create Azure Automation Account
  10. Select Next
  11. Make sure: System assigned is selected for Managed identities (this will be required for giving your automation account permissions to deallocate your Virtual Machine, but it can be enabled later if you already have an Azure Automation account).
  12. Click Next
  13. Leave Network connectivity as default (Public access)
  14. Click Next
  15. Enter in appropriate tags
  16. Create Azure Automation Account
  17. Click Review + Create
  18. After validation has passed, select Create
Configure System Identity

Now that we have our Azure Automation account, it's time to set up the system-assigned managed identity and grant it the following roles:

  • Virtual Machine Contributor (to deallocate the Virtual Machine)
  • Monitoring Contributor (to close the Azure Alert)

You can set up a custom role to be least privileged and use that instead. But in this article, we will stick to the built-in roles.

  1. Log into the Microsoft Azure Portal.
  2. Navigate to your Azure Automation account
  3. Click on: Identity
  4. Make sure that the System assigned toggle is: On and click Azure role assignments.
  5. Azure Automation Account managed identity
  6. Click + Add role assignments
  7. Select the Subscription (make sure this subscription matches the same subscription your Virtual Machines are in)
  8. Select Role: Virtual Machine Contributor
  9. Click Save
  10. Now we repeat the same process for Monitoring Contributor
  11. Click + Add role assignments
  12. Select the Subscription (make sure this subscription matches the same subscription your Virtual Machines are in)
  13. Select Role: Monitoring Contributor
  14. Click Save
  15. Click Refresh (it may take a few seconds to update the Portal, so if it is blank - give it 10 seconds and try again).
  16. You have now set up the System Managed identity and granted it the roles necessary to execute the automation.
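If you prefer scripting to clicking, the same role assignments can be granted with PowerShell; a sketch, assuming hypothetical resource names and a placeholder subscription ID:

```powershell
# Grant the Automation account's system-assigned identity the two built-in roles
$automationAccount = Get-AzAutomationAccount -ResourceGroupName 'automation-rg' -Name 'aa-deallocate'
$principalId = $automationAccount.Identity.PrincipalId
$scope = '/subscriptions/00000000-0000-0000-0000-000000000000'
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName 'Virtual Machine Contributor' -Scope $scope
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName 'Monitoring Contributor' -Scope $scope
```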
Import Modules

The runbook will use a few Azure PowerShell modules; by default, Azure Automation includes the base Azure PowerShell modules, but we will need to add Az.AlertsManagement and update Az.Accounts, which is a prerequisite for Az.AlertsManagement.

  1. Log into the Microsoft Azure Portal.
  2. Navigate to your Azure Automation account
  3. Click on Modules
  4. Click on + Add a module
  5. Click on Browse from Gallery
  6. Click: Click here to browse from the gallery
  7. Type in: Az.Accounts
  8. Press Enter
  9. Click on Az.Accounts
  10. Click Select
  11. Import Az.Accounts module
  12. Make sure that the Runtime version is: 5.1
  13. Click Import
  14. Now that Az.Accounts has been updated, it's time to import Az.AlertsManagement!
  15. Click on Modules
  16. Click on + Add a module
  17. Click on Browse from Gallery
  18. Click: Click here to browse from the gallery
  19. Type in: Az.AlertsManagement (note it's Alerts, plural)
  20. Click Az.AlertsManagement
  21. Az.AlertsManagement module
  22. Click Select
  23. Make sure that the Runtime version is: 5.1
  24. Click Import (if you get an error, make sure that Az.Accounts has been updated, through the Gallery import as above)
  25. Now you have successfully added the dependent modules!
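The module imports can also be done with the Az.Automation cmdlets rather than the gallery UI; a sketch with placeholder account names (Az.Accounts must finish importing before the dependent module is added):

```powershell
$params = @{ ResourceGroupName = 'automation-rg'; AutomationAccountName = 'aa-deallocate' }
# Update Az.Accounts first - Az.AlertsManagement depends on it
New-AzAutomationModule @params -Name 'Az.Accounts' -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/Az.Accounts'
# Once Az.Accounts shows as Available in the account, import Az.AlertsManagement
New-AzAutomationModule @params -Name 'Az.AlertsManagement' -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/Az.AlertsManagement'
```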
Import Runbook

Now that the modules have been imported into your Azure Automation account, it is time to import the Azure Automation runbook.

  1. Log into the Microsoft Azure Portal.
  2. Navigate to your Azure Automation account
  3. Click on Runbooks
  4. Click + Create a runbook
  5. Specify a name (i.e. Deallocate-AzureVirtualMachine)
  6. Select Runbook type of: PowerShell
  7. Select Runtime version of: 5.1
  8. Type in a Description that explains the runbook (this isn't mandatory but, like Tags, is recommended; it's an opportunity to indicate to others what it is for and who set it up)
  9. Create Azure Runbook
  10. Click Create
  11. Now you will be greeted with a blank edit pane; paste in the Runbook from below:
Deallocate-AzureVirtualMachine.ps1
#requires -Version 3.0 -Modules Az.Accounts, Az.AlertsManagement
<#
    .SYNOPSIS
    PowerShell Azure Automation runbook for deallocating Virtual Machines that have been shut down from within the guest operating system (Stopped, but not Deallocated).
    .AUTHOR
    Luke Murray (https://github.com/lukemurraynz/)
#>

[OutputType('PSAzureOperationResponse')]
param (
    [Parameter(Mandatory = $true, HelpMessage = 'Data from the WebHook/Azure Alert')][Object]$WebhookData
)

Import-Module Az.AlertsManagement
$ErrorActionPreference = 'Stop'

# Get the request body from the WebhookData
$WebhookData = $WebhookData.RequestBody
Write-Output -InputObject $WebhookData
$Schema = $WebhookData | ConvertFrom-Json

# Extract the 'essentials' section of the common alert schema
$Essentials = [object] ($Schema.data).essentials
Write-Output -InputObject $Essentials

# Get the first target only (this script doesn't handle multiple targets) and export variables for the resource
$alertIdArray = (($Essentials.alertId)).Split('/')
$alertTargetIdArray = (($Essentials.alertTargetIds)[0]).Split('/')
$alertid = ($alertIdArray)[6]
$SubId = ($alertTargetIdArray)[2]
$ResourceGroupName = ($alertTargetIdArray)[4]
$ResourceType = ($alertTargetIdArray)[6] + '/' + ($alertTargetIdArray)[7]
$ResourceName = ($alertTargetIdArray)[-1]
$status = $Essentials.monitorCondition
Write-Output -InputObject $alertTargetIdArray
Write-Output -InputObject "status: $status" -Verbose

# Deallocate the VM if the alert has fired
if (($status -eq 'Activated') -or ($status -eq 'Fired')) {
    Write-Output -InputObject "resourceType: $ResourceType" -Verbose
    Write-Output -InputObject "resourceName: $ResourceName" -Verbose
    Write-Output -InputObject "resourceGroupName: $ResourceGroupName" -Verbose
    Write-Output -InputObject "subscriptionId: $SubId" -Verbose

    # Determine the code path depending on the resourceType
    if ($ResourceType -eq 'Microsoft.Compute/virtualMachines') {
        # This is a Resource Manager VM
        Write-Output -InputObject 'This is a Resource Manager VM.' -Verbose

        # Ensure you do not inherit an AzContext in your runbook
        Disable-AzContextAutosave -Scope Process

        # Connect to Azure with the system-assigned managed identity
        $AzureContext = (Connect-AzAccount -Identity).context

        # Set and store the context
        $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
        Write-Output -InputObject $AzureContext

        # Check the Azure VM status
        $VMStatus = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ResourceName -Status
        Write-Output -InputObject $VMStatus

        if ($VMStatus.Statuses[1].Code -eq 'PowerState/stopped') {
            Write-Output -InputObject "Stopping the VM, it was shut down without being deallocated - $ResourceName - in resource group - $ResourceGroupName" -Verbose
            Stop-AzVM -Name $ResourceName -ResourceGroupName $ResourceGroupName -DefaultProfile $AzureContext -Force -Verbose

            # Check the VM status after deallocation
            $VMStatus = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ResourceName -Status -Verbose
            Write-Output -InputObject $VMStatus

            if ($VMStatus.Statuses[1].Code -eq 'PowerState/deallocated') {
                # Close the alert
                Write-Output -InputObject $VMStatus.Statuses[1].Code
                Write-Output -InputObject $alertid
                Get-AzAlert -AlertId $alertid -Verbose -DefaultProfile $AzureContext
                Get-AzAlert -AlertId $alertid -Verbose -DefaultProfile $AzureContext | Update-AzAlertState -State 'Closed' -Verbose -DefaultProfile $AzureContext
            }
        }
        elseif ($VMStatus.Statuses[1].Code -eq 'PowerState/deallocated') {
            Write-Output -InputObject 'Already deallocated' -Verbose
        }
        elseif ($VMStatus.Statuses[1].Code -eq 'PowerState/running') {
            Write-Output -InputObject 'VM running. No further actions' -Verbose
        }
    }
}
else {
    # The alert status was not 'Activated' or 'Fired', so no action was taken
    Write-Output -InputObject ('No action taken. Alert status: ' + $status) -Verbose
}
  1. Click Save
  2. Azure Automation runbook
  3. Click Publish (so the runbook is actually in production and can be used)
  4. You can select View or Edit at any stage, but you have now imported the Azure Automation runbook!
Setup Webhook

Now that the Azure runbook has been imported, we need to set up a Webhook for the Alert to trigger and start the runbook.

  1. Log into the Microsoft Azure Portal.
  2. Navigate to your Azure Automation account
  3. Click on Runbooks
  4. Click on the runbook you just imported (i.e. Deallocate-AzureVirtualMachine)
  5. Click on Add webhook
  6. Click Create a new webhook
  7. Enter a name for the webhook
  8. Make sure it is Enabled
  9. You can edit the expiry date to match your security requirements; make sure you record the expiry date, as it will need to be renewed before it expires.
  10. Copy the URL and paste it somewhere safe (you won't see this again! and you need it for the next steps)
  11. Create Azure webhook
  12. Click Ok
  13. Click on Configure parameters and run settings.
  14. Because we will be taking in dynamic data from an Azure Alert, enter in: [EmptyString]
  15. Click Ok
  16. Click Create
  17. You have now set up the webhook (make sure you have saved the URL from the earlier step as you will need it in the following steps)!
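For reference, the webhook can also be created with PowerShell, which makes it easier to capture the URI; a sketch with placeholder names (the URI is only returned at creation time):

```powershell
$webhook = New-AzAutomationWebhook -ResourceGroupName 'automation-rg' `
    -AutomationAccountName 'aa-deallocate' -RunbookName 'Deallocate-AzureVirtualMachine' `
    -Name 'DeallocateVM-Webhook' -IsEnabled $true -ExpiryTime (Get-Date).AddYears(1) -Force
# Store this somewhere safe - it cannot be retrieved again
$webhook.WebhookURI
```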

Setup Alert & Action Group

Now that the Automation framework has been created with the Azure Automation account, runbook and webhook, we now need a way to detect if a Virtual Machine has been Stopped; this is where a Resource Health alert will come in.

  1. Log into the Microsoft Azure Portal.
  2. Navigate to: Monitor
  3. Click on Service Health
  4. Select Resource Health
  5. Select + Add resource health alert
  6. Select your subscription
  7. Select Virtual machine for Resource Type
  8. You can target specific Resource Groups for your alert (and, as such, your automation) or select all.
  9. Check Include all future resource groups
  10. Check include all future resources
  11. Under the Alert conditions, make sure Event Status is: All selected
  12. Set Current resource status to Unavailable
  13. Set Previous resource status to All selected
  14. For reason type, select: User initiated and unknown
  15. Create Azure Resource Health Alert
  16. Now that we have the Alert rule configured, we need to set up an Action group. That will get triggered when the alert gets fired.
  17. Click Select Action groups.
  18. Click + Create action group
  19. Select your subscription and resource group (this is where the Action alert will go, I recommend your Azure Management/Monitoring resource group that may have a Log Analytics workspace as an example).
  20. Give your Action Group a name, i.e. AzureAutomateActionGroup
  21. The display name will be automatically generated, but feel free to adjust it to suit your naming convention
  22. Click Next: Notifications
  23. Under Notifications, you can trigger an email alert, which can be handy in determining how often the runbook runs. This can be modified and removed if it is running, especially during testing.
  24. Click Next: Actions
  25. Under Action Type, select Webhook
  26. Paste in the URI created earlier when setting up the Webhook
  27. Select Yes to enable the common alert schema (this is required, as the runbook expects the JSON it parses to be in this schema; if it isn't, the runbook will fail)
  28. Create Azure Action Group
  29. Click Ok
  30. Give the webhook a name.
  31. Click Review + create
  32. Click Create
  33. Finally, enter in an Alert name and description, specify the resource group for the Alert to go into and click Save.

Test Deallocate Solution

So now we have stood up our:

  • Azure automation account
  • Alert
  • Action Group
  • Azure automation runbook
  • Webhook

It is time to test! I have a VM called VM-D01, running Windows, in the same subscription that the alert has been deployed against (theoretically, this runbook should also run against Linux workloads, as it relies on the Azure agent sending the correct status to the Azure logs, but my testing was against Windows workloads).

As you can see below, I shut down the Virtual Machine. After a few minutes (be patient; Azure needs to wait for the status of the VM to be updated), an Azure Alert was fired into Azure Monitor, which triggered the webhook and runbook; the Virtual Machine was deallocated, and the Azure Alert was closed.

Azure deallocate testing
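If you want to exercise the runbook without waiting for a real alert, you can post a minimal common-alert-schema payload at the webhook yourself. This sketch only includes the fields the runbook actually reads, and every ID and name is a placeholder:

```powershell
$body = @{
    data = @{
        essentials = @{
            alertId          = '/subscriptions/xxx/providers/Microsoft.AlertsManagement/alerts/00000000-0000-0000-0000-000000000000'
            alertTargetIds   = @('/subscriptions/xxx/resourceGroups/vm-dev-rg/providers/Microsoft.Compute/virtualMachines/VM-D01')
            monitorCondition = 'Fired'
        }
    }
} | ConvertTo-Json -Depth 5
# Paste the webhook URL you saved earlier
Invoke-RestMethod -Method Post -Uri '<your webhook URL>' -Body $body
```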

Hidden Tags in Azure

· 4 min read

Tags in Microsoft Azure are pivotal to resource management, whether it's used for reporting or automation.

But sometimes you need that extra bit of information to help discover what resources are for, or you may want to add information to a resource that isn't directly displayed in the portal, especially when complex tags are in use that might drive automation initiatives.

This is where 'hidden' Azure Tags come in handy.

Tags starting with the prefix 'hidden-' will not be displayed under Tags in the Azure Portal; however, they will be displayed in the resource metadata and can be utilised by PowerShell and Azure CLI for automation initiatives.

Examples are:

| Tags | Value |
| --- | --- |
| hidden-title | Web Server |
| hidden-ShutdownAutomation | Yes |

hidden-title

As I mentioned above, every tag with 'hidden-' in front of it will be hidden in the Azure Portal. However, 'hidden-title' behaves differently.

You may have noticed that some resources in Azure, especially those created by ARM (Azure Resource Manager) with GUID-based names, display a '(name)' after the resource name; this is because of the hidden-title tag.

The hidden-title tag is especially useful for being able to pick resources that belong to a specific service or application.

An example is below:

Azure Portal - Hidden Title Tag

In this case, I have used the hidden-title of 'Web Server', allowing me to quickly view what resources may be mapped to my Web Server.

You may notice that the Test-Virtual Machines title is displayed in the Resource Groups search blade but not in the actual Resource Group; there are some areas of the Portal that will not currently display the hidden-title tag.

If I navigate to my Virtual Machine and click on the Tags blade, all I see is my CreatedBy tag.

Azure Portal - Tags

However, if I navigate to the Overview page and click on JSON View, I can see the hidden tags in the resource metadata.

Azure Portal - Resource Tags

hidden tags

Azure Portal

You can apply hidden tags directly through the Azure Portal by adding them under Tags.

Azure Portal - Add Tags

You can remove the Tag by adding the hidden- tag again and keeping the value empty (i.e. blanking out the hidden-title will remove the title), but it will still exist in the metadata (Resource Graph) as a Tag (as seen in the screenshot below); it is much cleaner to use PowerShell.

Azure - Resource Tags

PowerShell

Get-AzTag and Remove-AzTag do not display the hidden tags. To add and remove them, you need to use 'Update-AzTag' with the 'Replace' or 'Merge' operation to overwrite the tags, which requires targeting the resource by its Resource ID.

A handy snippet to use to add/remove the Tags on individual or multiple resources is:

$replacedTags = @{"hidden-title" = "Web Server"; "hidden-ShutdownAutomation" = "Yes"}
$resourceGroup = 'vm-dev-rg'
Get-AzResource -ResourceGroupName $resourceGroup | Select-Object ResourceId | Out-GridView -PassThru | Update-AzTag -Tag $replacedTags -Operation Merge

This snippet gathers all the resources in your Resource Group and selects their Resource IDs; the script then prompts with a GUI allowing you to select which resource or resources you want to update your tags on, and once you click Ok, it updates the Tags on the resources you selected.
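To remove a hidden tag entirely (rather than blanking its value in the Portal), one approach is to rebuild the tag set without it and apply a Replace; a sketch, assuming the same resource group and a hypothetical VM name:

```powershell
$resource = Get-AzResource -ResourceGroupName 'vm-dev-rg' -Name 'VM-D01'
$tags = $resource.Tags
$tags.Remove('hidden-title')
# Replace overwrites the entire tag set, so the removed tag disappears from the metadata too
Update-AzTag -ResourceId $resource.ResourceId -Tag $tags -Operation Replace
```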

PowerShell - Add Azure Tags

You may be wondering: if hidden tags are useful for automation but the 'Get-AzTag' cmdlet doesn't surface them, how can I retrieve the resources? It's a good question, and that is where 'Get-AzResource' comes to the rescue.

Examples are:

Get-AzResource -TagName hidden-ShutdownAutomation

Get-AzResource -TagValue Yes

$TagName = 'hidden-title'
$TagValue = 'Web Server'
Get-AzResource -TagName $TagName -TagValue $TagValue | Where-Object -FilterScript {
$_.ResourceType -like 'Microsoft.Compute/virtualMachines'
}

Azure Bicep

You can also add the Tags, with Azure Bicep.

Example is:

param resourceTags object = {
  'hidden-title': 'Web Server'
  'hidden-ShutdownAutomation': 'Yes'
}

tags: resourceTags
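For context, here is how the tags parameter might be applied to an actual resource in a Bicep file; a minimal sketch with placeholder names. Note that property names containing a hyphen must be quoted in Bicep:

```bicep
param resourceTags object = {
  'hidden-title': 'Web Server'
  'hidden-ShutdownAutomation': 'Yes'
}

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stwebserverdemo001'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  tags: resourceTags
}
```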

Azure Arc Bridge - Implementation and Testing

· 8 min read

Azure Arc Bridge (currently in preview) is part of the core Azure Arc hybrid cloud platform.

Overview

The Azure Arc resource bridge allows for VM (Virtual Machine) self-service and management of on-premises Azure Stack HCI and VMware virtualised workloads, supporting both Linux and Windows.

Along with the standard integration of Azure Arc workloads (such as support for Azure Policy, Azure extensions, Azure Update Management and Defender for Cloud), the Azure Arc resource bridge offers the following self-service functionality directly from the Microsoft Azure portal, providing a single pane of glass for your workloads, whether they exist on-premises or in Azure:

  • Start, stop and restart a virtual machine
  • Control access and add Azure tags
  • Add, remove, and update network interfaces
  • Add, remove, and update disks and update VM size (CPU cores and memory)
  • Enable guest management
  • Install extensions
  • Azure Stack HCI - You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.

The resource bridge is a packaged virtual machine, which hosts a management Kubernetes cluster that requires no user management. This virtual appliance delivers the following benefits:

  • Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster
  • It is fully supported by Microsoft, including update of core components.
  • Designed to recover from software failures.
  • Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command-Line Interface (CLI).

All management operations are performed from Azure; no local configuration is required on the appliance.

Azure Arc - Overview

Azure Arc resource bridge currently supports the following Azure regions:

  • East US
  • West Europe

These regions hold the Resource Bridge metadata for the resources.

Today, we will stand up an Azure Arc Bridge that supports VMWare vSphere.

I will be running vSphere 6.7 on a single host in my home lab, connected to my Visual Studio subscription.

Prerequisites

Private cloud environments

The following private cloud environments and their versions are officially supported for the Azure Arc resource bridge:

  • VMware vSphere version 6.7
  • Azure Stack HCI

Note: You are unable to set this up on vSphere 7.0.3, as it is not currently supported - I tried!

Permissions

  • Contributor rights to the Resource Group in which the Azure Arc bridge resource will be created.
  • vSphere account (with at least Read and modify VM rights)

Required Azure resources

  • Resource Group for your Azure Arc Resource Bridge

Required On-premises resources

  • Resource pool with a reservation of at least 16 GB of RAM and four vCPUs. It should also have access to a datastore with at least 100 GB of free disk space.
  • A workstation with rights to run PowerShell and install Python and the Azure CLI, with a line of sight to vCenter.

Networking

  • The Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443
  • At least one free IP (Internet Protocol) address on the on-premises network (or three if there isn't a DHCP server). Make sure this isn't a used IP; you will need to enter this during the bridge provisioning script.
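Before running the onboarding script, you can sanity-check the outbound HTTPS requirement from the deployment workstation:

```powershell
# Confirm the workstation can reach Azure over TCP 443
Test-NetConnection -ComputerName 'management.azure.com' -Port 443
```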

Create Azure Arc Resource Bridge

Create Resource Bridge

  1. Log in to the Azure Portal
  2. In the search box up the top, type in: Azure Arc
  3. Click Azure Arc
  4. Click on: VMware vCenters (preview)
  5. Click Add
  6. Azure Arc - Resource Bridge
  7. Click: Create a new resource bridge
  8. Azure Arc - Resource Bridge
  9. Click Next: Basics
  10. Enter the following information to suit your environment:
  • Name (of the Resource Bridge resource)
  • Select the region for your Metadata
  • Create a Custom Location(that matches your on-premises location, where your resources are stored, i.e. could be a data centre prefix that matches your naming convention)
  • Enter in the name of your vCenter resource (this will represent your vCenter in Azure, so make sure it is easily identifiable)
  1. Azure Arc - vCenter

  2. Click Next: Tags

  3. A list of default tags has been supplied; feel free to enter or change these to suit your environment.

  4. Azure Arc - vCenter

  5. Click Next: Download and run the script.

  6. Click on Register to register the Azure Arc Provider to your subscription. Please wait for this process to complete (it may take a minute or two, you will see: Successfully register your subscription(s) when completed).

  7. Once completed, download the onboarding PowerShell script

  8. Run the PowerShell script from a computer that has access to Azure and vCenter. This script will download the necessary dependencies (Azure CLI, Python) and, if necessary, authenticate to Azure.

    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
    ./resource-bridge-onboarding-script.ps1
  9. When the script runs, you will be prompted for the following information.

    • Proxy information (if the Workstation is behind a proxy)
    • UAC (User Access Control) approval for the script to install Azure CLI/Python on the workstation
    • Azure authentication
    • vCenter FQDN/Address
    • vCenter Username & Password
    • vCenter datastore
    • vCenter folder (to place the template in)
    • IP address
  10. Azure Arc - vCenter Onboarding

You may not need to do the steps below, but my Bridge was in a 'Running' state without having added the connection to vCenter.

  1. Log in to the Azure Portal
  2. In the search box up the top, type in: Azure Arc
  3. Click Azure Arc
  4. Click on: Resource bridges (preview)
  5. Click on your Azure Arc Bridge and verify the status is 'Running' (if it is not, make sure it has been started on-premises)
  6. In the Azure Portal, click on VMWare vCenters (preview)
  7. Click Add
  8. Click Use an existing resource bridge
  9. Click Next: Basics
  10. Create your Custom Location, then enter in the on-premises vCenter details
  11. Azure Arc - vCenter Onboarding
  12. On the next blade, enter in your appropriate Tags, then click Create
  13. Wait for the deployment to complete; this could take 2-5 minutes.
  14. In the search box up the top, type in: Azure Arc
  15. Click Azure Arc
  16. Click on: VMware vCenters (preview)
  17. You should now see your vSphere instance in a Connected state.
  18. Azure Arc - vCenter

Enable vCenter resources to be managed in Microsoft Azure

Now that the Bridge has been created, we need to enable resources (such as Virtual Machines, Datastores and Networks) to be managed in Microsoft Azure.

  1. Log in to the Azure Portal
  2. In the search box up the top, type in: Azure Arc
  3. Click Azure Arc
  4. Click on: VMware vCenters (preview)
  5. Click on your vCenter instance
  6. Under vCenter Inventory, select Virtual Machines
  7. Azure Arc - vCenter
  8. Select the Virtual Machines you want to enable for management in Azure and click 'Enable in Azure'
  9. Select your applicable Subscription and Resource Group (this is where the Azure Arc VM resources will be placed)
  10. Make sure 'Enable Guest management' is selected.
  11. Enter your Administrator credentials (the admin username and password of the workloads you want to install Azure guest management to)
  12. Azure Arc - On-premises VM
  13. Click Enable
  14. It can take a few minutes to onboard these clients. If it fails, pick a single Virtual Machine and attempt to onboard that.
  15. You can now repeat the process to onboard Networks, Resource Pools etc.

Manage Virtual Machines in Microsoft Azure

Now that you have set up an Azure Arc Bridge and onboarded vCenter resources, you can see and manage your vCenter Virtual Machines in Azure; examples below.

  • Ensure that you have VMware Tools installed and up to date to enable full functionality, such as Restart; otherwise there may be issues managing these machines.

Start/Stop Virtual Machines

Azure Arc - Start/Stop VM

Resize Virtual Machines - CPU/Memory

Azure Arc - Resize VM

Resize Virtual Machines - Disk

Azure Arc - Resize Disk

Troubleshooting

  • The 'resource-bridge-onboarding-script.ps1' script writes an output file named arcvmware-output.log. This log file is created in the same directory as the script and is useful for investigating any errors.
  • If no folders are listed when the script prompts you to select a folder (i.e. Please select folder):
  1. Right-click the Datacenter in vSphere
  2. Select New Folder
  3. Select New VM and Templates folder
  4. Create a folder
  • If your vCenter becomes unavailable, it is most likely because you specified an IP address already in use for the Azure Arc appliance; if this is the case, log in to the host containing your Azure Arc Bridge, stop/delete the appliance resources from disk and remove them from inventory. Then rerun the deployment, this time selecting an appropriate IP.

Create a Site to Site VPN to Azure with a Ubiquiti Dream Machine Pro

· 7 min read

The Ubiquiti Dream Machine Pro has a lot of functionality built in, including IPsec site-to-site VPN (Virtual Private Network) support.

I recently installed and configured a UDM-PRO at home, so now it's time to set up a site-to-site VPN to my Microsoft Azure network.

I will create the Virtual Network and Gateway resources using Azure Bicep, but feel free to skip ahead if you already have them.

My address ranges are as follows (make sure you adjust these to match your setup and IP ranges):

| On-premises | Azure |
| --- | --- |
| 192.168.1.0/24 | 10.0.0.0/16 |

Prerequisites

  • The latest Azure PowerShell modules and Azure Bicep/Azure CLI for local editing
  • An Azure subscription that you have at least contributor rights to
  • Permissions to the UDM Pro to set up a new network connection

I will be using PowerShell splatting as it's easier to edit and display. You can easily take the scripts here to make them your own.

Deploy - Azure Network and Virtual Network Gateway

I will assume that you have both Azure Bicep and the PowerShell Azure modules installed and the know-how to connect to Microsoft Azure.

Azure Bicep deployments (like ARM) support the 'TemplateParameterObject' parameter, which allows Azure Bicep to accept parameters from PowerShell directly; this can be pretty powerful when used with a self-service portal or pipeline.

I will first make an Azure Resource Group using PowerShell for my Azure Virtual Network, then use the New-AzResourceGroupDeployment cmdlet to deploy my Virtual Network and subnets from my bicep file.

Along with the Virtual Network, we will also create two other Azure resources needed for a site-to-site VPN: a Local Network Gateway (this will represent your on-premises subnet and external IP to assist with routing) and a Virtual Network Gateway (which is used to send encrypted traffic over the internet between your on-premises site(s) and Azure).

Update the parameters of the PowerShell script below to match your own needs; you may also need to edit the Bicep file itself to add/remove subnets and change the IP address space to match your standards.

The shared key will be used between the UDM Pro and your Azure network; make sure this is unique.

#Connects to Azure
Connect-AzAccount
#Resource Group Name
$resourcegrpname = 'network_rg'
#Creates a resource group for the storage account
New-AzResourceGroup -Name $resourcegrpname -Location 'AustraliaEast'
# Parameters splat, for Azure Bicep
# Parameter options for the Azure Bicep Template, this is where your Azure Bicep parameters go
$paramObject = @{
'sitecode' = 'luke'
'environment' = 'prod'
'contactEmail' = '[email protected]'
'sharedkey' = '18d5b51a17c68a42d493651bed88b73234bbaad0'
'onpremisesgwip' = '123.456.789.101'
'onpremisesaddress' = '192.168.1.0/24'
}
# Parameters splat for the New-AzResourceGroupDeployment cmdlet
$parameters = @{
'Name' = 'AzureNetwork-S2S'
'ResourceGroupName' = $resourcegrpname
'TemplateFile' = 'c:\temp\Deploy-AzVNETS2S.bicep'
'TemplateParameterObject' = $paramObject
'Verbose' = $true
}
#Deploys the Azure Bicep template
New-AzResourceGroupDeployment @parameters -WhatIf

Note: The '-WhatIf' parameter has been added as a safeguard; once you have confirmed the changes are suitable, remove it and rerun the deployment.

The Virtual Network Gateway can take 20+ minutes to deploy, so leave the Terminal/PowerShell window open. You can also check the deployment in the Azure Portal (under the Deployments panel in the Resource Group).

Azure Portal - Resource Group Deployments

The Azure Bicep file is located here:

Deploy-AzVNETS2S.bicep
targetScope = 'resourceGroup'

///Parameter and Variable Setting

@minLength(3)
@maxLength(6)
param sitecode string = ''

param environment string = ''
param contactEmail string = ''

param resourceTags object = {
Application: 'Azure Infrastructure Management'
CostCenter: 'Operational'
CreationDate: dateTime
Environment: environment
CreatedBy: contactEmail
Notes: 'Created on behalf of: ${sitecode} for their Site to Site VPN.'
}

param dateTime string = utcNow('d')
param location string = resourceGroup().location

param sharedkey string = ''
param onpremisesaddress string = ''
param onpremisesgwip string = ''

//Resource Naming Parameters
param virtualNetworks_vnet_name string = '${sitecode}-vnet'
param connections_S2S_Connection_Home_name string = 'S2S_Connection_Home'
param publicIPAddresses_virtualngw_prod_name string = '${sitecode}-pip-vngw-${environment}'
param localNetworkGateways_localngw_prod_name string = '${sitecode}-localngw-${environment}'
param virtualNetworkGateways_virtualngw_prod_name string = '${sitecode}-virtualngw-${environment}'

resource localNetworkGateways_localngw_prod_name_resource 'Microsoft.Network/localNetworkGateways@2020-11-01' = {
name: localNetworkGateways_localngw_prod_name

location: location
properties: {
localNetworkAddressSpace: {
addressPrefixes: [
onpremisesaddress
]
}
gatewayIpAddress: onpremisesgwip
}
}

resource publicIPAddresses_virtualngw_prod_name_resource 'Microsoft.Network/publicIPAddresses@2020-11-01' = {
name: publicIPAddresses_virtualngw_prod_name
tags: resourceTags
location: location
sku: {
name: 'Standard'
tier: 'Regional'
}
properties: {
publicIPAddressVersion: 'IPv4'
publicIPAllocationMethod: 'Static'
idleTimeoutInMinutes: 4
ipTags: []
}
}

resource virtualNetworks_vnet_name_resource 'Microsoft.Network/virtualNetworks@2020-11-01' = {
name: virtualNetworks_vnet_name
location: location
tags: resourceTags
properties: {
addressSpace: {
addressPrefixes: [
'10.0.0.0/16'
]
}
subnets: [
{
name: 'GatewaySubnet'
properties: {
addressPrefix: '10.0.0.0/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'AzureBastionSubnet'
properties: {
addressPrefix: '10.0.0.64/27'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'AzureFirewallSubnet'
properties: {
addressPrefix: '10.0.0.128/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
{
name: 'appservers'
properties: {
addressPrefix: '10.0.2.0/24'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
]
virtualNetworkPeerings: []
enableDdosProtection: false
}
}

resource virtualNetworks_vnet_name_appservers 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'appservers'
properties: {
addressPrefix: '10.0.2.0/24'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_AzureBastionSubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'AzureBastionSubnet'
properties: {
addressPrefix: '10.0.0.64/27'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_AzureFirewallSubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'AzureFirewallSubnet'
properties: {
addressPrefix: '10.0.0.128/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource virtualNetworks_vnet_name_GatewaySubnet 'Microsoft.Network/virtualNetworks/subnets@2020-11-01' = {
parent: virtualNetworks_vnet_name_resource
name: 'GatewaySubnet'
properties: {
addressPrefix: '10.0.0.0/26'
delegations: []
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}

resource connections_S2S_Connection_Home_name_resource 'Microsoft.Network/connections@2020-11-01' = {
name: connections_S2S_Connection_Home_name
location: location
properties: {
virtualNetworkGateway1: {
id: virtualNetworkGateways_virtualngw_prod_name_resource.id
}
localNetworkGateway2: {
id: localNetworkGateways_localngw_prod_name_resource.id
}
connectionType: 'IPsec'
connectionProtocol: 'IKEv2'
routingWeight: 0
sharedKey: sharedkey
enableBgp: false
useLocalAzureIpAddress: false
usePolicyBasedTrafficSelectors: false
ipsecPolicies: []
trafficSelectorPolicies: []
expressRouteGatewayBypass: false
dpdTimeoutSeconds: 0
connectionMode: 'Default'
}
}

resource virtualNetworkGateways_virtualngw_prod_name_resource 'Microsoft.Network/virtualNetworkGateways@2020-11-01' = {
name: virtualNetworkGateways_virtualngw_prod_name
location: location
properties: {
enablePrivateIpAddress: false
ipConfigurations: [
{
name: 'default'
properties: {
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIPAddresses_virtualngw_prod_name_resource.id
}
subnet: {
id: virtualNetworks_vnet_name_GatewaySubnet.id
}
}
}
]
sku: {
name: 'VpnGw2'
tier: 'VpnGw2'
}
gatewayType: 'Vpn'
vpnType: 'RouteBased'
enableBgp: false
activeActive: false
bgpSettings: {
asn: 65515
bgpPeeringAddress: '10.0.0.62'
peerWeight: 0
}
vpnGatewayGeneration: 'Generation2'
}
}

Once deployed, run the following command to capture and copy the Gateway Public IP:

Get-AzPublicIPAddress | Select-Object Name, IpAddress 

Copy the Public IP; we will need it when configuring the UDM Pro. The address is assigned by Azure during deployment.
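If you have several public IPs in the subscription, you can narrow the query to just the gateway's address. The resource names below ('luke-pip-vngw-prod' and 'network_rg') are assumptions following the naming pattern used in the Bicep file; substitute your own.

```powershell
# Fetch just the Virtual Network Gateway's public IP
# (name/resource group follow the Bicep naming pattern used earlier - adjust to yours)
$gwIp = (Get-AzPublicIPAddress -Name 'luke-pip-vngw-prod' -ResourceGroupName 'network_rg').IpAddress
$gwIp
```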

Configure - Ubiquiti Dream Machine Pro

  1. Login to the UDM-Pro

  2. Unifi OS

  3. Click on Network (under Applications heading)

  4. Click Settings (Gear icon)

  5. Unifi OS - Network

  6. Click VPN

  7. UDM Pro Unifi OS - VPN

  8. Scroll down and click + Create Site-to-Site VPN

  9. Fill in the following information:

    • Network Name (e.g. Azure - SYD)
    • VPN Protocol (select Manual IPsec)
    • Pre-shared Key (enter the SAME key that was used by Azure Bicep to create the Connection; if you have lost it, it can be updated in Azure under Shared key on the connection attached to the Virtual network gateway, but changing it will break any other VPN connections still using the old key)
    • Server Address (make sure you select the interface for your WAN/External IP)
    • Remote Gateway/Subnets (Enter in the Address Prefix of your Azure virtual network or subnets, remember to add any peered virtual networks and Press Enter)
    • Remote IP Address (Enter the Public IP of the Virtual Network gateway, the same IP retrieved by the Get-AzPublicIPAddress cmdlet)
  10. UDM Pro - Azure S2S VPN

  11. Select Manual

  12. UDM Pro - Azure S2S VPN

    For the IPsec Profile, select Azure Dynamic Routing

  13. Click Apply Changes

After a few minutes, the VPN should connect, and you should be able to reach devices on the Azure network using their private IP addresses.
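A quick way to verify the tunnel from a Windows machine on-premises is to test a port on one of the Azure VMs. The address and port below are examples; use a private IP from your 'appservers' subnet and a port you know is listening.

```powershell
# Test reachability of an Azure VM over the tunnel (example IP from the 10.0.2.0/24 subnet)
Test-NetConnection -ComputerName 10.0.2.4 -Port 3389
```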

If you have problems, make sure that the gateway IPs and the pre-shared key match on both sides. You can also Pause the network from the UDM Pro and Resume it to reinitiate the connection.

You can also troubleshoot the VPN connection from the Azure Portal by navigating to the Virtual network gateway and selecting VPN Troubleshoot.

Azure Portal - VPN Troubleshoot

Azure Optimization Engine

· 21 min read

This post is a part of Azure Spring Clean, which is a community event focused on Azure management topics from March 14-18, 2022.

Thanks to Joe Carlyle and Thomas Thornton for putting in the time and organising this event.

This article, along with others of its kind (Articles, Videos etc.), cover Azure Management topics such as Azure Monitor, Azure Cost Management, Azure Policy, Azure Security Principles or Azure Foundations!

Today I will be covering the Azure Optimization Engine.

#AzureSpringClean - Azure Optimization Engine

Overview

The Azure Optimization Engine (AOE) is an extensible solution designed to generate optimization recommendations for your Azure environment, like a fully customizable Azure Advisor.

The first custom recommendations use-case covered by this tool was augmenting Azure Advisor Cost recommendations, particularly Virtual Machine right-sizing, with a fit score based on VM (Virtual Machine) metrics and properties.

The Azure Optimization Engine can…

  • Enable new custom recommendation types
  • Augment Azure Advisor recommendations with richer details that better drive action
  • Add fit score to recommendations.
  • Add historical perspective to recommendations (the older the recommendation, the higher the chances to remediate it)
  • Drive continuous automated optimisation

Azure Optimisation Engine combines multiple data sources to give you better data-driven decisions and recommendations beyond those usually provided by the inbuilt Azure Advisor. Example use-cases and data sources can be seen below:

  • Azure Resource Graph (Virtual Machine and Managed Disks properties)
  • Azure Monitor Logs (Virtual Machine performance metrics)
  • Azure Consumption (consumption/billing usage details events)
  • Extracts data periodically to build a recommendations history
  • Joins and queries data in an analytics-optimised repository (Log Analytics)
  • Virtual Machine performance metrics collected with Log Analytics agent
  • Can leverage existing customer setup
  • Requires only a few metrics collected with a frequency >= 60 seconds

Besides collecting all Azure Advisor recommendations, AOE includes other custom recommendations that you can tailor to your needs:

  • Cost
    • Augmented Advisor Cost VM right-size recommendations, with fit score based on Virtual Machine guest OS metrics (collected by Log Analytics agents) and Azure properties
    • Underutilized VM Scale Sets
    • Unattached disks
    • Standard Load Balancers without backend pool
    • Application Gateways without backend pool
    • VMs deallocated a long time ago (forgotten VMs)
    • Orphaned Public IPs
  • High Availability
    • Virtual Machine high availability (availability zones count, availability set, managed disks, storage account distribution when using unmanaged disks)
    • VM Scale Set high availability (availability zones count, managed disks)
    • Availability Sets structure (fault/update domains count)
  • Performance
    • VM Scale Sets constrained by lack of compute resources
  • Security
    • Service Principal credentials/certificates without expiration date
    • NSG rules referring to empty or non-existing subnets
    • NSG rules referring to orphan or removed NICs
    • NSG rules referring to orphan or removed Public IPs
  • Operational Excellence
    • Load Balancers without backend pool
    • Service Principal credentials/certificates expired or about to expire
    • Subscriptions close to the maximum limit of RBAC (Role Based Access Control) assignments
    • Management Groups close to the maximum limit of RBAC assignments
    • Subscriptions close to the maximum limit of resource groups
    • Subnets with low free IP space
    • Subnets with too much IP space wasted
    • Empty subnets
    • Orphaned NICs

Feel free to skip to the Workbook and Power BI sections to look at some of the out-of-the-box data and recommendations.

The Azure Optimisation Engine is battle-tested

  • Providing custom recommendations since Nov 2019
  • Serving Azure customers worldwide
  • From smaller customers with 50-500 VMs to larger ones with more than 5K VMs
  • Several customer-specific developments (custom collectors and recommendation algorithms)
  • Flexibility options include (multi-subscription and multi-tenant capability)
  • Based on cheap services (Azure Automation, Storage, small SQL Database)

A few hours after setting up the engine, you will get access to a Power BI dashboard and Log Analytics Workbooks with all Azure optimisation opportunities, coming from both Azure Advisor and the tailored recommendations included in the engine.

These recommendations are then updated every seven days.

It is worth noting that Azure Optimisation Engine is NOT an official Microsoft product and, as such, is not under official support. It was created and is maintained by Hélder Pinto, a Senior Customer Engineer at Microsoft. I would like to take the opportunity to thank Hélder for the amazing work he is doing with this product on a continuous basis, and for giving me his blessing to write this article; he has already done an amazing job documenting it on GitHub.

Architecture

Azure Optimization Engine Architecture

Azure Optimization Engine runs on top of Azure Automation (Runbooks for each data source) and Log Analytics. It is supplemented by a storage account to store JSON and an Azure SQL database to help control ingestion (last processed blob and lines processed).

Install

Prerequisites

Taken directly from the Git repository readme, the prerequisites for Azure Optimization Engine are:

  • A supported Azure subscription (see the FAQs on GitHub)
  • Azure PowerShell 6.6.0+ (Azure Bicep support is not currently available but is being worked on)
  • Microsoft.Graph.Authentication and Microsoft.Graph.Identity.DirectoryManagement PowerShell modules
  • A user account with Owner permissions over the chosen subscription, so that the Automation Managed Identity is granted the required privileges over the subscription (Reader) and deployment resource group (Contributor)
  • (Optional) A user account with at least Privileged Role Administrator permissions over the Azure AD tenant, so that the Managed Identity is granted the required privileges over Azure AD (Global Reader)

During deployment, you'll be asked several questions; it is best to plan for the following:

  • Whether you're going to reuse an existing Log Analytics Workspace or create a new one. IMPORTANT: you should ideally reuse a workspace where you have VMs onboarded and already sending performance metrics (Perf table); otherwise, you will not fully leverage the augmented right-size recommendations capability. If this is not possible/desired for some reason, you can still manage to use multiple workspaces (see Configuring Log Analytics workspaces).
  • An Azure subscription to deploy the solution (if you're reusing a Log Analytics workspace, you must deploy into the same subscription the workspace is in).
  • A unique name prefix for the Azure resources being created (if you have specific naming requirements, you can also choose resource names during deployment)
  • Azure region

If the deployment fails for some reason, you can repeat it, as it is idempotent (i.e. it can be applied multiple times without changing the result). The same process is used to upgrade a previous deployment to the latest version; you have to keep the same deployment options, so make sure you document them.

We will now go through and install the prerequisites from scratch, as in this article I will be deploying the Azure Optimization Engine from my local workstation.

You can also install from the Azure Cloud Shell.

Install Azure PowerShell & Microsoft Graph modules
  1. Open Windows PowerShell

  2. Type in:

    Install-Module -Name Az,Microsoft.Graph.Authentication,Microsoft.Graph.Identity.DirectoryManagement -Scope CurrentUser -Repository PSGallery -Force

Install

Now that we have the prerequisites installed, let's set up Azure Optimization Engine!

  1. In your favourite web browser, navigate to the AzureOptimizationEngine GitHub repository.

  2. Select Code, Download Zip

  3. Azure Optimization Engine - GitHub

  4. Download and extract the ZIP file to a location you can easily navigate to in PowerShell (I have extracted it to C:\temp\AzureOptimizationEngine-master\AzureOptimizationEngine-master)

  5. Open PowerShell (or Windows Terminal)

  6. Because the scripts were downloaded from the internet, we will need to unblock them so that we can run them; open PowerShell and run the command below (changing the path to the location the files were extracted to)

    Get-ChildItem -r 'C:\temp\AzureOptimizationEngine-master\AzureOptimizationEngine-master' | Unblock-File
  7. Now that the script and associated files have been unblocked, change the directory to the location of the Deploy-AzureOptimizationEngine.ps1 file.

  8. Run: .\Deploy-AzureOptimizationEngine.ps1

  9. Windows Terminal - .\Deploy-AzureOptimizationEngine.ps1

  10. A browser window will then popup, authenticate to Azure (connect to the Azure tenant that has access to the Azure subscription you wish to set up Azure Optimization Engine on).

  11. Once authenticated, you will need to confirm the Azure subscription to which you want to deploy Azure Optimization Engine.

  12. Azure Optimization Engine - Select Subscription

  13. Once your subscription is selected, it's time to choose a naming prefix for your resources (if you press Enter, you can manually name each resource); in my case, my prefix will be: aoegeek. Because Azure Optimization Engine creates resources with globally unique names, make sure you select a prefix that suits your organisation/use-case, or you may run into issues with a name already being taken.

  14. Azure Optimization Engine - Select Region

  15. If you have an existing Log Analytics workspace that your Virtual Machines and resources are connected to, you can specify 'Y' here to select your existing resource; I am creating this from scratch, so I will choose 'N'.

  16. Azure Log Analytics

  17. The Azure Optimization Engine will now check that the names and resources are available to be deployed to your subscriptions and resources (nothing is deployed during this stage - if there is an error, you can fix the issue and go back).

  18. Once validation has passed, select the region that Azure Optimization will be deployed to; I will deploy to australiaeast, so I choose 1.

  19. Azure Optimization Engine now requires the SQL admin username for the SQL server and database it will create; I will go with: sqladmin

  20. Azure Optimization Engine - Region

  21. Now enter the password for the sqladmin account and press Enter

  22. Verify that everything is correct, then press Y to deploy Azure Optimization Engine!

  23. Windows Terminal - Deploy Azure Optimization Engine

  24. Deployment could take 10-25 minutes... (mine took 22 minutes and 51 seconds)

  25. While leaving the PowerShell window open, log into the Azure Portal; you should now have a new Resource Group, and your resources will start getting created... you can click on Deployments (under Settings navigation bar) in the Resource Group to review the deployment status.

  26. Azure Portal - Deployments

  27. If you notice a failure in the Deployment tab for 'PolicyDeployment', you can ignore it; it may have failed because the SQL Server hadn't been provisioned yet. Once it has been provisioned, you can navigate back to the failed deployment and click 'Redeploy' to deploy the SQL Security Alert policy.

Note: The Azure SQL database firewall will allow the public IP of the location the script was deployed from; you may need to adjust this depending on your requirements.

Configure

Onboard Azure VMs to Log Analytics using Azure Policy and PowerShell

Now that Azure Optimization Engine has been installed, let's onboard our current and future Azure Virtual Machines to it using Azure Policy. This is required if you want Azure Advisor Virtual Machine right-size recommendations augmented with guest OS metrics. If you don't collect metrics from the Virtual Machines, you will still have a fully functional Optimisation Engine with many recommendations, but the Advisor Virtual Machine right-size ones will be served as is.

  1. Open PowerShell and login to Azure using: Connect-AzAccount

  2. Connect to your Azure subscription that contains the Virtual Machines you want to onboard to Log Analytics

  3. Type:

    # Register the resource provider if it's not already registered
    Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
  4. The PowerShell script below will create a user-assigned managed identity, grant it the Log Analytics Contributor role over the subscription, assign the built-in 'Deploy - Configure Log Analytics extension to be enabled on Windows virtual machines' policy, and create a remediation task to apply it to existing VMs.

  5. Just update the variables to match your setup:

    #requires -Version 1.0
    # Variables
    #Enter your subscription name
    $subscriptionName = 'luke.geek.nz'
    #Enter the display name for the policy assignment
    $policyDisplayName = 'Deploy - Log Analytics' #Can't exceed 24 characters
    $location = 'australiaeast'
    $resourceGroup = 'aoegeek-rg'
    $UsrIdentityName = 'AOE_ManagedIdentityUsr'
    $param = @{
    logAnalytics = 'aoegeek-la'
    }
    # Get a reference to the subscription that will be the scope of the assignment
    $sub = Get-AzSubscription -SubscriptionName $subscriptionName
    $subid = $sub.Id
    #Creates User Managed identity
    $AzManagedIdentity = New-AzUserAssignedIdentity -ResourceGroupName $resourceGroup -Name $UsrIdentityName
    #Waits 10 seconds to allow Azure AD to replicate and recognise that the managed identity has been created
    Start-Sleep -Seconds '10'
    #Grants the managed identity the Log Analytics Contributor role over the subscription
    New-AzRoleAssignment -ObjectId $AzManagedIdentity.PrincipalId -Scope ('/subscriptions/' + $subid ) -RoleDefinitionName 'Log Analytics Contributor'
    # Get a reference to the built-in policy definition that will be assigned
    $definition = Get-AzPolicyDefinition | Where-Object -FilterScript {
    $_.Properties.DisplayName -eq 'Deploy - Configure Log Analytics extension to be enabled on Windows virtual machines'
    }
    # Create the policy assignment with the built-in definition against your subscription
    New-AzPolicyAssignment -Name $policyDisplayName -DisplayName $policyDisplayName -Scope ('/subscriptions/' + $subid ) -PolicyDefinition $definition -IdentityType 'UserAssigned' -IdentityId $AzManagedIdentity.id -location $location -PolicyParameterObject $param
    #Creates a remediation task to deploy the extension to existing VMs
    $policyAssignmentID = Get-AzPolicyAssignment -Name $policyDisplayName | Select-Object -Property PolicyAssignmentId
    Start-AzPolicyRemediation -Name 'Deploy - LA Agent' -PolicyAssignmentId $policyAssignmentID.PolicyAssignmentId -ResourceDiscoveryMode ReEvaluateCompliance

Note: The default 'Deploy - Configure Log Analytics extension to be enabled on Windows virtual machines' policy doesn't currently support Gen 2 or Windows Server 2022 Virtual Machines; if you have these, then you can copy the Azure Policy definition and then make your own with the new imageSKUs, although this policy may be replaced by the: Configure Windows virtual machines to run Azure Monitor Agent policy. Although I haven't tested it yet, the same script above can be modified to suit.

Onboard Azure VMs to Log Analytics using the Azure Portal

If you do not want to onboard VMs with Policy, you can do it manually via the Azure Portal.

  1. Open Azure Portal
  2. Navigate to Log Analytic Workspaces
  3. Click on the Log Analytic workspace that was provisioned for Azure Optimization Engine
  4. Navigate to Virtual Machines (under Workspace Data Sources)
  5. Click on the Virtual Machine you want to link up to the Log Analytics workspace, and click Connect - this will trigger the Log Analytics extension and agent to be installed. Repeat for any further Virtual Machines.
  6. Log Analytics - Connect VM
Setup Log Analytics Performance Counters

Now that we have Virtual Machines reporting to our Log Analytics instance, it's time to make sure we are collecting as much data as we need to give suitable recommendations. Luckily, a script called 'Setup-LogAnalyticsWorkspaces.ps1' is already included in the Azure Optimisation Engine repository to configure the performance counters.

  1. Open PowerShell (or Windows Terminal)

  2. Change the directory to the location of the Setup-LogAnalyticsWorkspaces.ps1, in the root folder of the repository extracted earlier

  3. Run the following PowerShell commands to download the required PowerShell Modules:

    Install-Module -Name Az.ResourceGraph
    Install-Module -Name Az.OperationalInsights
  4. Then run: .\Setup-LogAnalyticsWorkspaces.ps1

  5. The script will then go through all Log Analytics workspaces that you have access to and check for the required performance counters.

  6. Windows PowerShell - \Setup-LogAnalyticsWorkspaces.ps1

  7. If any are missing from the Log Analytics workspace, you can run:

    ./Setup-LogAnalyticsWorkspaces.ps1 -AutoFix

or

 #Fix specific workspaces configuration, using a custom counter collection frequency
./Setup-LogAnalyticsWorkspaces.ps1 -AutoFix -WorkspaceIds "d69e840a-2890-4451-b63c-bcfc5580b90f","961550b2-2c4a-481a-9559-ddf53de4b455" -IntervalSeconds 30
Setup Azure AD-based recommendations by granting permissions to the Managed Identity

Azure Optimization Engine has the ability to make recommendations based on Microsoft Entra ID roles and permissions, but in order to do that, the system-assigned identity of the Azure Automation account needs to be granted 'Global Reader' rights. As part of the deployment, you may have seen the following error:

Cannot bind argument to parameter 'DirectoryRoleId' because it is an empty string.

Could not grant role. If you want Azure AD-based recommendations, please grant the Global Reader role manually to the aoegeek-auto managed identity or, for previous versions of AOE, to the Run As Account principal.

We are going to grant the Azure Automation account 'Global Reader' rights manually in the Azure Portal.

  1. Open Azure Portal
  2. Navigate to Automation Accounts
  3. Open your Azure Optimisation Engine automation account
  4. Navigate down the navigation bar to the Account Settings section and select: Identity
  5. Azure Automation - Identity
  6. Copy the object ID
  7. Now navigate to Microsoft Entra ID
  8. Click on Roles and Administrators
  9. Search for: Global Reader
  10. Select Global Reader and select + Add assignments
  11. Paste in the object ID earlier, and click Ok to grant Global Reader rights to the Azure Automation identity.
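As a sketch of an alternative to the portal steps, the same assignment can be made with the Microsoft Graph PowerShell modules installed earlier. The placeholder object ID is an assumption; use the Automation account identity's object ID copied in step 6, and note the Global Reader role must already be activated in the tenant for it to be returned.

```powershell
# Grant Global Reader to the Automation account's managed identity via Microsoft Graph
# (<identity-object-id> is a placeholder - paste your Automation identity's object ID)
Connect-MgGraph -Scopes 'RoleManagement.ReadWrite.Directory'
$role = Get-MgDirectoryRole | Where-Object { $_.DisplayName -eq 'Global Reader' }
New-MgDirectoryRoleMemberByRef -DirectoryRoleId $role.Id -BodyParameter @{
    '@odata.id' = 'https://graph.microsoft.com/v1.0/directoryObjects/<identity-object-id>'
}
```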
Azure Automation - Runbooks & Automation

The wind that gives Azure Optimization Engine its lift is Azure Automation and its Runbooks; at the time I deployed this, I had one Azure Automation account and 33 runbooks!

Looking at the runbooks deployed, you can get a sense of what Azure Optimization Engine is doing...

| NAME | TYPE |
| --- | --- |
| aoegeek-auto | Automation Account |
| Export-AADObjectsToBlobStorage (aoegeek-auto/Export-AADObjectsToBlobStorage) | Runbook |
| Export-AdvisorRecommendationsToBlobStorage (aoegeek-auto/Export-AdvisorRecommendationsToBlobStorage) | Runbook |
| Export-ARGAppGatewayPropertiesToBlobStorage (aoegeek-auto/Export-ARGAppGatewayPropertiesToBlobStorage) | Runbook |
| Export-ARGAvailabilitySetPropertiesToBlobStorage (aoegeek-auto/Export-ARGAvailabilitySetPropertiesToBlobStorage) | Runbook |
| Export-ARGLoadBalancerPropertiesToBlobStorage (aoegeek-auto/Export-ARGLoadBalancerPropertiesToBlobStorage) | Runbook |
| Export-ARGManagedDisksPropertiesToBlobStorage (aoegeek-auto/Export-ARGManagedDisksPropertiesToBlobStorage) | Runbook |
| Export-ARGNICPropertiesToBlobStorage (aoegeek-auto/Export-ARGNICPropertiesToBlobStorage) | Runbook |
| Export-ARGNSGPropertiesToBlobStorage (aoegeek-auto/Export-ARGNSGPropertiesToBlobStorage) | Runbook |
| Export-ARGPublicIpPropertiesToBlobStorage (aoegeek-auto/Export-ARGPublicIpPropertiesToBlobStorage) | Runbook |
| Export-ARGResourceContainersPropertiesToBlobStorage (aoegeek-auto/Export-ARGResourceContainersPropertiesToBlobStorage) | Runbook |
| Export-ARGUnmanagedDisksPropertiesToBlobStorage (aoegeek-auto/Export-ARGUnmanagedDisksPropertiesToBlobStorage) | Runbook |
| Export-ARGVirtualMachinesPropertiesToBlobStorage (aoegeek-auto/Export-ARGVirtualMachinesPropertiesToBlobStorage) | Runbook |
| Export-ARGVMSSPropertiesToBlobStorage (aoegeek-auto/Export-ARGVMSSPropertiesToBlobStorage) | Runbook |
| Export-ARGVNetPropertiesToBlobStorage (aoegeek-auto/Export-ARGVNetPropertiesToBlobStorage) | Runbook |
| Export-AzMonitorMetricsToBlobStorage (aoegeek-auto/Export-AzMonitorMetricsToBlobStorage) | Runbook |
| Export-ConsumptionToBlobStorage (aoegeek-auto/Export-ConsumptionToBlobStorage) | Runbook |
| Export-RBACAssignmentsToBlobStorage (aoegeek-auto/Export-RBACAssignmentsToBlobStorage) | Runbook |
| Ingest-OptimizationCSVExportsToLogAnalytics (aoegeek-auto/Ingest-OptimizationCSVExportsToLogAnalytics) | Runbook |
| Ingest-RecommendationsToSQLServer (aoegeek-auto/Ingest-RecommendationsToSQLServer) | Runbook |
| Recommend-AADExpiringCredentialsToBlobStorage (aoegeek-auto/Recommend-AADExpiringCredentialsToBlobStorage) | Runbook |
| Recommend-AdvisorAsIsToBlobStorage (aoegeek-auto/Recommend-AdvisorAsIsToBlobStorage) | Runbook |
| Recommend-AdvisorCostAugmentedToBlobStorage (aoegeek-auto/Recommend-AdvisorCostAugmentedToBlobStorage) | Runbook |
| Recommend-ARMOptimizationsToBlobStorage (aoegeek-auto/Recommend-ARMOptimizationsToBlobStorage) | Runbook |
| Recommend-LongDeallocatedVmsToBlobStorage (aoegeek-auto/Recommend-LongDeallocatedVmsToBlobStorage) | Runbook |
| Recommend-UnattachedDisksToBlobStorage (aoegeek-auto/Recommend-UnattachedDisksToBlobStorage) | Runbook |
| Recommend-UnusedAppGWsToBlobStorage (aoegeek-auto/Recommend-UnusedAppGWsToBlobStorage) | Runbook |
| Recommend-UnusedLoadBalancersToBlobStorage (aoegeek-auto/Recommend-UnusedLoadBalancersToBlobStorage) | Runbook |
| Recommend-VMsHighAvailabilityToBlobStorage (aoegeek-auto/Recommend-VMsHighAvailabilityToBlobStorage) | Runbook |
| Recommend-VMSSOptimizationsToBlobStorage (aoegeek-auto/Recommend-VMSSOptimizationsToBlobStorage) | Runbook |
| Recommend-VNetOptimizationsToBlobStorage (aoegeek-auto/Recommend-VNetOptimizationsToBlobStorage) | Runbook |
| Remediate-AdvisorRightSizeFiltered (aoegeek-auto/Remediate-AdvisorRightSizeFiltered) | Runbook |
| Remediate-LongDeallocatedVMsFiltered (aoegeek-auto/Remediate-LongDeallocatedVMsFiltered) | Runbook |
| Remediate-UnattachedDisksFiltered (aoegeek-auto/Remediate-UnattachedDisksFiltered) | Runbook |

Many of the runbooks rely on Azure Automation variables, such as the Log Analytics workspace ID or the period in days to look back for Advisor recommendations. By default, the look-back period is '7', but you can change this variable to suit your organisation's needs.
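Such variables can be changed from PowerShell as well as the portal. The sketch below widens the Advisor look-back window; the variable name is an assumption based on AOE's naming conventions, so check the Variables blade of your Automation account for the exact name before running it.

```powershell
# Change the Advisor look-back window from the default 7 days to 30
# (variable name is an assumption - confirm it in the Automation account's Variables blade)
Set-AzAutomationVariable -AutomationAccountName 'aoegeek-auto' -ResourceGroupName 'aoegeek-rg' `
    -Name 'AzureOptimization_RecommendAdvisorPeriodInDays' -Value 30 -Encrypted $false
```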

Azure Automation - Runbooks & Automation

Azure Automation - Schedules

Along with the variables and configurations used by the Runbooks, the Automation account also contains the schedules for ingesting data into the storage account and SQL database. Most of these are daily, but some, such as ingesting from Azure Advisor, are weekly; by default, these times are in UTC.

Azure Automation - Schedules

When making changes to these schedules (or moving the Runbooks to be run from a Hybrid worker), it is recommended to use the Reset-AutomationSchedules.ps1 script. These times need to be in UTC.

Terminal - Reset-AutomationSchedules.ps1

Azure Automation - Credentials

When we set up the Azure SQL database earlier as part of the Azure Optimisation Engine setup, we configured the SQL admin account and password; these credentials are stored in the Azure Automation Credentials pane and used by the Runbooks.


View Recommendations

It's worth noting that, because the Azure Optimization Engine stores its data in Log Analytics and SQL, you can use languages such as KQL directly against the Log Analytics workspace to pull out any information you might need and build integrations with other toolsets.
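For example, you can run a KQL query against the workspace from PowerShell with the Az.OperationalInsights module. The workspace ID placeholder and the custom-log table name below are assumptions; browse your own workspace's custom logs to confirm which tables the engine created.

```powershell
# Query recent AOE recommendations from Log Analytics.
# The table name is an assumption - check your workspace's custom (*_CL)
# tables for the ones the engine actually created.
$query = @"
AzureOptimizationRecommendationsV1_CL
| where TimeGenerated > ago(7d)
| summarize count() by Category
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query |
    Select-Object -ExpandProperty Results
```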

Workbooks

There are three Azure Log Analytics workbooks included in the Azure Optimization Engine:

| Name | Type |
| ---- | ---- |
| Resources Inventory | Azure Workbook |
| Identities and Roles | Azure Workbook |
| Costs Growing | Azure Workbook |

They can be easily accessed in the Azure Portal.

  1. Log in to the Azure Portal
  2. Navigate to Log Analytics Workspaces
  3. Click on the Log Analytics workspace you set up for Azure Optimization Engine earlier and click on Workbooks (under General).
  4. Click on the Workbooks filter at the top to display the three Azure Optimization Engine workbooks.

Log Analytics - Workbooks

  5. After a few days of collecting data, you should now be able to see data like below.
Resource Inventory - General

Resource Inventory - Virtual Machines

Resource Inventory - Virtual Machine ScaleSets

Resource Inventory - Virtual Machine ScaleSets Disks

Resource Inventory - Virtual Networks

Identities and Roles - Overview

Power BI

The true power of the Azure Optimization Engine is the data stored in the SQL database; using Power BI, you can pull that data and the engine's recommendations into dashboards and make them more meaningful.

The Optimization Engine already ships with a starter Power BI file, which pulls data from the database.

Install PowerBI Desktop

  1. Open the Microsoft Store and search for: Power BI Desktop
  2. Click Get

Power BI Desktop

  3. Once downloaded, click Open
Obtain Azure SQL Information

In order to connect Power BI to the Azure SQL database, we need to know the URL of the database and make sure our public IP is allowed through the Azure SQL firewall.

  1. Open the Azure Portal
  2. Navigate to SQL Servers
  3. Click on the SQL server created earlier; under the Security heading, click on Firewall and Virtual Networks
  4. Under Client IP address, make sure your public IP is added and click Save

Azure SQL - Virtual Network

  5. Now that we have verified/added our client IP, we need to get the SQL database (not server) URL
  6. Click on Overview
  7. Click on the aoeoptimization database (under Available resources, at the bottom)
  8. Click on Copy to Clipboard for the server name/URL

Azure SQL - Database URL
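Before opening Power BI, you can confirm that the firewall rule and credentials work by querying the database directly. This sketch assumes the SqlServer PowerShell module is installed; the server name is an example, so use the URL you copied from the portal.

```powershell
# Quick connectivity test against the AOE database (requires the SqlServer module).
# Server name is an example - use the database URL copied from the portal.
$cred = Get-Credential   # enter the SQL Admin account configured during setup
Invoke-Sqlcmd -ServerInstance 'aoegeek-sql.database.windows.net' `
    -Database 'aoeoptimization' `
    -Credential $cred `
    -Query 'SELECT TOP 5 name FROM sys.tables'
```

If this returns a list of tables, the firewall rule and credentials are good, and Power BI will be able to connect with the same details.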
Open PowerBI Desktop File

Now that we have PowerBI Desktop installed, it's time to open: AzureOptimizationEngine.pbix. This PowerBI file is located in the Views folder of the Azure Optimization Engine repository.

  1. Open: AzureOptimizationEngine.pbix in PowerBI Desktop
  2. On the Home page ribbon, click on Transform Data
  3. Click Data source settings
  4. Click Change Source
  5. Change the default SQL server of aoedevgithub-sql.database.windows.net to your SQL database, copied earlier.
  6. Click Ok
  7. Click Ok and press Apply Changes
  8. It will prompt for credentials, click on Database
  9. Enter the SQL Admin credentials you configured as part of the Azure Optimization Engine setup
  10. Click Connect

After Power BI refreshes its data sources and queries, your report should now be populated with data like below.

PowerBI - Overview

PowerBI - Cost

PowerBI - High Availability

PowerBI - Security

PowerBI - Operational Excellence

Congratulations! You have now successfully stood up and configured Azure Optimization Engine!