Create a Public Holidays API using Microsoft Azure

· 15 min read

In a previous blog post, I used a third-party API (Application Programming Interface) to start a Virtual Machine when it wasn't a Public Holiday. That got me thinking: what if I wanted an API only accessible on an internal network, or wanted to include custom holidays such as Star Wars Day or company holidays? Could I create and query my own API using Microsoft Azure services? You can!

Overview

Today we will create a base Public Holidays API using several Microsoft Azure serverless services: Azure Functions, an Azure Storage Account and API Management.

Note: As this is a demonstration, I will be using a Consumption-based Azure Function and Azure Storage account. Although this is a good place to start, depending on your requirements you may be better off with the Azure Functions Premium plan to avoid cold-start times; and if you need a high volume of requests and writes (GETs and POSTs) and resiliency, replace the Storage account table with a Cosmos DB.

The solution will be made up of the following:

| Azure Service | Name | Plan | Note |
|---|---|---|---|
| Application Insights | ai-nzPublicHolidays-prd-ae | | |
| Azure API Management | apims-publicholidays-prd-ae | Developer (No SLA) | |
| Azure Function | func-nzpublicHolidays-prd-ae | Function App - Consumption | |
| Azure Storage Account | funcnzpublicholidaystgac | StorageV2 (general purpose v2) - Locally-redundant storage (LRS) | Contains 'PublicHolidays' table |
| Azure Storage Account | rgnzpublicholidayspb4ed | Storage (general purpose v1) - Locally-redundant storage (LRS) | Contains Azure Functions App Files |
| Resource Group | rg-publicholidays-prd-ae | Resource Group | Contains the above resources |

Azure Resource Group - Diagram

Pre-requisites

Note: AzTable is not part of the standard Az PowerShell module set and is a separate module you will need to install (Install-Module AzTable).

We will use a mix of the Azure Portal and PowerShell to deploy this solution from start to finish; you can find the source data and code directly in the GitHub repository here: lukemurraynz/PublicHoliday-API for reference (feel free to fork, raise pull requests etc.). In this guide, I will try not to assume preexisting knowledge (other than general Azure and PowerShell knowledge).

Deployment

The deployment steps will be separated into different sections to help simplify implementation.

First, make sure you adjust the names of your resources and locations to suit your naming conventions and regional locations (such as Australia East or West Europe); your deployments may fail if a name is already in use. See "Microsoft Azure Naming Conventions" for more on naming conventions.

Create Resource Group

The Resource Group will contain all resources related to the API that we will deploy today.

However, I recommend you consider what resources might be shared outside of this API - such as API Management - and put them in a separate Shared or Common Resource Group, to keep the lifecycle of your resources together (i.e. API resources all in one place, so if the API gets decommissioned, it is as easy as deleting the Resource Group).

  1. Log in to the Microsoft Azure Portal
  2. Click on the burger menu and click Resource groups
  3. Click + Create
  4. Select your Subscription
  5. Type in a name for your Resource Group (like 'rg-publicholidays-prd-ae')
  6. Select your Region and click Next: Tags
  7. Enter in applicable tags (i.e. Application: Public Holidays API)
  8. Click Next: Review + create
  9. Click Create

Create a resource group

If you prefer PowerShell, you can deploy a new Resource Group with the below:

New-AzResourceGroup -Name 'rg-publicholidays-prd-ae' -Location 'Australia East' -Tag @{Application="Public Holidays API"}

Create Storage Account

Now that the Resource Group has been created, it's time to create our Storage Account - which will hold our Table of Public Holiday data.

  1. Log in to the Microsoft Azure Portal
  2. Click on the burger menu and click Storage Accounts
  3. Click + Create
  4. Select the Subscription and Resource Group you created earlier
  5. Enter in a Name for your Storage Account (like 'funcnzpublicholidaystgac')
  6. Select your Region (i.e. Australia East)
  7. For Performance, I am going to select: Standard
  8. For Redundancy, as this is a demo, I will select Locally-redundant storage (LRS). However, if you plan on running this in production, you may consider ZRS for zone redundancy.
  9. If you plan on locking down the Storage Account to your Virtual Network or specific IP addresses, continue to the Networking tab; otherwise, we can accept the defaults and click Review.
  10. Click Create

If you prefer PowerShell, you can deploy a new Storage account with the below:

New-AzStorageAccount -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'funcnzpublicholidaystgac' -Location 'Australia East' -SkuName 'Standard_LRS' -Kind StorageV2

Import Public Holiday data

Create Azure Storage Account Table

Now that we have the Storage account that will hold our Public Holidays, it's time to import the data.

Most of this task will be done with PowerShell, but first, we need to create the Table that will hold our Public Holidays.

  1. Log in to the Microsoft Azure Portal
  2. Click on the burger menu and click Storage Accounts
  3. Navigate to your created Storage account
  4. In the Navigation blade, click Tables
  5. Click + Table
  6. For a Table Name, I will go with PublicHolidays
  7. Click Ok

Create Azure Storage Account Table

Alternatively, you can use PowerShell to create the Table:

$storageAccount = Get-AzStorageAccount -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'funcnzpublicholidaystgac'
$storageContext = $storageAccount.Context
New-AzStorageTable -Name 'PublicHolidays' -Context $storageContext

Import Public Holiday Data into Table

Now that we have the Azure storage account and PublicHolidays table, it's time to import the data.

If you want to do this manually, the Azure Table will have the following columns:

| Date | Country | Type | Name | Day | Year | Comments |

We could enter the data manually, but I will leverage the Nager API to download and parse a CSV file for a few countries. You can find the source data and code directly in the GitHub repository here: lukemurraynz/PublicHoliday-API for reference.

To do this, we will need PowerShell, so assuming you have logged into PowerShell and set the context to your Azure subscription, let us continue.

I have created a CSV (Comma-separated values) file with a list of countries (i.e. US, NZ, AU) called 'SourceTimeDate.csv' - you can adjust this to suit your requirements - and placed it in a folder on my C:\ drive called Temp\API.

Open PowerShell and run the following:

$Folder = 'C:\Temp\API\'
$Csv = Import-Csv "$Folder\DateTimeSource\SourceTimeDate.csv"
$CurrentYear = (Get-Date).Year

# Download the Public Holiday CSV from the Nager API for each country in the source file
ForEach ($Country in $Csv) {
    $CountryCode = $Country.Country
    Invoke-WebRequest -Uri "https://date.nager.at/PublicHoliday/Country/$CountryCode/$CurrentYear/CSV" -OutFile "$Folder\DateTimeSource\Country$CountryCode$CurrentYear.csv"
}

These cmdlets will download a CSV file into the API folder for each country, containing its Public Holidays for the current year; you can adjust the $CurrentYear variable for future years (i.e. 2025).

Once you have all the CSV files for your Public Holidays, and before we import the data into the Azure storage table, now is the time to create a new Custom Holidays CSV. You can easily copy an existing one to create a new CSV containing your company's public holidays or other days that may be missing from the standard list; make sure it matches the correct format and save it into the same folder.
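
For reference, the downloaded Nager CSV files (and therefore your custom file) start with a header row similar to the below - check one of your downloaded files for the exact header; the import script later in this article only reads the Date, LocalName, CountryCode, Type and Counties columns. The second line is a made-up custom-holiday example:

Date,LocalName,Name,CountryCode,Fixed,Global,Counties,LaunchYear,Type
2023-05-04,Star Wars Day,Star Wars Day,NZ,False,True,,,Public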

Custom Public Holidays API

Now that you have all your CSV files containing the Public Holidays in your Country or countries, it's time to import them into the Azure Table. First, we import the data using a PowerShell session logged into Azure.

# Imports Public Holidays into the Azure Storage table
# Requires the AzTable module (not part of the standard Az cmdlets)
Import-Module AzTable

# Imports the data from the downloaded CSV files into the $GlobalHolidays variable
$Folder = 'C:\Temp\API\'

$GlobalHolidays = Get-ChildItem "$Folder\DateTimeSource\*.csv" | ForEach-Object {
    Import-Csv $_
}

#Connect-AzAccount
# Connects to the Azure Storage Account
$storageAccountName = 'funcnzpublicholidaystgac'
$resourceGroupName = 'rg-publicholidays-prd-ae'
$tableName = 'PublicHolidays'
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
$storageContext = $storageAccount.Context
$cloudTable = (Get-AzStorageTable -Name $tableName -Context $storageContext).CloudTable

# Imports the CSV data into the Azure Table, one row per holiday
$counter = 0
ForEach ($Holiday in $GlobalHolidays) {
    $Date = [DateTime]($Holiday.Date)
    $Dayofweek = $Date.DayOfWeek | Out-String
    $Year = $Date.Year
    # Reformat the date to suit the NZ format (dd-MM-yyyy)
    $HolidayDate = Get-Date $Date -Format 'dd-MM-yyyy'

    Add-AzTableRow `
        -Table $cloudTable `
        -PartitionKey '1' `
        -RowKey ("$(++$counter)") -Property @{"Date" = $HolidayDate; "Country" = $Holiday.CountryCode; "Type" = $Holiday.Type; "Name" = $Holiday.LocalName; "Day" = $Dayofweek; "Year" = $Year; "Comments" = $Holiday.Counties }
}

# Validate the data in the Storage table
Get-AzTableRow -Table $cloudTable

Import CSV to Azure Storage Account

Validate Azure Storage Account Table

Now the Public Holidays are imported into the Azure storage account table with additional information, such as the Day each holiday falls on, and the Date format has been changed to suit the NZ format (DD-MM-YYYY).

If we log in to the Azure Portal and navigate to the Storage account, under Storage Browser we can now see that our Table is full of Public Holidays.

Create API

Now that we have our Table with Public Holiday data, it's time to create our Azure Function to act as the API that will talk to the Azure storage account!

Create Azure Function
  1. Log in to the Microsoft Azure Portal
  2. Click on the burger menu and click Resource groups
  3. Navigate to your resource group and click + Create
  4. Search for: Function
  5. Select Function App, and click Create
  6. Enter your Function App Name (i.e. 'func-nzpublicHolidays-prd-ae')
  7. For Runtime Stack, select PowerShell Core
  8. Select the latest version (at this time, it's 7.2)
  9. Select your Region
  10. Select Windows
  11. Set your Plan (in my example, it's Consumption (Serverless))
  12. Click Review + Create
  13. Click Create

Azure Function - Create

Configure Environment Variables

Now that the Function App has been created, before creating the GetPublicHoliday function we need to add a few environment variables that the Function will use; these variables will contain the Resource Group and Storage account names.

  1. Navigate to your Azure Function
  2. Click Configuration
  3. Click + New application setting
  4. Under the name, add: PublicHolidayRESOURCEGROUPNAME
  5. For value, type in the name of your resource group.
  6. Add a second application setting named: PublicHolidaySTORAGEACCNAME
  7. For value, type in the name of your storage account that contains the Public Holiday table.
  8. Click Save (to save the variables).
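
If you prefer PowerShell, the same application settings can be added with the Az.Functions module (a minimal sketch, assuming the resource names used earlier in this article):

# Requires the Az.Functions module (Install-Module Az.Functions)
# Adds the two application settings that the GetPublicHoliday function will read at runtime
Update-AzFunctionAppSetting -Name 'func-nzpublicHolidays-prd-ae' `
    -ResourceGroupName 'rg-publicholidays-prd-ae' `
    -AppSetting @{
        PublicHolidayRESOURCEGROUPNAME = 'rg-publicholidays-prd-ae'
        PublicHolidaySTORAGEACCNAME    = 'funcnzpublicholidaystgac'
    }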

Azure Function - Variables

Configure Managed Identity

Next, we need to give the Function App the ability to read the Azure storage account. To do this, we need to configure a System assigned managed identity.

  1. Navigate to your Azure Function
  2. Click Identity
  3. Under the System assigned heading, toggle the status to On
  4. Click Save
  5. Select Yes, to enable the System assigned managed identity
  6. Under Permissions, click Azure role assignments
  7. Click + Add role assignment
  8. For Scope, select Storage
  9. Select your Subscription and storage account containing your Public Holiday data
  10. For role, select Contributor (Storage Table Data Reader is not enough).
  11. Click Save
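
If you prefer PowerShell, the identity and role assignment can be configured with a couple of cmdlets (a sketch, assuming the resource names used earlier; Set-AzWebApp works against Function Apps as they run on App Service):

# Enable the system-assigned managed identity on the Function App
Set-AzWebApp -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'func-nzpublicHolidays-prd-ae' -AssignIdentity $true

# Grant the identity Contributor rights on the storage account holding the table
$app = Get-AzWebApp -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'func-nzpublicHolidays-prd-ae'
$storage = Get-AzStorageAccount -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'funcnzpublicholidaystgac'
New-AzRoleAssignment -ObjectId $app.Identity.PrincipalId -RoleDefinitionName 'Contributor' -Scope $storage.Id
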
Configure Requirements

The Azure function app will rely on a few PowerShell Modules; for the FunctionApp to load them, we need to add them to the requirements.psd1 file.

  1. Navigate to your Azure Function

  2. Click App files

  3. Change the dropdown to requirements.psd1

  4. In the hash array, make sure the 'Az' module line is commented out (loading the entire Az module set would increase startup delay, and those extra modules aren't needed), and add the following:

    # This file enables modules to be automatically managed by the Functions service.
    # See https://aka.ms/functionsmanageddependency for additional information.
    #
    @{
        # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
        # To use the Az module in your function app, please uncomment the line below.
        # 'Az' = '8.*'
        'Az.Accounts'  = '2.*'
        'Az.Storage'   = '4.*'
        'Az.Resources' = '2.*'
        'AzTable'      = '2.*'
    }
  5. Click Save

Create Function PublicHolidays

Now that the Function App has been configured, it is time to create our Function.

  1. Navigate to your Azure Function
  2. Click Functions
  3. Click + Create
  4. Change Development environment to Develop in Portal
  5. For Template, select HTTP trigger
  6. For the New Function name, I will go with GetPublicHoliday
  7. Change Authorization level to Anonymous (if you aren't going to implement API Management, select Function and look at whitelisting your IP only; we will be locking access down to API Management later).
  8. Click Create

Create Azure Function App

  1. Click Code + Test

  2. Copy the following code into the run.ps1 file; this code is the core of the Function, reading the HTTP request and returning a PowerShell object with the Public Holiday information as part of a GET request:

    <# The code below does the following, explained in English:
    1. Reads the query parameters from the request.
    2. Writes to the Azure Functions log stream.
    3. Queries the PublicHolidays table for the requested Date and CountryCode.
    4. Associates values to output bindings by calling 'Push-OutputBinding'.
    https://luke.geek.nz/ #>

    using namespace System.Net

    # Input bindings are passed in via param block.
    param([Parameter(Mandatory = $true)]$Request, [Parameter(Mandatory = $true)]$TriggerMetadata)

    # Write to the Azure Functions log stream.
    Write-Host 'GetPublicHoliday function processed a request.'

    # Interact with query parameters or the body of the request.
    $date = $Request.Query.Date
    $country = $Request.Query.CountryCode

    $resourceGroupName = $env:PublicHolidayRESOURCEGROUPNAME
    $storageAccountName = $env:PublicHolidaySTORAGEACCNAME
    $tableName = 'PublicHolidays'

    # The original caller's IP, taken from the X-Forwarded-For header
    $ClientIP = $Request.Headers."x-forwarded-for".Split(":")[0]

    try {
        Import-Module AzTable

        $storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName
        $storageContext = $storageAccount.Context
        $cloudTable = (Get-AzStorageTable -Name $tableName -Context $storageContext).CloudTable

        # Build a combined filter for the requested Country and Date, and query the table once.
        [string]$Filter1 = [Microsoft.Azure.Cosmos.Table.TableQuery]::GenerateFilterCondition("Country", [Microsoft.Azure.Cosmos.Table.QueryComparisons]::Equal, $country)
        [string]$Filter2 = [Microsoft.Azure.Cosmos.Table.TableQuery]::GenerateFilterCondition("Date", [Microsoft.Azure.Cosmos.Table.QueryComparisons]::Equal, $date)
        [string]$finalFilter = [Microsoft.Azure.Cosmos.Table.TableQuery]::CombineFilters($Filter1, "and", $Filter2)
        $object = Get-AzTableRow -Table $cloudTable -CustomFilter $finalFilter

        # Shape the response as a PowerShell object
        $body = @()
        $System = New-Object -TypeName PSObject
        Add-Member -InputObject $System -MemberType NoteProperty -Name CountryCode -Value $object.Country
        Add-Member -InputObject $System -MemberType NoteProperty -Name HolidayDate -Value $object.Date
        Add-Member -InputObject $System -MemberType NoteProperty -Name HolidayYear -Value $object.Year
        Add-Member -InputObject $System -MemberType NoteProperty -Name HolidayName -Value $object.Name
        Add-Member -InputObject $System -MemberType NoteProperty -Name HolidayType -Value $object.Type
        Add-Member -InputObject $System -MemberType NoteProperty -Name Comments -Value $object.Comments
        Add-Member -InputObject $System -MemberType NoteProperty -Name RequestedIP -Value $ClientIP
        $body += $System

        $status = [Net.HttpStatusCode]::OK
    }
    catch {
        $body = "Failure connecting to table for state data, $_"
        $status = [Net.HttpStatusCode]::BadRequest
    }

    # Associate values to output bindings by calling 'Push-OutputBinding'
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
            StatusCode = $status
            Body       = $body
        })
  3. Click Save

Test Function - PublicHolidays

Before proceeding with the next step, it's time to test the function.

  1. Navigate to your Azure Function
  2. Click Functions
  3. Click GetPublicHoliday
  4. Click Code + Test
  5. Click Test/Run
  6. Change HTTP method to GET
  7. Under Query, add a CountryCode value and a Date value.

Note: Make sure the date and country formats match what is in the Azure storage account.

You can also Invoke the function app directly with PowerShell, with the Date and Country as Parameters at the end:

Invoke-RestMethod -URI "https://func-nzpublicholidays-prd-ae.azurewebsites.net/api/GetPublicHoliday?Date=25-12-2023&CountryCode=NZ"

Test Public Holiday API

Congratulations! You have now created a Public Holiday API that you can call for automation! You can lock down the Function App to only certain IPs or proceed to configure Azure API Management.

Configure Azure API Management

Now that the Function App responds to requests, we can expose the HTTP endpoint through Azure API Management. Azure API Management will give greater flexibility and security over API endpoints, particularly when dealing with more than one API. Azure API Management also offers inbuilt shared cache functionality and integration into Azure Cache for Redis.

  1. Log in to the Microsoft Azure Portal
  2. Navigate to your Azure Function
  3. On the Navigation blade, select API Management
  4. Click Create New
  5. Select your subscription, Region, and organisation name.
  6. Select a Pricing Tier
  7. Click Review + Create
  8. Click Create
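
If you prefer PowerShell, the instance can also be created with New-AzApiManagement (a sketch; the organisation name and admin email below are placeholders to replace with your own):

# Creates a Developer-tier API Management instance (provisioning takes a while)
New-AzApiManagement -ResourceGroupName 'rg-publicholidays-prd-ae' -Name 'apims-publicholidays-prd-ae' `
    -Location 'Australia East' -Organization 'luke.geek.nz' -AdminEmail 'admin@example.com' -Sku Developer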

Create Azure API Management

  1. Provisioning will take anywhere from 10 minutes to half an hour; while it takes place, Azure API Management will be in an Activating state.

  2. Once API Management has been provisioned, you can copy the Virtual IP (VIP) addresses of API Management and restrict your function app to only allow inbound access from that IP.

  3. Once you have done that, add the GetPublicHoliday function app into Azure API Management, add the paths to add a version, and then, using the subscription key, run the following command to pull data:

    Invoke-RestMethod -uri "https://apims-nzpublicholidays-prd-ae.azure-api.net/v1/GetPublicHoliday?Date=4/05/2022&CountryCode=NZ&Ocp-Apim-Subscription-Key=$KEY"

Microsoft Azure - ZonalAllocationFailed

· 5 min read

Error code: AllocationFailed or ZonalAllocationFailed

Error message: "Allocation failed. We do not have sufficient capacity for the requested VM size in this region. Read more about improving likelihood of allocation success at https://aka.ms/allocation-guidance"

When you create a virtual machine (VM), start stopped (deallocated) VMs, or resize a VM, Microsoft Azure allocates compute resources to your subscription.

ZonalAllocationFailed

Microsoft is continually investing in additional infrastructure and features to ensure that they always have all VM types available to support customer demand. However, you may occasionally experience resource allocation failures because of unprecedented growth in demand for Azure services in specific regions.

These tips also apply to the 'Following SKUs have failed for Capacity Restrictions' error.

This error could also be caused by a parameter issue with your Infrastructure as Code deployments, if you are too restrictive and it attempts to create a resource that isn't supported - an example is a SKU that doesn't support Accelerated Networking, or an attempt to deploy an Ultra SSD disk for a SKU that doesn't support it.

Waiting for more Compute to be added to the Azure server clusters may not be an option, so what can you do?

Raise a Support Case
  • Take a screenshot of the error
  • Copy the Activity/Deployment ID
  • Take note of the Region
  • Take note of the Availability Zone.

Let Azure Support know; Microsoft may already be aware, but raising a support request helps identify potentially impacted customers. If you know of other SKUs you need to deploy, you can let them know too.

Purchase On-demand Capacity Reservation

On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time.

Unlike Reserved Instances, you do not have to sign up for a 1-year or a 3-year term commitment.

Once the Capacity Reservation is created, the capacity is available immediately and is exclusively reserved for your use until the reservation is deleted.

Capacity Reservations are priced at the same rate as the underlying VM size.

For example, if you create a reservation for the D2s_v3 VMs, you will start getting billed for the D2s_v3 VMs, even if the reservation is not being used.

So why would you purchase On-demand Capacity reservations?

  • You are operating Azure workloads that scale out and run off a fresh image, like a Citrix farm, and want to ensure the capacity is available for the minimum workloads you need.
  • You have a project coming up where you need the capacity to be available.
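
Creating a reservation can be done with the Az.Compute cmdlets; here is a sketch using the D2s_v3 example above (the resource group and reservation names are hypothetical):

# Create a Capacity Reservation group pinned to Availability Zone 1
New-AzCapacityReservationGroup -ResourceGroupName 'rg-prod' -Name 'crg-prod-ae' -Location 'Australia East' -Zone '1'

# Reserve capacity for two D2s_v3 VMs (billing starts immediately, used or not)
New-AzCapacityReservation -ResourceGroupName 'rg-prod' -ReservationGroupName 'crg-prod-ae' -Name 'cr-d2sv3' `
    -Location 'Australia East' -Zone '1' -Sku 'Standard_D2s_v3' -CapacityToReserve 2
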
Redeploy to another Availability Zone

The server cluster that ARM (Azure Resource Manager) attempted to deploy your workload to may not have the necessary capacity, but another Availability Zone (datacenter) might.

Make sure your Virtual Machine is not in a Proximity Placement Group or Availability Set, and do the following.

  1. Take note of the Availability Zone that your deployment failed (i.e. Availability Zone 1)
  2. Remove any resources that may have been created as part of the original failed deployment.
  3. Redeploy your workload and select another Availability Zone, such as Zone 2 (if your failed deployment was in Zone 1)

Change the Virtual Machine version

By version, I don't mean Generation 1 and Generation 2 Virtual Machines; I mean the version of underlying Compute; when you look at a VM SKU size, you will see:

Standard_DC24s_v3

[Family] + [Sub-family]* + [# of vCPUs] + [Constrained vCPUs]* + [Additive Features] + [Accelerator Type]* + [Version]

You can read more about Virtual Machine naming conventions "here".

The version of the VM series links to the underlying hardware associated with the Virtual Machine series; with most new hardware releases, the version changes; an example is: from v3 to v4.

#Tip: From time to time, Microsoft may run a promotion on pricing for early adopters to move to the new version; these SKUs can be seen in the Azure Portal with "Promo" in the name.

  1. You can change the version of the SKU by looking in the Azure Portal under Sizing; you should be able to select different versions of the same SKU. If you are on v5, try resizing to v4 - or the other way around.

Remember that changing the VM SKU will force the Virtual Machine to deallocate (stop), as it triggers ARM to stand up the Virtual Machine on different server clusters/hardware.

I have found that there are no noticeable decreases in performance for most workloads, but keep in mind you may be running on older hardware - it should get you going, though, and then you can update the SKU to the latest version later.
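
As a sketch, the resize can also be done with PowerShell (the VM and resource group names are hypothetical; run Get-AzVMSize first to confirm the target size is available to the VM):

# List the sizes the VM can be resized to on its current cluster
Get-AzVMSize -ResourceGroupName 'rg-prod' -VMName 'vm-prod-01'

# Resize to a different version of the same SKU family (the VM may deallocate)
$vm = Get-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-prod-01'
$vm.HardwareProfile.VmSize = 'Standard_D2s_v4'
Update-AzVM -VM $vm -ResourceGroupName 'rg-prod'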

Increase regional vCPU quotas

Azure Resource Manager enforces two types of vCPU quotas for virtual machines:

  • standard vCPU quotas
  • spot vCPU quotas

Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They are enforced at two tiers, for each subscription, in each region:

  • The first tier is the total regional vCPU quota.
  • The second tier is the VM-family vCPU quota, such as D-series vCPUs.

Check your subscription quotas and, if necessary, raise a request to increase them by following the guide here: [Increase regional vCPU quotas](https://learn.microsoft.com/azure/quotas/regional-quota-requests?WT.mc_id=AZ-MVP-5004796)
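
You can check your current usage against both quota tiers from PowerShell, for example:

# Shows CurrentValue vs Limit for the regional total and each VM family
Get-AzVMUsage -Location 'Australia East'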

Azure Virtual Machine and a custom MAC address

· 3 min read

You may need an Azure Virtual Machine to install or license software bound to a media access control address (MAC address).

In Microsoft Azure, you can make changes to the Primary Network interface; these changes range from manually setting the IP settings to changing the MAC address - these settings are managed by the underlying Network Interface and Azure host.

If you do inadvertently make changes to this, you will lose connection to the Azure Virtual Machine - however, don't panic! The connection will only be lost until the VM is rebooted and the configuration is reset by the Azure fabric.

This causes issues when the software is licensed to a specific MAC address; you could reissue the license to the new MAC address OR create a Secondary Interface in Microsoft Azure and update the MAC address on the secondary network interface.

You can easily create a new Network Interface from the Azure Portal and then attach it to the Virtual Machine (the virtual machine needs to be off to allow the NIC to be attached).
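
As a sketch, the same can be done with PowerShell (all of the names below are hypothetical; note the VM is deallocated first, and the existing NIC is flagged as primary):

# Stop the VM so the secondary NIC can be attached
Stop-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-prod-01' -Force

# Create the secondary NIC on an existing subnet
$subnet = (Get-AzVirtualNetwork -ResourceGroupName 'rg-prod' -Name 'vnet-prod').Subnets[0]
$nic = New-AzNetworkInterface -ResourceGroupName 'rg-prod' -Name 'vm-prod-01-nic2' -Location 'Australia East' -SubnetId $subnet.Id

# Attach the NIC, keeping the original NIC as primary, then start the VM
$vm = Get-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-prod-01'
$vm.NetworkProfile.NetworkInterfaces[0].Primary = $true
Add-AzVMNetworkInterface -VM $vm -Id $nic.Id | Update-AzVM -ResourceGroupName 'rg-prod'
Start-AzVM -ResourceGroupName 'rg-prod' -Name 'vm-prod-01'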

Change Network Adapter MAC using PowerShell

Once the NIC is created and attached, run the following PowerShell command in the Azure Virtual Machine (assuming this is a Windows OS, but the same process should work for Linux):

Get-NetAdapter

You want to make sure you are targeting the right Network Adapter; in my example, it is the Hyper-V Interface #2 (with #1 being my Primary NIC).

Add the new MAC address into the $MACAddress variable, and make sure you update the InterfaceDescription to match the Network Adapter you are targeting (note the wildcard before the #2; this targets any network adapter whose description ends in #2).

# The new MAC address to assign to the secondary adapter
$MACAddress = '000000000000'
# Target the adapter whose interface description ends in #2
$NetAdapter = Get-NetAdapter -InterfaceDescription "*#2"
Set-NetAdapter -Name $NetAdapter.Name -MacAddress $MACAddress

Change Network Adapter MAC using Device Manager

You can also use Device Manager to check and update the MAC address:

  1. Open the Device Manager.
  2. Expand the Network Adapters section.
  3. Right-click on your adapter and select Properties.
  4. On the Advanced tab, select the Network Address property.
  5. Enter your new MAC address.
  6. Reboot your computer to enable the changes.
  7. Check that the changes took effect.
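
You can confirm the change took effect from PowerShell:

# The MacAddress column should now show your custom address
Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress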

Finally - make sure you document this MAC address somewhere, with the reasons WHY the change was made. You can also Tag the secondary network interface in Azure with notes, such as the reason it exists, who created it, etc.

Deploy Azure Naming Tool into an Azure WebApp as a container

· 14 min read

Organising your cloud workloads to support governance, operational management, and accounting requirements can take a lot of effort before the first resource is created.

Well-defined naming and metadata tagging conventions help to locate and manage resources quickly. These conventions also help associate cloud usage costs with business teams via chargeback and show-back accounting mechanisms, along with rapidly identifying what services are in use across your environment.

A useful naming convention composes resource names from important information about each resource. A well-chosen name helps you quickly identify the resource's type, associated workload, deployment environment, and the Azure region hosting it. Some resource names, such as PaaS services with public endpoints or virtual machine DNS labels, have global scopes, so they must be unique across the Azure platform.

There's no one-size-fits-all Azure naming convention; it needs to suit your organisation. However, it is worth noting that there are limitations to naming rules for Azure resources.

With rules around naming resources that are global or specific to Resource Groups, maximum character limits, and characters that can't be used, it can become a project on its own. Even in the world of Cloud, where resources are treated as cattle and not pets, the effort to develop a proper naming convention used across teams or even companies can be quite complex.

This is where the Azure Naming Tool, as part of the Microsoft Cloud Adoption framework, comes into play.

Overview

The Naming Tool (v2 as of June 2022) was developed using a naming pattern based on Microsoft's best practices. Once the organisational components have been defined by an administrator, users can use the tool to generate a name for the desired Azure resource.

Azure Naming Tool

This tool, sitting in the Azure Naming Tool GitHub repository, runs as a standalone web application (a .NET 6 Blazor application) using stateless JSON files for its configuration, and offers users the ability to generate and customise their own Microsoft Azure naming convention, taking all the restrictions into account. In addition, the Azure Naming Tool provides a Swagger API that can be used in your Infrastructure as Code deployments to generate the names of resources on the fly.

Azure Naming Tool - Reference

This information is straight from the project README.md:

Project Components

  • UI/Admin
  • API
  • JSON configuration files
  • Dockerfile

Important Notes

The following are important notes/aspects of the Azure Naming Tool:

  • The application is designed to run as a stand-alone solution, with no internet/Azure connection.
  • The application can be run as a .NET 6 site, or as a Docker container.
  • The site can be hosted in any environment, including internal or in a public/private cloud.
  • The application uses local JSON files to store the configuration of the components.
  • The application requires persistent storage. If running as a container, a volume is required to store configuration files.
  • The application contains a repository folder, which contains the default component configuration JSON files. When deployed, these files are copied to the settings folder.
  • The Admin interface allows configurations to be "reset", if needed. This process copies the configuration from the repository folder to the settings folder.
  • The API requires an API Key for all executions. A default API Key (guid) will be generated on first launch. This value can be updated in the Admin section.
  • On first launch, the application will prompt for the Admin password to be set.

Deployment

Prerequisites

Today, we will deploy the Azure Naming Tool into an Azure WebApp, running as a Container.

Azure Naming Tool - High-Level Architecture

The Azure resources we will create are:

  • Resource Group
  • Azure Container Registry
  • App Service Plan (Linux)
  • Azure WebApp (running the Naming Tool container)
  • Storage account (with an Azure file share for persistent configuration)

You need Contributor rights on at least a Resource Group to deploy these Azure resources.

We will be using a mix of services such as the Azure Cloud Shell, Azure Bicep, the Azure CLI and PowerShell.

To reduce the need to set up these dependencies on individual workstations, we will use a mix of the Azure Cloud Shell and Azure Portal. If you haven't set up your Azure Cloud Shell, you can refer to an article I wrote previously "here"; for the remainder of this article, I am going to assume you have it set up already.

Note: I will connect to the Cloud Shell using the Windows Terminal so that any screenshots will be of the Terminal, but it's the same behaviour if I used the browser experience.

Clone the Git Repository

Now it is time to clone the git repository into our Cloud Shell so that we can build the Docker image definition.

  1. Log in to the Microsoft Azure Portal and open up the Azure Cloud Shell (make sure you are in PowerShell (not Bash)).

  2. Run the following commands and wait for the Repository to be cloned directly into the CloudShell virtual instance:

    git clone https://github.com/mspnp/AzureNamingTool

Azure Naming Tool - Clone Repo

Create Resource Group & Azure Container Registry

Now that we have our Repository, it's time to create our Resource Group and Container Registry (Public); we will use a few PowerShell cmdlets to create the resources. Make sure you change the name of your Container Registry and Resource Group to match your environment.

  1. Log in to the Microsoft Azure Portal and open up the Azure Cloud Shell (make sure you are in PowerShell (not Bash)).
  2. Run the following commands to create the Resource Group and the Azure Container Registry:

Remember to change the name of the Container Registry - this is a globally unique resource, so if someone else has already created a registry with the same name, yours won't deploy.

# Create the Resource Group, create a Basic-tier Container Registry with the admin user enabled, then log in to the registry
$ResourceGroup = New-AzResourceGroup -Name 'AzNamingTool-PROD-RG' -Location 'Australia East'
$registry = New-AzContainerRegistry -ResourceGroupName 'AzNamingTool-PROD-RG' -Name 'ContainerRegistryAzNamingTool' -EnableAdminUser -Sku Basic
Connect-AzContainerRegistry -Name $registry.Name

AzureNaming Tool - Create Resource Group & Azure Container Registry

Build your image to the Azure Container Registry

The Azure Container Registry will be used to host and build your image definition, as Docker support is not native to the Azure Cloud Shell; now that we have created it, it is time to build the image and push it to the registry. Ensure you are in the AzNamingTool folder (CloudAdoptionFramework/ready/AzNamingTool/).

  1. Run the following Azure CLI command:

    az acr build --image azurenamingtool:v1 --registry $registry.Name --file Dockerfile .

AzureNaming Tool - Azure Container Registry

Deploy Azure App Service and WebApp

For the following, we will use a mix of Azure Bicep and the Azure Portal (I ran into an Access Key error and PowerShell issue when attempting to map the share using Bicep and PowerShell - if you manage to complete the setup that way, feel free to add a comment below).

Azure Bicep will be used to create the App Service and Storage account + file share, and then we will use the Azure Portal to complete the setup (Azure WebApp as a Container and mapping the persistent file share).

First, we need to install Azure Bicep and get the Bicep file into Cloud Shell. We could upload the file straight from the Portal or clone a repo with the file, but because I am using Azure Cloud Shell from the Terminal, and Azure Cloud Shell runs on Linux, I am going to use 'nano' to create the Bicep file manually - feel free to use any of the above options to get the Azure Bicep file into Cloud Shell.

Install Azure Bicep
  1. To install Azure Bicep, run:

    az bicep install

Azure Naming Tool - Install Azure Bicep

Create Azure Bicep File

We will use Nano, copy the Azure Bicep file and Paste it into Nano, and make sure you adjust the parameters to suit your environment before deploying.

  1. In the Azure Cloud Shell, let us create the file by typing:

    nano AzNamingTool_main.bicep
  2. Paste the Azure Bicep file and do any final edits

  3. Now we need to save the file; press Ctrl+X on your keyboard

  4. Press Y to save the file

  5. Verify the file name and press Enter to accept the filename.

    Azure Naming Tool - Create Bicep file

Remember to edit the Azure Bicep parameters, the Resource Names need to be globally unique, so you may run into problems if someone has deployed using the same name!

AzNamingTool_main.bicep
//Related to a Blog Article: https://luke.geek.nz for setting up Azure Naming Tool.
///Parameter Setting
param location string = resourceGroup().location

//Adjust Parameter values to match your naming conventions
param serverfarms_AzNamingTool_ASP_Prod_name string = 'AzNamingTool-ASP-Prod'
param storageAccounts_aznamingstgacc_name string = 'aznaming'

// The following Parameters are used to add Tags to your deployed resources. Adjust for your own needs.
param dateTime string = utcNow('d')
param resourceTags object = {
  Application: 'Azure Naming Tool'
  Version: 'v2.0'
  CostCenter: 'Operational'
  CreationDate: dateTime
  Createdby: 'Luke Murray (luke.geek.nz)'
}

/// Deploys Resources

//Deploys Azure Storage Account for Azure File Share for AzNamingTool persistent data
resource storageAccounts_aznamingstgacc_name_resource 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: '${storageAccounts_aznamingstgacc_name}${uniqueString(resourceGroup().id)}'
  location: location
  tags: resourceTags
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    dnsEndpointType: 'Standard'
    defaultToOAuthAuthentication: false
    publicNetworkAccess: 'Enabled'
    allowCrossTenantReplication: false
    minimumTlsVersion: 'TLS1_2'
    allowBlobPublicAccess: true
    allowSharedKeyAccess: true
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: 'Allow'
    }
    supportsHttpsTrafficOnly: true
    encryption: {
      requireInfrastructureEncryption: false
      services: {
        file: {
          keyType: 'Account'
          enabled: true
        }
        blob: {
          keyType: 'Account'
          enabled: true
        }
      }
      keySource: 'Microsoft.Storage'
    }
    accessTier: 'Hot'
  }
}

// Deploys Azure File Share from the Storage Account above.
resource Microsoft_Storage_storageAccounts_fileServices_storageAccounts_aznamingstgacc_name_default 'Microsoft.Storage/storageAccounts/fileServices@2021-09-01' = {
  parent: storageAccounts_aznamingstgacc_name_resource
  name: 'default'
  properties: {
    shareDeleteRetentionPolicy: {
      enabled: true
      days: 7
    }
  }
}

resource storageAccounts_aznamingstgacc_name_default_aznamingtool 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-09-01' = {
  parent: Microsoft_Storage_storageAccounts_fileServices_storageAccounts_aznamingstgacc_name_default
  name: 'aznamingtool'
  properties: {
    accessTier: 'TransactionOptimized'
    shareQuota: 100
    enabledProtocols: 'SMB'
  }
}

//Deploys the App Service Plan for AzNamingTool
resource serverfarms_AzNamingTool_ASP_Prod_name_resource 'Microsoft.Web/serverfarms@2021-03-01' = {
  name: serverfarms_AzNamingTool_ASP_Prod_name
  tags: resourceTags
  location: location
  sku: {
    name: 'B1'
    tier: 'Basic'
    size: 'B1'
    family: 'B'
    capacity: 1
  }
  kind: 'linux'
  properties: {
    perSiteScaling: false
    elasticScaleEnabled: false
    maximumElasticWorkerCount: 1
    isSpot: false
    reserved: true
    isXenon: false
    hyperV: false
    targetWorkerCount: 0
    targetWorkerSizeId: 0
    zoneRedundant: false
  }
}

Deploy Azure Bicep

Now it's time to create the Azure App Service Plan and Storage account (remove the -WhatIf flag at the end once you have confirmed there are no errors).

  1. Run the following command to deploy the App Service and Storage account into your Resource Group:

    New-AzResourceGroupDeployment -Name 'AzNamingTool-WebApp' -ResourceGroupName 'AzNamingTool-PROD-RG' -TemplateFile .\AzNamingTool_main.bicep -WhatIf

Azure Naming Tool - Deploy Azure Bicep resources

Your resources (App Service, Storage account with File Share) should now be deployed, and we can now close our trusty Cloud Shell.

Deploy and configure WebApp as a Container
  1. Log in to the Microsoft Azure Portal
  2. Click + Create a Resource
  3. Search for: Web App and click Create a Web App
  4. Select your Subscription and Resource Group
  5. Select a name for your Web App (AzNamingTool-AS-Prod)
  6. In Publish, select Docker Container
  7. For Operating system, select Linux
  8. Select the Region that your App Service Plan was deployed to
  9. Select the App Service Plan created earlier, then select Next: Docker

Azure Naming Tool - Web App Deployment

  10. Under Options, select Single Container
  11. Change Image Source to Azure Container Registry
  12. Select your Registry and Azure Naming Tool image, then select Next: Networking

Azure Naming Tool - Registry

  13. If you want to enable Network injection by placing the Web App on your Virtual Network, you can configure this here; we are just going to head to Monitoring.
  14. Application Insights isn't required, but it is recommended (even if it is just for Availability alerting); I always enable it, so select Yes and then Next: Tags.

Azure App Deployment - Application Insights

  15. Enter in any applicable Tags and finally click Review + Create
  16. Click Create
  17. Now that your container is running, we need to mount the Azure file share, so any persistent data is saved.
  18. Open your newly created App Service.
  19. Navigate to Configuration, under Settings in the navigation bar
  20. Click on Path mappings
  21. Click + New Azure Storage Mount
  22. Give the mount a name (i.e. aznamingtool-stg-mnt)
  23. Select Basic Configuration
  24. Select the Storage account created earlier (as part of the Bicep deployment) and select Azure File share
  25. Select your Storage container, enter /app/settings as the mount path, and click Ok

Azure App Service - Mount Azure File Share

  26. Then select Save to save the Path Mappings

Optional: Azure App Service Tweaks

By now, your Azure Naming Tool should be accessible. You don't need to do any of the following, but I recommend them as a bare minimum (depending on your environment and use case).

Enable Always On
  1. In your App Service, select Configuration, then General Settings
  2. Check 'On' under 'Always On'
  3. Click Save
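
The same setting can be applied with PowerShell if you prefer (a sketch, assuming the Web App name used earlier):

# Enables Always On so the container isn't unloaded when idle
Set-AzWebApp -ResourceGroupName 'AzNamingTool-PROD-RG' -Name 'AzNamingTool-AS-Prod' -AlwaysOn $true
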
Configure Firewall

Your App Service will be publicly accessible by default, and although you may want to link it to your network via a Private Endpoint, locking it down by Public IP may be suitable in some scenarios (such as this demo environment).

  1. To lock it down to a specific Public IP, in your App Service, select Networking, then Access restriction.
  2. Add in your Public IP so the App Service is only accessible from your network, and click Ok.
  3. Make sure you select the scm instance and select: Same restrictions, so that the SCM instance isn't also publicly accessible.

Let's take a look!

Now that you have successfully deployed the Azure Naming Tool, let's take a look.

To open your Azure Naming Tool, navigate to your App Service and select Browse (or copy the URL).

When you open it for the first time, you will have the option to create an Admin password; set your Password and select Save. If the Azure File Share wasn't mounted to the Web App, your Password won't be preserved if the App Service crashes or gets reloaded onto another node.

Azure Naming Tool

Click on Generate

You can immediately generate a naming standard out of the box (and it already contains the prefix for the NZ North Azure region!).

Azure Naming Standard - Generate

If you click Reference, you can see the reference criteria that the Azure Naming Tool works with when generating your naming schema; for example, for ApiManagement, we can see that the short name is: API; it supports up to 256 characters, cannot contain a '#', and does not need a globally unique name.

Azure Naming Tool - Reference API Management

If you navigate to Configuration, this is where you can specify any custom changes to suit your Organisation or Organisations (yes, as a Cloud Architect or Consultant, you can use this to generate names for multiple organisations). If you don't like the default prefixes for the Resources, Regions, Environment or even Delimiters, you can adjust them here.

Azure Naming Tool - Configuration

You can also Export and Import a configuration from a previous install on the Configuration pane.

There is also an Azure Naming Tool Swagger API that you can leverage (the API key can be found under Admin) in your Infrastructure as Code or script deployments.
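
As a sketch only, a name request from PowerShell might look like the below - the hostname, API key, route and body fields are placeholders, so check your instance's Swagger page for the exact routes and payloads:

# Hypothetical example - confirm the route and body against your instance's Swagger definition
$headers = @{ 'APIKey' = '00000000-0000-0000-0000-000000000000' }
$body = @{
    resourceType        = 'vm'
    resourceEnvironment = 'prd'
    resourceLocation    = 'aue'
} | ConvertTo-Json

Invoke-RestMethod -Uri 'https://aznamingtool-as-prod.azurewebsites.net/api/ResourceNamingRequests/RequestName' `
    -Method Post -Headers $headers -ContentType 'application/json' -Body $body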

Azure Naming Tool - API

Add Custom DNS servers and set Azure Point to Site VPN to Connect automatically

· 2 min read

By default, the Azure Point to Site VPN will take the DNS servers from the Virtual Network that the Gateway sits in, but due to VNET Peering or custom configuration, you may want to point this to custom DNS servers.

To do this, you need to edit the 'azurevpnconfig.xml' file and reimport the VPN connection.

  1. Open azurevpnconfig.xml in your favourite editor (i.e. Visual Studio Code or Notepad)
  2. Underneath the <name> element (which you can also change, as this is the name that users will see in Windows), add: <clientconfig>.

For example:

  <name>Luke's Azure Point to Site VPN</name>
  <clientconfig>
    <!-- need to specify always on = true for the VPN to connect automatically -->
    <AlwaysOn>true</AlwaysOn>
    <!-- Add custom DNS Servers -->
    <dnsservers>
      <dnsserver>10.100.1.1</dnsserver>
      <dnsserver>10.100.1.2</dnsserver>
    </dnsservers>
    <!-- Add custom DNS suffixes -->
    <dnssuffixes>
      <dnssuffix>.luke.geek.nz</dnssuffix>
    </dnssuffixes>
  </clientconfig>

Save your azurevpnconfig.xml and import it into the Azure VPN client.

Once the VPN has been re-established, your custom DNS settings and suffixes should take effect. If you included the <AlwaysOn> element, the VPN will reconnect automatically after your first connection and after computer reboots.

If you need assistance setting up a Point to Site VPN, check out my article here: Create Azure Point to Site VPN using Microsoft Entra ID authentication