
My Website Setup

· 2 min read

Pretty simple article today regarding 'My website setup'.

I've had a few people ask what CMS (Content Management System) my website runs on - and no it's not running on an Azure App Service!

I am using:

  • GitHub Pages (running Jekyll, a Ruby-based static site generator)
  • Cloudflare for DNS and as a CDN (which also lets me enforce HTTPS and caches the website across the planet)

Because the pages are in a git repository, I have version control across my pages, can roll back or make any changes easily and allow others to submit pull requests for changes, or issues natively.

The pages are written in Markdown. I usually start with a OneNote page containing an idea or blurb, use Forestry to create the initial post, and then manually edit the files to verify the syntax is correct, add tables to the page and fix any issues that may have been introduced (Forestry doesn't support Markdown tables and can make some content look a bit unstructured, but it's usually an easy fix to edit the Markdown manually).
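If you want to preview a post before pushing it, a minimal sketch of serving a Jekyll-based GitHub Pages site locally (this assumes Ruby, Bundler and a Gemfile are already in place, which is typical for a GitHub Pages/Jekyll repository) looks like this:

# Install the site's dependencies (Jekyll, theme, plugins) defined in the Gemfile
bundle install
# Build and serve the site at http://localhost:4000, rebuilding automatically on change
bundle exec jekyll serve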

Having it on GitHub Pages has helped me learn a lot more about using git, source control and versioning methodologies.

Then for comments, I use Disqus and for analytics, Google Analytics and Bing Webmaster Tools.

All in all - I just have to pay for the domain; everything else is free, and because it's stateless, caching content is a lot easier and I don't have to worry about keeping a CMS up to date/patched or a database tuned!

If you're wondering why it's not running on an Azure App Service: I wanted something cheap that I could challenge myself with and learn from. At the end of the day, I wanted a stateless website (static websites in a Storage account weren't available when I set this up), and I wanted to reserve my limited Azure credits so I could actually learn and experiment more. I have no regrets about putting it on GitHub Pages and, depending on your requirements, recommend you try it out!

Azure Public DNS as Code

· 13 min read

The Microsoft Azure ecosystem offers a lot of capabilities that empower individuals and businesses; one of the capabilities that is often overlooked is DNS (Domain Name System).

Azure DNS allows you to host your DNS domain in Azure, so you can manage your DNS records using the same credentials, billing, and support contract as your other Azure services. Zones can be either public or private, where Private DNS Zones (in Managed Preview) are only visible to VMs that are in your virtual network.

You can configure Azure DNS to resolve hostnames in your public domain. For example, if you purchased the contoso.xyz domain name from a domain name registrar, you can configure Azure DNS to host the contoso.xyz domain and resolve www.contoso.xyz to the IP address of your web server or web app.
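As a rough illustration of what that looks like with the Azure CLI (the resource group name and IP address below are placeholders; the zone and record names match the contoso.xyz example above):

# Create the public DNS zone in an existing resource group
az network dns zone create --resource-group Contoso-DNS-RG --name contoso.xyz
# Add an A record so www.contoso.xyz resolves to the web server's public IP
az network dns record-set a add-record --resource-group Contoso-DNS-RG --zone-name contoso.xyz --record-set-name www --ipv4-address 20.0.0.10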

In this article, we are going to focus on Azure Public DNS.

I had my external DNS under source control using Terraform and the Cloudflare provider a few years ago. I wanted to see if I could use source control and continuous integration to do the same thing using Azure DNS and Azure Bicep.

My theory was that I could make a change to a file, commit it, and have the Azure DNS records created or modified automatically. This would allow changes to DNS to be gated, approved, scheduled and audited, and make changes and rollbacks a lot easier, without having to give people access to create DNS records with no auditability. It turns out you can!

Using an Azure DevOps repository and pipeline together with Azure Bicep, we will automatically deploy an Azure Public DNS zone, and any records it contains, to a resource group on a successful commit.

Azure Bicep - Pipeline High Level

Create Azure Public DNS as Code

Prerequisites

  • An Azure DevOps account and permissions to create a service endpoint
  • An Azure subscription that you have at least contributor rights to
  • A git repository (I am going to use a repository in Azure DevOps, but you could use a repository hosted on GitHub)
  • The latest Azure PowerShell modules and Azure Bicep/Azure CLI for local editing
  • A domain name and rights to change the nameservers to point towards Azure DNS

In this article, I will be using an Azure subscription, an Azure DevOps (free) organisation I have access to, and a custom domain I own named 'badasscloud.com'.

I will assume that you have nothing set up but feel free to skip the sections that aren't relevant.

Now that we have the prerequisites sorted, let's set it up...

Create Azure DevOps Repository

  1. Sign in to Azure DevOps

  2. Select + New Project

  3. Give your project a name (i.e., I am going with: DNSAsCode)

  4. Azure DevOps - Create New Project

  5. Click Create (your project will now be created)

  6. Click on Repos

  7. Click on Files

  8. Find the 'Initialize Main branch with a README or gitignore' section and click Initialize.

  9. Azure DevOps - Create New Project

  10. You should now have an empty git repository!

Create Azure DevOps Service Connection

For Azure DevOps to connect to Microsoft Azure, we need to set up a service principal; you can create the service connection directly in Azure DevOps. However, that usually generates a service principal with a name that could be unrecognizable in the future in Azure, and I prefer to create them according to a naming convention, with a name I can look at and instantly recognize the use case for. To do that, we will create it using the Azure CLI.

    1. Open PowerShell

    2. Run the following commands to connect to Azure and create your Service Principal with Contributor access to Azure:

      #Connects to Microsoft Azure
      az.cmd login
      #Set SPN name
      $AppRegName = 'SPN.AzureSubscription.Contributor'
      #Creates SPN and sets SPN as Contributor to the subscription
      $spn = az.cmd ad sp create-for-rbac --name $AppRegName --role 'contributor'
      #Exports Password, Tenant & App ID for better readability - required for Azure DevOps setup
      $spn | ConvertFrom-Json | Select-Object -Property password, tenant, appId
      az.cmd account show --query id --output tsv
      az.cmd account show --query name --output tsv
    3. Make sure you record the password, application ID and the subscription ID/name; you will need this for the next step - you won't be able to view it anywhere else; if you lose it, you can rerun the sp create command to generate a new password. Now that we have the SPN, we need to add the details into Azure DevOps.

    4. Sign in to Azure DevOps

    5. Navigate to the DNS As Code project you created earlier

    6. Click on Project Settings (bottom left-hand side of the window)

    7. Click on Service connections

    8. Click on: Create a service connection

    9. Select Azure Resource Manager

    10. Click Next

    11. Click on: Service Principal (Manual) and click Next

    12. Enter in the following details that we exported earlier from the creation of the service principal:

      • Subscription ID
      • Subscription Name
      • Service Principal ID (the appId)
      • Service principal key (password)
      • Tenant ID
    13. Click Verify to verify that Azure DevOps can connect to Azure; you should hopefully see a Verification succeeded.

    14. Give the Service connection a name (this is the display name that is visible in Azure DevOps)

    15. Add a description (i.e. created by, created on, created for)

    16. Click on Verify and save

    17. You now have a new Service connection!

    18. Azure DevOps - Service Connection

Note: The password for the service principal is valid for one year; when it expires, you can come back into the Azure DevOps service connection and update it here.
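When that time comes, a hedged example of generating a new secret with the Azure CLI (using the SPN display name from earlier; on older Azure CLI versions the parameter is --name rather than --id) is:

# Look up the appId of the service principal created earlier
$appId = az ad sp list --display-name 'SPN.AzureSubscription.Contributor' --query '[0].appId' --output tsv
# Generate a new password/secret for it, then update the service connection with the new value
az ad sp credential reset --id $appId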

Add Azure Bicep to Repository

Now that Azure DevOps has the delegated rights to create resources in Microsoft Azure, we need to add the Azure Bicep for Azure DNS Zone.

I have created the below Azure Bicep file named: Deploy-PublicDNS.bicep

Don't edit the file yet. You can add your DNS records later - after we add some variables into the Azure Pipeline.

This file will:

  • Create a new public Azure DNS zone, if it doesn't exist
  • Add/Remove and modify any records

I have added CNAME, A Record and TXT Records as a base.

Deploy-PublicDNS.bicep
///Variables - Edit; these variables can be set in the script or implemented as part of Azure DevOps variables.
//Set the Domain Name Zone:
param PrimaryDNSZone string = ''
//Azure DNS zones are global resources, so the location is set to 'Global' regardless of the resource group's region.
var location = 'Global'
//Variable array for your A records. Add, remove and amend as needed; any new record needs to be included in {}.
var arecords = [
  {
    name: '@'
    ipv4Address: '8.8.8.8'
  }
  {
    name: 'webmail'
    ipv4Address: '8.8.8.8'
  }
]
//Variable array for your CNAME records. Add, remove and amend as needed; any new record needs to be included in {}.
var cnamerecords = [
  {
    name: 'blog'
    value: 'luke.geek.nz'
  }
]
//Variable array for your TXT records. Add, remove and amend as needed; any new record needs to be included in {}.
var txtrecords = [
  {
    name: '@'
    value: 'v=spf1 include:spf.protection.outlook.com -all'
  }
]

///Deploys your infrastructure below.

//Deploys your DNS Zone.
resource DNSZone 'Microsoft.Network/dnsZones@2018-05-01' = {
  name: toLower(PrimaryDNSZone)
  location: location
  properties: {
    zoneType: 'Public'
  }
}

//Deploys the A records listed in the arecords variable array above.
resource DNSARecords 'Microsoft.Network/dnsZones/A@2018-05-01' = [for arecord in arecords: {
  name: toLower(arecord.name)
  parent: DNSZone
  properties: {
    TTL: 3600
    ARecords: [
      {
        ipv4Address: arecord.ipv4Address
      }
    ]
    targetResource: {}
  }
}]

//Deploys the CNAME records listed in the cnamerecords variable array above.
resource CNAMErecords 'Microsoft.Network/dnsZones/CNAME@2018-05-01' = [for cnamerecord in cnamerecords: {
  name: toLower(cnamerecord.name)
  parent: DNSZone
  properties: {
    TTL: 3600
    CNAMERecord: {
      cname: cnamerecord.value
    }
    targetResource: {}
  }
}]

//Deploys the TXT records listed in the txtrecords variable array above.
resource TXTrecords 'Microsoft.Network/dnsZones/TXT@2018-05-01' = [for txtrecord in txtrecords: {
  name: toLower(txtrecord.name)
  parent: DNSZone
  properties: {
    TTL: 3600
    TXTRecords: [
      {
        value: [
          txtrecord.value
        ]
      }
    ]
  }
}]

output cnamerecords string = CNAMErecords[0].properties.CNAMERecord.cname
output arecords string = arecords[0].ipv4Address

To add the Azure Bicep file into Azure DevOps, you can commit it into the git repository; see a previous post on 'Git using Github Desktop on Windows for SysAdmins' to help get started. However, at this stage, I will create it manually in the portal.
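If you prefer the command line over the portal editor, a rough sketch of the same thing (the clone URL is a placeholder; use the one shown under Repos > Clone in your project) looks like this:

# Clone the Azure DevOps repository locally
git clone https://dev.azure.com/<organisation>/DNSAsCode/_git/DNSAsCode
cd DNSAsCode
# Add the Bicep file, commit it and push it back to the main branch
git add Deploy-PublicDNS.bicep
git commit -m "Add Deploy-PublicDNS.bicep"
git push origin main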

  1. Sign in to Azure DevOps
  2. Navigate to the DNS As Code project you created earlier
  3. Click on Repos
  4. Click on Files
  5. Click on the Ellipsis on the right-hand side
  6. Click New
  7. Click File
  8. Azure DevOps - New File
  9. Type in the name of your file (including the bicep extension), i.e. Deploy-PublicDNS.bicep
  10. Click Create
  11. Copy the contents of the Azure Bicep file supplied above and paste them into the Contents of Deploy-PublicDNS.bicep in Azure DevOps
  12. Azure DevOps - Azure Bicep
  13. Click Commit
  14. Click Commit again
  15. While we are here, let's delete the README.md file (as it will cause issues with the pipeline later on), click on the README.md file.
  16. Click on the Ellipsis on the right-hand side
  17. Click Delete
  18. Click Commit
  19. You should now only have your: Deploy-PublicDNS.bicep in the repository.

Create Azure DevOps Pipeline

Now that we have the initial Azure Bicep file, it's time to create our pipeline that will do the heavy lifting. I have created the base pipeline that you can download, and we will import it into Azure DevOps.

azure-pipelines.yml
# Variable 'location' was defined in the Variables tab
# Variable 'PrimaryDNSZone' was defined in the Variables tab
# Variable 'ResourceGroupName' was defined in the Variables tab
# Variable 'SPN' was defined in the Variables tab
trigger:
  branches:
    include:
    - refs/heads/main
jobs:
- job: Job_1
  displayName: Agent job 1
  pool:
    vmImage: ubuntu-latest
  steps:
  - checkout: self
  - task: AzureCLI@2
    displayName: 'Azure CLI'
    inputs:
      connectedServiceNameARM: $(SPN)
      scriptType: pscore
      scriptLocation: inlineScript
      inlineScript: |
        az group create --name $(ResourceGroupName) --location $(location)
        az deployment group create `
          --template-file $(Build.SourcesDirectory)/Deploy-PublicDNS.bicep `
          --resource-group $(ResourceGroupName) `
          --parameters PrimaryDNSZone=$(PrimaryDNSZone)
      powerShellErrorActionPreference: continue

This pipeline will run through the following steps:

  • Spin up an Azure-hosted agent running Ubuntu (it already has the Azure CLI and PowerShell setup)
  • Create the Azure resource group to place your DNS zone into (if it doesn't already exist)
  • Finally, do the actual Azure Bicep deployment to create your Primary DNS zone resource and, if necessary, modify any records (a sketch for previewing this deployment locally follows below).
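If you want to preview a change from your own machine before committing it, a minimal sketch (assuming the Azure CLI and Bicep are installed locally, you are logged in, and the resource group already exists; the names below are the examples used in this article) is:

# Preview what the Bicep deployment would change, without deploying anything
az deployment group what-if --resource-group DNS-PRD-RG --template-file ./Deploy-PublicDNS.bicep --parameters PrimaryDNSZone=badasscloud.com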

Copy the contents of the YAML pipeline above, and let's import it to Azure DevOps.

  1. Sign in to Azure DevOps
  2. Navigate to the DNS As Code project you created earlier
  3. Click on Pipelines
  4. Click on the Create Pipeline
  5. Select Azure Repos Git (YAML)
  6. Select your DNSAsCode repository
  7. Select Starter pipeline
  8. Overwrite the contents of the starter pipeline with the YAML file supplied
  9. Azure DevOps - YAML
  10. Click on the arrow next to Save and Run and select Save
  11. Select Commit directly to the main branch
  12. Click Save
  13. You may get an error about the trigger. You can ignore it - we will need to set the variables and trigger now.
  14. Click on Pipelines, select your newly created pipeline
  15. Select Edit
  16. Click Variables
  17. Click on New Variable
  18. We need to add four variables. To make the deployment more environment-specific, add the following variables into Azure DevOps (these variables will be accessible by this pipeline only).
| Variable | Note |
| --- | --- |
| location | The location you want to deploy the resource into, i.e. 'Australia East' |
| PrimaryDNSZone | The name of the domain you want the public zone to be, i.e. badasscloud.com |
| ResourceGroupName | The name of the Resource Group that the DNS Zone resource will be deployed into, i.e. DNS-PRD-RG |
| SPN | The name of the Service Connection that we created earlier to connect Azure DevOps to Azure, i.e. SPN.AzureDNSCode |
  1. Azure DevOps Variables

  2. Click Save

Test & final approval of Azure DevOps Pipeline

Now that the Azure Pipeline has been created and the variables set, it's time to test. Be warned: this will run an actual deployment to your Azure subscription!

We will run a once-off deployment to grant the pipeline access to the service principal created earlier and verify that it works.

  3. Sign in to Azure DevOps

  4. Navigate to the DNS As Code project you created earlier

  5. Click on Pipelines

  6. Click on your Pipeline

  7. Select Run pipeline

  8. Click Run

  9. Click on Agent job 1

  10. You will see a message: This pipeline needs permission to access a resource before this run can continue

  11. Click View

  12. Azure DevOps - badasscloud.com DNS deployment

  13. Click Permit

  14. Click Permit again, to authorise your SPN access to your pipeline for all future runs

  15. Your pipeline will be added to the queue and once an agent becomes available will start to run.

As seen below, there were no resources before my deployment and the Azure Pipeline agent kicked off and created the resources in the Azure portal.

Note: You can expand the Agent Job to see the steps of the job, I hid it as it revealed subscription ID information etc during the deployment.

Azure DevOps - badasscloud.com DNS deployment

Remember to update your nameserver records for your domain to point towards the nameserver entries in the Azure DNS zone resource, to use Azure DNS!
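You can pull the assigned name servers straight from the zone resource; for example, with the Azure CLI (using the resource group and zone names from this article):

# List the Azure DNS name servers for the zone, ready to copy into your registrar
az network dns zone show --resource-group DNS-PRD-RG --name badasscloud.com --query nameServers --output tsv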

Edit the Bicep file

Now that you have successfully deployed your Azure Bicep file, you can go into the Bicep file and update the A and CNAME records to match your own environment. Any new change to this repository will automatically trigger continuous integration and deployment; you can override this behaviour by editing the Pipeline, clicking Edit Trigger and unselecting 'Enable continuous integration'.

Each variable (var object) (cnamerecords, arecords) is an array enclosed in brackets, which allows you to add multiple records; for example, if I wanted to add another CNAME record, it would look like this:

//Variable array for your CNAME records. Add, remove and amend as needed; any new record needs to be included in {}.
var cnamerecords = [
  {
    name: 'blog'
    value: 'luke.geek.nz'
  }
  {
    name: 'fancierblog'
    value: 'azure.com'
  }
]

Simply add another object under the first, as long as it is included in the brackets; upon deployment, Azure Bicep will parse the variable array and create/modify the DNS records for each entry. You only ever need to edit the content in the variable, without touching the actual resource deployment.

As records are added and removed over time, you will build up a commit history, and with the power of Azure DevOps, you can implement change scheduling and approvals!

Hopefully, this article helps you achieve Infrastructure as Code for your Azure DNS resource; the same concept can be applied to other resources using Azure Bicep as well.

Controlled Chaos in Azure using Chaos Studio

· 13 min read

Chaos engineering has been around for a while; Netflix runs their own famous Chaos Monkey, supposedly running 24/7, taking down their resources and pushing them to the limit continuously; it almost sounds counter-intuitive – but it's not.

Chaos engineering is defined as “the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production” (Principles of Chaos Engineering, http://principlesofchaos.org/). In other words, it’s a software testing method focusing on finding evidence of problems before they are experienced by users.

Chaos engineering is a methodology that helps developers attain consistent reliability by hardening services against failures in production. Another way to think about chaos engineering is that it's about embracing the inherent chaos in complex systems and, through experimentation, growing confidence in your solution's ability to handle it.

A common way to introduce chaos is to deliberately inject faults that cause system components to fail. The goal is to observe, monitor, respond to, and improve your system's reliability under adverse circumstances. For example, taking dependencies offline (stopping API apps, shutting down VMs, etc.), restricting access (enabling firewall rules, changing connection strings, etc.), or forcing failover (database level, Front Door, etc.), is a good way to validate that the application is able to handle faults gracefully.

Introducing controlled chaos tools such as Chaos Monkey and now Azure Chaos Studio allows you to put pressure on, and in some cases take down, your services to learn how they react under strain and to identify areas of improvement, such as resiliency and scalability.

Chaos

Azure Chaos Studio (currently in Preview and only supported in a handful of regions right now) is an enabler for 'controlled Chaos' in the Microsoft Azure ecosystem. Using the same tool that Microsoft uses to test and improve their own services – you can too!

Chaos Studio works by creating Experiments (i.e., Faults/Capabilities) that run against Targets (your resources, whether they are agent or service-based).

There are two types of methods you can use to target your resources:

  • Service-direct
  • Agent-based

Service-direct faults are tied into the Azure fabric and put pressure on your resources from the outside (i.e., they are supported on resources that don't need an agent, such as PaaS resources and Network Security Groups). For example, a service-direct capability may be to add or remove a security rule from your network security group to inject a fault.

Agent-based faults rely on an installed agent; these target resources such as Virtual Machines and Virtual Machine Scale Sets. Agent-based targets use a user-assigned managed identity to manage an agent on your virtual machines and wreak havoc by running capabilities such as stopping services and putting memory and disk pressure on your workloads.

Just a word of warning before you proceed to allow chaos to reign in your environment: make sure it is done out of hours or, better yet, against development or test resources. Also make sure that autoscaling is disabled on any resources that support it – or you might suddenly find ten more instances of that resource running (unless, of course, you're testing that autoscaling is working)! 😊

In my test setup, I have the following already pre-created that I will be running my experiments against:

  • Virtual Machine Scale set (running Windows with two instances)
  • Single Virtual Machine (running Windows) to test shutdown against

The currently supported resource types of Azure Chaos Studio can be found 'here'.

Setup Azure Chaos Studio

Create Managed Identity

Because we will use Agent-based capabilities to generate our Faults, I needed to create a Managed Identity to give Chaos Studio the ability to wreak havoc on my resources!

  1. In the Azure Portal, search for Managed Identities
  2. Click on Create
  3. Select the Subscription containing the resources that you want to test against
  4. Select your Resource Group to place the managed identity in (I suggest creating a new Resource Group, as your Chaos experiments may have a different lifecycle than your resources, but it's just a preference; I will be placing mine in the Chaos Studio resource group so I can quickly delete it later).
  5. Select the Region of your resources
  6. Type in a name (this will be the identity that you will see in logs running these experiments, so make sure it's something you can identify with)
  7. Azure Portal - Create User Management Identity
  8. Click Next: Tags
  9. Make sure you enter appropriate tags to make sure that the resource can be identified and tracked, and click Review + Create
  10. Azure Portal Tags
  11. Verify that everything looks good and click Create to create your User Assigned Managed identity.

Create Application Insights

Now, it's time to create an Application Insights resource. Application Insights is where the logs of the experiments will go, so you can see the faults and their behaviours.

  1. In the Azure Portal, search for Application Insights
  2. Click on Create
  3. Select the Subscription containing the resources that you want to test against
  4. Select your Resource Group to place the Application Insights resource into (I suggest creating a new Resource Group, as your Chaos experiments may have a different lifecycle than your resources, but it's just a preference, I will be placing mine in the Chaos Studio resource group so I can easily delete it later).
  5. Select the Region the resources are in
  6. Type in a name
  7. Select your Log Analytics workspace you want to link Application Insights to (if you don't have a Log Analytics workspace, you can create one 'here').
  8. Azure Portal - Application Insights
  9. Click Tags
  10. Make sure you enter appropriate tags to make sure that the resource can be identified and tracked, and click Review + Create
  11. Verify that everything looks good and click Create to create your Application Insights.

Setup Chaos Studio Targets

It is now time to add the resources targets to Chaos Studio

  1. In the Azure Portal, search for Chaos Studio
  2. On the left-hand side blade, select Targets
  3. Azure Chaos Studio
  4. As you can see, I have a Virtual Machine Scale Set and a front-end Network Security Group.
  5. Select the checkbox next to Name to select all the Resources
  6. Select Enable Targets
  7. Azure Chaos Studio
  8. Select Enable service-direct targets (All resources)
  9. Enabling the service-direct targets will then add the capabilities supported by Service-direct targets into Chaos Studio for you to use.
  10. Once completed, I will select the scale set and click Enable Target
  11. Then finally, Enable agent-based targets (VM, VMSS)
  12. This is where you link the user-managed identity, and Application Insights created earlier
  13. Select your Subscription
  14. Select your managed identity
  15. Select Enabled for Application Insights and select your Application Insights account. The instrumentation key should be populated automatically.
  16. Azure Chaos Studio - Enable targets
  17. If your instrumentation key isn't filled in, you can find it on the Overview pane of the Application Insights resource.
  18. Click Review + Enable
  19. Review the resources you want to enable Chaos Studio to target and select Enable
  20. Finally, you should now be back at the Targets pane; select Manage actions, make sure that all actions are ticked, and click Save
  21. Azure Chaos Studio Capabilities

Configure and run Azure Chaos Studio

Action exclusions

There may be actions that you don't want to be run against specific resources; an example might be you don't want anyone to kill any processes on a Virtual Machine.

  1. In the Target pane of Chaos Studio, select Actions next to the resource
  2. Unselect the capability you don't want to run on that resource
  3. Select Save
  4. Azure Chaos Studio Actions

Configure Experiments

An experiment is a collection of capabilities to create faults, put pressure on your resources, and cause Chaos that will run against your target resources. These experiments are saved so you can run them multiple times and edit them later, although currently, you cannot reassign the same experiments to other resources.

Note: If you name an Experiment the same as another experiment, it will replace the older Experiment with your new one and retain the previous history.

  1. In the Azure Portal, search for Chaos Studio.
  2. On the left-hand side blade, select Experiments
  3. Click + Create
  4. Select your Subscription
  5. Select your Resource Group to save the Experiment into
  6. Type in a name for your Experiment that makes sense; in this case, we will put some Memory pressure on the VM scale set.
  7. Select your Region
  8. Click Next: Experiment Designer
  9. Using Experiment Designer, you can design your Faults; you can have multiple capabilities hit a resource with expected delays, i.e., you can have Memory pressure on a VM for 10 minutes, then CPU pressure, then shutdown.
  10. We are going to select Add Action
  11. Then Add Fault
  12. I am going to select Physical Memory pressure
  13. Leave the duration to 10 minutes
  14. Because this will go against my VM scale set, I will add in the instances I want to target (if you aren't targeting a VM Scale set, you can leave this blank, you can find the instance ID by going to your VM Scale set click on Instances, click on the VM instance you want to target and you should see the Instance ID in the Overview pane)
  15. Azure Chaos Studio - Add fault
  16. Select Next: Target resources
  17. Select your resources (you will notice as this is an Agent-based capability, only agent supported resources are listed)
  18. Select Add
  19. I am then going to Add delay for 5 Minutes
  20. Then add an abrupt VM shutdown for 10 minutes (Chaos Studio will automatically restart the VM after the 10-minute duration).
  21. Azure Chaos Studio create experiment
  22. As you can see with the Branches (items that will run in parallel) and actions, you can have multiple faults running at once in parallel by using branches or one after the other sequentially.
  23. Now that we are ready with our faults, we are going to click Review + Create
  24. Click Create

Note: I had an API error; after some investigation, I found it was having problems with the '?' in my experiment name, so I removed it and continued to create the Experiment.

Assign permissions for the Experiments

Now that the Experiment has been created, we need to give rights to the user-assigned managed identity created earlier (and/or the system-assigned managed identity that was created along with the Experiment, for service-direct experiments).

I will assign permissions to the Resource Group that the VM Scale Set exists in, but you might be better off applying the rights to the individual resource for more granular control. You can see suggested roles on the 'Supported resource types and role assignments for Chaos Studio' Microsoft page.

  1. In the Azure Portal, click on the Resource Group containing the resources you want to run the Experiment against
  2. Select Access control (IAM)
  3. Click + Add
  4. Click Add Role Assignment
  5. Click Reader
  6. Click Next
  7. Select Assign access to Managed identity
  8. Click on + Select Members
  9. Select the user-assigned managed identity
  10. Click Review and assign.
  11. Because the shutdown is a service-direct fault, go back and give the Experiment's system-assigned managed identity Virtual Machine Contributor rights, so it has access to shut down the VM (a command-line sketch of a role assignment follows below).
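As a rough command-line equivalent for the managed identity part (the identity and resource group names below are illustrative; adjust them, and the role, to your own environment):

# Look up the principal ID of the user-assigned managed identity created earlier
$principalId = az identity show --resource-group ChaosStudio-RG --name chaos-identity --query principalId --output tsv
# Grant it Reader over the resource group containing the target resources
az role assignment create --assignee $principalId --role 'Reader' --resource-group VMScaleSet-RG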

Run Experiments

Now that the Experiment has been created, it should appear as a resource in the resource group you selected earlier; if you open it, you can see the Experiment's History, Start, and Edit buttons.

  1. Click Start
  2. Azure Chaos studio - Run experiment
  3. Click Ok to start the Experiment (and place it into the queue)
  4. Click on Details to see the experiment progress (and any errors), and if it fails one part, it may move to the next step depending on the fault.
  5. Azure Chaos studio - Run experiment
  6. Azure Chaos Studio should now run rampant and do what it does best – cause chaos!

This service is still currently in Preview. If you have any issues, take a look at the: Troubleshoot issues with Azure Chaos Studio.

Monitor and Auditing of Azure Chaos Studio

Now that Azure Chaos Studio is in use by your organization, you may want to know what auditing is available, along with reporting to Application Insights.

Azure Activity Log

When an Azure Chaos Studio experiment has touched a resource, there will be an audit trail in the Azure activity log of that resource; here, you can see that 'WhatMemory', which is the Name of my Chaos Experiment, has successfully powered off and on my VM.

Azure Activity Log - Azure Chaos Studio

Azure Alerts

It is easy to set up alerts for when a Chaos experiment kicks off; to create an Azure Alert, do the following.

  1. In the Azure Portal, click on Azure Monitor
  2. Click on Alerts
  3. Click + Create
  4. Select Alert Rule
  5. Click Create resource
  6. Filter your resource type to Chaos Experiments
  7. Filter your alert to Subscription and click Done
  8. Click Add Condition
  9. Select: Starts a Chaos Experiment
  10. Make sure that 'Event initiated by' is set to: (All services and users)
  11. Click Done
  12. Click Add Action Group
  13. If you have one, assign an action group (these are who and how the alerts will get to you). If you don't have one, click: + Create an action group.
  14. Specify a resource group to hold your action groups (usually a monitor or management resource group)
  15. Type the Action Group name
  16. Type the Action group Display name
  17. Click Next: Notifications
  18. Select Notification Type
  19. Select email
  20. Select Email
  21. Type in your email address to be notified
  22. Click ok
  23. Type in the Name of the mail to be a reference in the future (i.e. Help Desk)
  24. Click Review + Create
  25. Click Create to create your Action group
  26. Type in your rule name (i.e. Alert – Chaos Experiment – Started)
  27. Type in a description
  28. Specify the resource group to place the alert in (again, usually a monitor or management resource group)
  29. Check Enable alert rule on creation
  30. Click Create alert rule

Note: Activity Log alerts are hidden types; they are not shown in the resource group by default, but if you check the: Show hidden types box, they will appear.

Azure Activity Log - Azure Chaos Studio

Microsoft Entra ID Application Proxy Implementation

· 11 min read

Are you running internal web-based applications that you want to give remote users access to securely, without the need for a VPN or firewall? Do you want to enforce or use Azure Conditional Access policies to protect and manage access?

Let me introduce the Microsoft Entra ID Application Proxy...

Application Proxy is a feature of Azure AD that enables users to access on-premises web applications from a remote client. Application Proxy includes both the Application Proxy service which runs in the cloud, and the Application Proxy connector, which runs on an on-premises server. Azure AD, the Application Proxy service, and the Application Proxy connector work together to securely pass the user sign-on token from Azure AD to the web application. Application Proxy also supports single sign-on.

Application Proxy is recommended for giving remote users access to internal resources. Application Proxy replaces the need for a VPN or reverse proxy.

Overview

The Microsoft Entra ID Application Proxy has been around for a few years but appears to be a hidden gem; the Application Proxy allows users (by using Microsoft Entra ID and one or more Application Proxy Connectors) to connect to internally hosted web applications, with the connector relaying the traffic.

Azure Application Proxy - Network Diagram

Application Proxy supports the following types of applications:

  • Web applications
  • Web APIs that you want to expose to rich applications on different devices
  • Applications hosted behind a Remote Desktop Gateway
  • Rich client apps that are integrated with the Microsoft Authentication Library (MSAL)

Azure Application Proxy can often be overlooked to solve your business requirements without the need to implement costly third-party firewalls (it also doesn't have to be an on-premises workload, for example, if the web application is running on a VM in Azure, it will also work).

The Azure Application proxy connector is a lightweight agent installed on a Windows Server machine that is logically close to the backend service that you want to deliver through the proxy.

The Connector gives access to and relays the information to the Application proxy service in Microsoft Azure via HTTP/HTTPS as long as it has access to the following:

| URL | Port | How it's used |
| --- | --- | --- |
| *.msappproxy.net, *.servicebus.windows.net | 443/HTTPS | Communication between the connector and the Application Proxy cloud service |
| crl3.digicert.com, crl4.digicert.com, ocsp.digicert.com, crl.microsoft.com, oneocsp.microsoft.com, ocsp.msocsp.com | 80/HTTP | The connector uses these URLs to verify certificates. |
| login.windows.net, secure.aadcdn.microsoftonline-p.com, *.microsoftonline.com, *.microsoftonline-p.com, *.msauth.net, *.msauthimages.net, *.msecnd.net, *.msftauth.net, *.msftauthimages.net, *.phonefactor.net, enterpriseregistration.windows.net, management.azure.com, policykeyservice.dc.ad.msft.net, ctldl.windowsupdate.com, www.microsoft.com/pkiops | 443/HTTPS | The connector uses these URLs during the registration process. |
| ctldl.windowsupdate.com | 80/HTTP | The connector uses this URL during the registration process. |
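Before installing the connector, you can sanity-check outbound connectivity from the server with a couple of quick PowerShell tests (the two endpoints below are just examples from the table above):

# Check HTTPS connectivity to the Application Proxy / sign-in endpoints
Test-NetConnection -ComputerName login.windows.net -Port 443
# Check HTTP connectivity used for certificate revocation checks
Test-NetConnection -ComputerName crl3.digicert.com -Port 80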

Setup Azure Application Proxy

I will set up an Azure Application Proxy to grant access to my Synology NAS (Network Attached Storage) device web page in this guide.

Although I am using my local NAS web administration page, it can be any webpage (Unifi Controller, hosted on Apache, IIS etc.) accessible from the connector.

  • I have a Windows Server 2022 Domain Controller.
  • Synology NAS (not domain joined, but accessible on the network via a DNS record from the domain)
  • Microsoft 365 Developer subscription with appropriate licenses

Pre-requisites for Azure Application Proxy setup

The following resources and rights will be needed to set up Azure Application Proxy:

  • An Microsoft Entra ID tenant
  • A minimum of Application Administrator rights is required to set up the Application and user and group assignments.
  • A server running Windows Server 2012 R2 or above to install the Application Proxy connector on (and the permissions to install)
  • If you are using a third-party domain (you will need a public SSL certificate) and, of course, the ability to edit external DNS records, the domain will need to be added to Microsoft Entra ID as a custom domain in order to be used.
  • Microsoft Entra ID Premium P1 license or M365 Business Premium/E3 license for each user using Microsoft Entra ID Application Proxy.

Microsoft Entra ID Application Proxy Licensing

(Note: Normal Azure AD service limits and restrictions apply).

I will be configuring the Azure Application Proxy on a domain controller running Windows Server 2022.

Disable IE Enhanced Security Configuration

The Azure Application Proxy connector requires you to log in to Microsoft Azure, and I will be installing this on a Windows Server 2022 domain controller; if Enhanced Security Configuration is enabled (as it should be), you will have problems authenticating to Microsoft Azure, so the easiest thing is to turn it off temporarily.

  1. Open Server Manager
  2. Click on Local Server
  3. Click on: IE Enhanced Security Configuration
  4. Select Off for: Administrators
  5. Close Microsoft Edge (if you have it opened)
  6. Disable IE Enhanced Security Configuration

Install Azure Application Proxy Connector

  1. Login to Azure Portal (on the server that you want to install the Connector on)
  2. Navigate to: Microsoft Entra ID
  3. Select Application Proxy
  4. Azure Portal - Application Proxy
  5. Click on: Download connector service.
  6. Accept the system requirements and click Accept Terms & Download
  7. A file named: 'AADApplicationProxyConnectorInstaller.exe' should have been downloaded. Run it.
  8. Select: I agree to the license terms and conditions and select Install
  9. Microsoft Entra ID Application Proxy Connector Installation
  10. Wait for the Microsoft Entra ID sign-in window to display and log in with a Microsoft Entra ID account that has Application Administrator rights.
  11. The Microsoft Entra ID Application Proxy Connector will now be registered in your Microsoft Entra ID tenancy.
  12. Microsoft Entra ID Application Proxy Connector Installation
  13. Click Close
  14. Now re-enable IE enhanced security configuration.

You should now see two new services appear in services as Automatic (Delayed Start):

  • WAPCsvc - Microsoft AAD Application Proxy Connector
  • WAPCUpdaterSvc - Microsoft AAD Application Proxy Connector Updater

And the following processes running:

  • ApplicationProxyConnectorService
  • ApplicationProxyConnectorUpdateService

ApplicationProxyConnectorService

If you are running Server Core, the Microsoft Entra ID Application Proxy connector can be installed via PowerShell.

The Azure Application Proxy Connector agent gets updated automatically when a new major version is released by Microsoft.

Configure Connector Group

Now that you have created the Connector, the Application Proxy has put our Connector in a group that has defaulted to Asia; because you can have more than one Application Proxy Connector for redundancy and different applications, we will create a new Connector Group that is set to use the Australia region. If Asia works for you – feel free to skip this step.

  1. Login to Azure Portal (on any PC/server)
  2. Navigate to: Microsoft Entra ID
  3. Select Application Proxy
  4. You should now see: Default and your Region
  5. If you expand the Default group, you will see your Connector:
  6. Azure AD Application Proxy Connector Groups
  7. Click on + New Connector Group
  8. Give it a name (i.e., On-premises)
  9. Select the Connector you had earlier and select the region closest to you (currently, the following regions can be chosen: Asia, Australia, Europe, North America)
  10. Azure AD Application Proxy - New Connector Group
  11. Click + Create
  12. Clicking create will create your new On-premises connector group and add the Connector to the group.

Configure your Azure Application Proxy Application

Now that you have your Connector set up, it's time to set up your application.

  1. Login to Azure Portal (on any PC/server)
  2. Navigate to: Microsoft Entra ID
  3. Select Application Proxy
  4. Click on: + Configure an app
  5. Fill in the details that match your application:
  • Name: This is the application that users will see (i.e. I am going with Pizza, which is the name of my NAS)
  • Internal URL: This is the internal URL used to access your application inside the network (in my example, it is: http://pizza.corp.contoso.com/)
  • External URL: This is the external URL that will be created so that users can access the application from outside your network; I will go with Pizza. Note this URL down.
  • Pre-Authentication: You don't have to authenticate with Microsoft Entra ID; you can use Passthrough, but that is not something I would recommend without delving into requirements and testing – I am going to select: Microsoft Entra ID.
  • Connector Group: Select the connector group you created earlier or that your Connector is signed to.
  • Leave all Additional Settings as default – they can be changed later if you need to.
    1. Azure Application Proxy
    2. Verify that everything is filled out correctly and, click + Add
    3. Azure Application Proxy has now created a new Enterprise Application for you based on the name mentioned earlier; if you navigate to the external URL, you should get a prompt similar to below:
    4. Azure AD Login Error
    5. It is now time to assign the permissions for users to access the Application via Microsoft Entra ID!

Assign rights to your Azure Application Proxy Application

  1. Login to Azure Portal (on any PC/server)
  2. Navigate to: Microsoft Entra ID
  3. Select Enterprise Applications
  4. Find the application that was created earlier by the Azure Application Proxy service.
  5. Microsoft Entra ID, Enterprise Application
  6. Click on the Application
  7. Click on: Users and Groups
  8. Click Add Assignment
  9. Add a user or group (preferred) you want to have access to this application.
  10. Click Assigned
  11. Azure AD Enterprise Applications - User & Group Assignment
  12. Click on Application Proxy
  13. Here you can see and edit the information you created earlier when you created the application, copy the External URL
  14. Open Microsoft Edge (or another browser of your choice)
  15. Paste in the External URL
  16. Log in with the Microsoft Entra ID account that was assigned to the Enterprise application.
  17. You should now have access to your on-premises web application from anywhere in the world, and because you are using Microsoft Entra ID, your conditional access policies and restrictions will be in effect:
  18. Synology Login

Note: Because the Synology web interface was running on port 5000, I had to go back and add the port to the internal URL, as the Application Proxy was attempting to route to the incorrect port. Note: You may also notice that Microsoft has supplied a *.msappproxy.net certificate, even if your backend service doesn't have one.

Setup Password-based Single-Sign on

Azure Application Proxy supports various single sign-on methods, including Kerberos SPN integration.

However, my Synology NAS uses standalone accounts, so I will set up Password-based single sign-on, allowing the MyApps extension to store my credentials (if you want single sign-on using password-based sign-in, every user will need to have this extension configured).

  1. Download and install the MyApps Secure Sign-in extension
  2. Log in using your Microsoft account to the MyApps extension
  3. Azure App Proxy
  4. Login to Azure Portal (on any PC/server)
  5. Navigate to: Microsoft Entra ID
  6. Select Enterprise Applications
  7. Find the application that was created earlier by the Azure Application Proxy service.
  8. Click on Single sign-on
  9. Select Password-based
  10. Azure Portal - Single Signon
  11. Type in the URL of the authentication webpage and click Save
  12. Azure App Proxy
  13. The Azure AD Application Proxy didn't find my sign-in login and password fields, so I have to manually configure them, select: Configure Pizza Password Single Sign-on Settings.
  14. Select: Manually detect sign-in fields
  15. Select Capture sign-in fields
  16. Azure Application Proxy - Configure Sign-on
  17. Your MS Edge Extension should show Capture Field:
  18. Azure Application Configure Extension
  19. Enter in your username
  20. Press Enter
  21. Enter in your password
  22. Select the MS Apps extension and select Save
  23. Navigate back to the Azure Portal
  24. Select 'I was able to sign in.'
  25. If successful, Azure AD should now have mapped the fields:
  26. Azure Portal - Signin Fields
  27. Click Save
  28. Next time you log in to the application, the My Apps Secure Sign-in Extension will have cached the credentials. It should automatically log you into the application, meaning you should only log in once with your Azure AD credentials.

Access your Azure Application Proxy published application

  1. You can now go to My Apps (microsoft.com), and you will see your application.
  2. M365 Waffle
  3. Your application will also appear in the Microsoft 365 Waffle (it may take up to an hour to appear):
  4. M365 Waffle

I recommend you go into the Enterprise Application and upload a better image/logo so your users can quickly tell it apart.

Git using Github Desktop on Windows for SysAdmins

· 9 min read

Git (software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code, allowing versioning, source control and enablement of continuous integration and deployment) has been around for years (development began and the first release appeared in 2005, created by Linus Torvalds).

Although primarily driven and consumed by software developers, it is now a staple of everyday life for IT professionals of many disciplines (i.e. Operations, Delivery), even if a git repository is just used to store your PowerShell scripts (hint – it should be!).

You don't have to know every single git command line syntax to use Git.

Tools such as Visual Studio Code allow you to utilize git source control efficiently, and of course, you can use Git directly from the command line; however, sometimes you want an easy way to leverage Git through a point-and-click interface. There are a lot of tools out there that give you easy access to Git, but today I will concentrate on GitHub Desktop.

If you are looking for something a bit more powerful (especially if you want to work with submodules), then I suggest Atlassian Sourcetree.

Introducing Github Desktop... "Focus on what matters instead of fighting with Git. Whether you're new to Git or a seasoned user, GitHub Desktop simplifies your development workflow."

Github Desktop - Overview

Github Desktop gives you a clean, light and easy to use tool to work with git repositories that is constantly kept up to date and improved upon!

Although Github Desktop is published by Github – this doesn't mean you cannot use a git repository hosted by another provider, such as Azure DevOps.

This article assumes that you have a Git repository initialized already; you can create free repositories on Azure DevOps or GitHub. Microsoft owns both Azure DevOps and GitHub; personally, I have moved from Azure DevOps to GitHub for my git repositories but still utilize Azure DevOps pipelines.

Git High level workflow

Install Github Desktop

Installation of Github Desktop is pretty simple, but assuming you have rights to install the software:

  1. In your web browser, navigate to Github Desktop homepage and click on: Download
  2. Github Desktop - Download
  3. Once it's downloaded, you should have a file such as GitHubDesktopSetup-x64.exe (it should only take a few seconds, the file is about 109 MB at the time this article was written), then run it to install.
  4. Github Desktop - Installing

Congratulations, you have now installed Github Desktop!

Add your Azure DevOps repository

If you have an Azure DevOps git repository, then follow the steps below – if you have chosen to go: Github, then feel free to skip this section for the next.

  1. Sign in to Azure DevOps
  2. Navigate to the project you want to add to Github Desktop
  3. Click on Repos, Files
  4. Azure DevOps - Repo
  5. In the address bar, you will see your URL, and it should look like this: https://dev.azure.com/%username%/_git/%projectname%
  6. Copy the URL and open Github Desktop
  7. Click on File and Clone a repository
  8. Click on URL
  9. Github Desktop - Clone a Repository
  10. Paste in the repository URL you copied earlier.
  11. Select the Local path of where you want the Git repository to be saved locally on your device
  12. Now we need to generate git credentials to clone your repository, navigate back to Azure DevOps.
  13. Azure DevOps - Clone
  14. Click on Generate Git Credentials
  15. Azure DevOps will now generate the username and password that will be used by Github Desktop to authenticate with your git repository.
  16. Navigate back to Github Desktop
  17. Click Clone
  18. Enter in the username and password that you received from the git credentials, generated by Azure DevOps and click Clone.
  19. Github Desktop should now clone your repository locally.

Congratulations, you have set up an Azure DevOps git repository using Github Desktop.

Add your Github repository.

If you have an Azure DevOps git repository, follow the steps above – otherwise, follow these steps to add your Github repository into Github Desktop.

  1. Open Github Desktop
  2. Click File
  3. Click Clone repository….
  4. Github Desktop - Clone repository
  5. On the Github.com tab, enter your Github credentials
  6. Select the Local path of where you want the Git repository to be saved locally on your device
  7. Click Clone

Congratulations, you have now set up a Github git repository using Github Desktop.

Using Github Desktop

Now that you have a git repository cloned locally, it's time to use it.

Initial Commit

Once you have a file created and saved into the folder of your git repository, i.e. a PowerShell script, you will want to commit it to the git repository.

  1. Open Github Desktop
  2. Click on: Current repository to make sure your repository is selected.
  3. Github Desktop - Initial Commit
  4. In my example, I have created a new file called: HelloWorld.ps1 in my PowerShell repository.
  5. What you can see in the screenshot below is the various components that make up the Github Desktop; you can see the changed file (i.e. the new file), the contents of the file and what will be added, the commit title and the all-important commit description.
  6. Github Desktop - Overview
  7. You can change the title to something more appropriate if you want, but with your commit description, this is what you will use for versioning and seeing what changes you made in the future from a quick glance – make sure it's an appropriate description and click Commit to master.
  8. Committing it to master does not push it to its 'Origin', i.e. the actual remote git repository (hosted in Github or Azure DevOps); the commit is made to the local git repository. This allows you to work on code locally without requiring every change to be uploaded to the remote repository. In order to push your commits to the Origin (remote) repository, click on: Push Origin (a command-line equivalent is shown after this list).
  9. Github Desktop - Header
  10. Once it has been committed, you should be able to see the file on the origin git repository, and you can Push multiple local git changes at once.
  11. If you click on: History, you should now see your commit with your file and description (as you can see, I was using an old PowerShell repository that I had since merged into other repositories, but thought it was worth using for this article).
  12. Github Desktop - Initial commit

Congratulations, you now committed your first file into Git! It wasn't that difficult!
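For reference, a rough command-line equivalent of the commit-and-push workflow above (run from inside the cloned repository folder; the commit message is just an example) is:

# See which files have changed since the last commit
git status
# Stage the new script and commit it locally with a descriptive message
git add HelloWorld.ps1
git commit -m "Add HelloWorld.ps1"
# Push the local commit(s) up to the remote (origin) repository
git push origin master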

Restore file from the previous version

One of the benefits of using Git is version control and restoring a file if something stops working, or someone had an 'Oops!' moment! With Github Desktop, restoring a previous version is straightforward.

  1. Open Github Desktop
  2. Click on: Current repository to make sure your repository is selected
  3. Click on History (you may need to click Fetch Origin if files have been updated remotely)
  4. As you can see, someone (i.e. Luke Murray) has made a change to my 'HelloWorld.ps1' file to say: "I like Unicorn" and changed the background and foreground colour to both be Yellow.
  5. I can right-click that file and select Revert changes in the commit using Github Desktop.
  6. Github Desktop - Revert changes
  7. You will now have a new entry in the History that will revert the commit, and you can quickly push it back to Origin again.
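The command-line equivalent of this, which likewise creates a new commit that undoes the earlier one, would look something like the following (the commit hash is a placeholder you would take from the log):

# Find the hash of the commit you want to undo
git log --oneline
# Create a new commit that reverses that commit's changes, then push it to origin
git revert <commit-hash>
git push origin master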

Congratulations, you have successfully reverted a commit to a previous version using Github Desktop.

Working with branches

A significant function of Git is the ability to create and use branches. Branches allow you to work on features without touching the main or master branch (where you can have your production or thoroughly tested resources, for example).

  1. Open Github Desktop
  2. Click on: Current repository to make sure your repository is selected
  3. To create a branch, click on the Current branch and select New branch and give it a name, i.e. Dev
  4. Make a change to the file like you typically would and save
  5. Github Desktop has automatically added your changes, and you can commit them to the dev branch without touching master.
  6. Github Desktop - Branch commit
  7. If you navigate to the master branch, you can see that the file has remained untouched. All the control and versioning is done by Git!
  8. When you are ready to merge the dev branch into master, click the current branch.
  9. Select: Choose a branch to merge into master
  10. Select your branch, i.e. Dev
  11. Github Desktop - Merge branch
  12. Click on create a merge commit.
  13. You should see a message in Github notifying that the merge was successful, and you can push your changes to the origin repository.
  14. Github Desktop should redirect you to the master branch, and you can now see your changes:
  15. Github Desktop
  16. You can go back to using Dev to develop additional features, testing etc. and repeat the same process.

Using a master branch allows others to get production-ready scripts or code, and avoids triggering any automation around continuous deployment to production resources, while you may still be working on functionality that you don't quite want released yet.
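For reference, a rough command-line equivalent of the branch workflow above (the branch names match the example; adjust to suit) is:

# Create and switch to a new dev branch
git checkout -b dev
# ...edit your files... then stage and commit the changes on the dev branch
git add .
git commit -m "Work in progress on the dev branch"
# Switch back to master, merge the dev branch in and push the result to origin
git checkout master
git merge dev
git push origin master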

Hopefully, this article gives you an excellent base to start your git journey!

There is a lot more functionality built into Github Desktop, especially around branching, but for day to day use, the above should give you all you need!

It is also worth reading this article on the .gitignore file, to make sure your git repositories don't end up bloated by unwanted files and you are only committing the files you need to be.