
Cloud Adoption Framework for Azure - Tools and Templates

· 4 min read

To help with your Microsoft Cloud adoption and Azure migration, you need a few things to be successful:

  1. Define your strategy: what are your expected outcomes? Where do you start? What skills do you have or need?
  2. Plan: this may include organisational alignment to get moving to the cloud
  3. Ready: this is where you look at your governance, Landing Zones/Blueprints
  4. Adopt: this is where you actually migrate your workloads into the cloud, both existing apps and new

Cloud Adoption Framework for Azure

Here are some useful tools, templates, and assessments provided by Microsoft to help on your journey:

Note: It's not as if you can't get these resources elsewhere; I purely wanted a list format for easy reference.

Define strategy

Plan

Ready

Adopt

Govern

Manage

The Microsoft Cloud Adoption Framework page has everything listed above and more! If you are serious about cloud adoption, reading through the official documentation not only gives you better context for the resources linked on this page, but also gives you more ways to think about potential opportunities to help with your cloud adoption!

Most of these tools can be found directly in the public Cloud Adoption Framework GitHub repository: microsoft/CloudAdoptionFramework, so keep an eye on that!

Make sure you follow the Cloud Adoption Framework - What's new page to keep up with the current best practices.

Azure NAT Gateway - Implementation and Testing

· 9 min read

With most cloud resources being accessible over the internet, each publicly accessible resource has its own public IP address; this makes it a lot more challenging to administer the security and access rules used to reach third-party services.

Think along the lines of: you or your organisation might use a software-as-a-service CRM product that is only accessible from your organisation's IP for compliance/security reasons, but you access the CRM product from various Azure Virtual Desktop hosts, each with its own public IP or a random Microsoft Azure datacenter IP; or you want to control multi-factor authentication/conditional access policies for users consuming Azure services.

The administration of this, particularly in scenarios where other people or teams can create and manage resources, can be complex. Sure, you can use Standard Load Balancers, which would help, but you have to manage and pay for them, which is sometimes overkill.

Tunnelling outbound traffic through a specific IP address, or a set of 'known, controllable IP addresses', for Azure resources (both IaaS and PaaS) that sit in the same Virtual Network is where the Azure NAT Gateway comes in, allowing you to easily control which IPs your traffic comes from. The NAT Gateway replaces the default Internet destination in the virtual network's routing table for the subnets identified.

"The Azure NAT gateway is a fully managed, highly resilient service built into the Azure fabric, which can be associated with one or more subnets in the same Virtual Network, that ensures that all outbound Internet-facing traffic will be routed through the gateway. As a result, the NAT gateway gives you a predictable public IP for outbound Internet-facing traffic. It also significantly increases the available SNAT ports in scenarios where you have a high number of concurrent connections to the same public address/port combination."

My Testing

Now let's get testing the Azure NAT Gateway! To test the gateway, I created:

  • Virtual Network
  • NAT Gateway
  • Public IP address prefix
  • 1 Windows VM (Windows Server 2019) with Public IP
  • 1 Linux (Ubuntu 18.04) VM with Public IP
  • 1 Windows VM (Windows Server 2019) as a backend pool for an Azure Load Balancer
  • Virtual Machine Scale Set with four instances (each with Windows Server 2019)

Note: Each VM has RDP opened to allow inbound traffic from my network using the Public IP and a NAT rule allowing RDP traffic on the Load Balancer. There is no point-to-site or site-to-site VPN; RDP connections are directly over the internet to Australia East, from New Zealand.

NAT Gateway - Test

Once the Azure resources were created, I then connected to each machine using RDP/SSH on their Public IP address and tested:

Linux Machine with Public IP for SSH

  • Inbound Public IP: 20.53.92.19
  • Outbound IP: 20.53.73.184

Linux Azure NAT Gateway

As you can see, I connected to the Linux VM's public IP via SSH and ran a curl to https://ifconfig.me/ to grab my public IP. The outbound public IP of my Linux box came from my NAT Gateway Public IP prefix!

Windows Machine with Public IP for RDP

  • Inbound Public IP: 20.70.228.211
  • Outbound IP: 20.53.73.184

Windows Azure NAT Gateway

Using RDP to the public IP of the Windows Server, I navigated to https://www.whatismyip.com/. As you can see, my outbound IP address came from my NAT Gateway Public IP prefix!
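
If you would rather check from a console than a browser, a rough PowerShell equivalent of the curl check used on the Linux VM (querying the same ifconfig.me service) is below.

#Query ifconfig.me for the public IP this machine egresses from
Invoke-RestMethod -Uri 'https://ifconfig.me/ip'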

Windows Machine behind an Azure Load Balancer

  • Inbound Public IP: 20.211.100.67
  • Outbound IP: 20.53.73.185

Windows Machine behind Azure Load Balancer NAT Gateway

This was the last of the 3 test machines I stood up. Using RDP to the public IP of the Azure Load Balancer, I navigated to https://www.whatismyip.com/. As you can see, my outbound IP address again came from my NAT Gateway Public IP prefix; however, this time it was '20.53.73.185', the second IP address available in my /31 IP address prefix.

Windows Machine behind a VM Scale Set

Although not in the diagram, I decided to add a VM Scale Set of 4 Virtual Machines into my testing (to save on cost, they are just Standard_B2ms machines, but more than enough for my testing).

Azure NAT Gateway - VM Scale Set

As you can see from the mess that is my screenshot above, all machines had completely different inbound Public IP addresses. Still, the outbound public IP addresses came from the NAT Gateway as expected.

Findings and Observations

  • The outbound public IP did seem to change between the workloads; if I refreshed 'whatismyip' and 'ifconfig', the public IP alternated between .184 and .185. However, no loss of connectivity occurred to the Virtual Machines. This was linked to the '4-minute idle timeout' configured on the NAT Gateway; I saw no reason to change the default timeout value, and if I were that worried about keeping the same IP address, I would have chosen a single Public IP over a Public IP prefix on the NAT Gateway.
  • Any Public IP used on the same subnet as a NAT Gateway needs to be Standard.
  • If I had both a Public IP address and a Public IP prefix on my NAT gateway, the Prefix seemed to take precedence.
  • You cannot use a Public IP prefix that is in use by the NAT Gateway for any other workload (i.e. any inbound Public IPs); you would need another Public IP prefix resource for that.
  • A single NAT gateway resource supports from 64,000 up to 1 million concurrent flows. Each IP address provides 64,000 SNAT ports to the available inventory, and you can use up to 16 IP addresses per NAT gateway resource. The SNAT mechanism is described here in more detail.
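
If you want to confirm what is actually attached to your gateway (the idle timeout, and whether a Public IP or a prefix is in use), a quick check with the Az.Network PowerShell module is below. This is a hedged sketch; the resource group and gateway names are assumptions from my lab, so substitute your own.

#Inspect the NAT Gateway configuration (resource names are assumptions)
$natGw = Get-AzNatGateway -ResourceGroupName 'natgw-rg' -Name 'aznatgw'
$natGw | Select-Object Name, IdleTimeoutInMinutes
$natGw.PublicIpAddresses.Id #Any attached Public IP addresses
$natGw.PublicIpPrefixes.Id  #Any attached Public IP prefixes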

Create a NAT Gateway

To create my NAT Gateway, I used the ARM Quickstart template, located here: https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.

Then I created the additional Virtual Machines and Load Balancers and added them to the same VNET created as part of the NAT Gateway.

To create a NAT Gateway using the Azure Portal

  1. Log in to the Azure Portal and navigate to Create a resource, NAT Gateway (this link will get you there: Create-NATGateway).
  2. Select your Subscription
  3. Enter your NAT Gateway name
  4. Select your Region
  5. Select your availability zone
  6. Set your idle timeout (I suggest leaving this at 4 minutes; you can change it later if it presents issues)

Create Azure NAT Gateway

  7. Click Next: Outbound IP
  8. We are just going to create a new Public IP address (it has to be Standard and Static; the Azure Portal automatically selects this for you). You can also create your Public IP prefix here for scalability, but you don't need both.

Create Azure NAT Gateway

  9. Click Next: Subnet
  10. Create or link your existing Virtual Network and subnets, and click Next: Tags
  11. Enter any tags that may be relevant (Creator, Created on, Created for, Support Team etc.)
  12. Click Next: Review + Create
  13. Verify everything looks ok, then click Create

Congratulations, you have now created your NAT Gateway!

To create a NAT Gateway using Azure Bicep

Here is a quick Bicep snippet I created for the NAT Gateway resource only:

Create-NATGateway.bicep

//Target Scope is: Resource Group
targetScope = 'resourceGroup'

//Set Variables and Parameters
@allowed([
  'Prod'
  'Dev'
])
param environment string = 'Prod'
param location string = resourceGroup().location

param dateTime string = utcNow('d')
param resourceTags object = {
  Application: 'Azure NAT Gateway/Azure Network Management'
  CostCenter: 'Operational'
  CreationDate: dateTime
  Environment: environment
}

//// Resource Creation

/// Create - NAT Gateway
resource NATGW 'Microsoft.Network/natGateways@2021-03-01' = {
  name: 'aznatgw'
  tags: resourceTags
  location: location
  sku: {
    name: 'Standard'
  }
  properties: {
    idleTimeoutInMinutes: 4
  }
}

It can be deployed by opening PowerShell (after Bicep is installed using the PowerShell method), logging in to Azure, and running the following (replace RGNAME with the name of the Resource Group you will be deploying to):

When you are actually ready to deploy, remove the -WhatIf at the end; then you can go into the resource and add the Public IP/prefix. PowerShell will prompt you for the name of the NAT Gateway, and it will be created in the same location as the Resource Group by default.

New-AzResourceGroupDeployment -Name NatGwDeployment -ResourceGroupName RGNAME -TemplateFile .\Create_NATGateway.bicep -whatif
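
If you would rather attach the outbound IPs at deployment time instead of adding them in the portal afterwards, here is a hedged PowerShell sketch; the resource names and region are assumptions (I used a /31 prefix in my testing), so adjust to suit.

#Create a /31 Public IP prefix (2 addresses) and a NAT Gateway with it attached
$prefix = New-AzPublicIpPrefix -Name 'aznatgw-prefix' -ResourceGroupName 'RGNAME' `
    -Location 'australiaeast' -Sku Standard -PrefixLength 31
New-AzNatGateway -Name 'aznatgw' -ResourceGroupName 'RGNAME' -Location 'australiaeast' `
    -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpPrefix $prefix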

Additional Resources

Benefits to using the Microsoft Azure Cloud to host your Infrastructure

· 5 min read

Cloud computing offers many benefits over traditional on-premises infrastructure; ecosystems such as Microsoft Azure have an underlying fabric built for today's 'software as a service' or 'software-defined' world.

The shift from managing on-premises Exchange environments for mail to consuming Microsoft 365 services has given IT and the business more time to adopt, consume and continuously improve their technology - to get the most use of it and remain competitive in this challenging world.

Below is a high-level list of what I consider some of the benefits of using the Microsoft Azure ecosystem:

  • Each Azure datacentre 'region' has 3 Availability Zones, each zone acts as a separate datacentre, giving redundant power and networking services, quickly allowing you to separate your services across different fault domains and zones, providing better resiliency, while also giving you the ability to keep them logically and physically close together.
  • Geo-redundant replication of backups for Virtual Machines and PaaS/File Shares, and the ability to do cross-region restores (e.g. between paired regions such as Australia East/Australia Southeast).
  • A multitude of hosts (supporting both AMD and Intel workloads) that are continually patched, maintained, and tuned for virtualisation performance, stability and security. No longer do we need to spend hours patching, maintaining and licensing on-premises hypervisors (ever more important as these systems get targeted for vulnerabilities), or architecting how many physical hosts we may need to support a system.
  • Consistent, up-to-date hardware: no need to worry about lead times for new hardware, purchasing new hardware every three years, or the procurement and implementation costs of hardware, allowing you to spend the time improving the business and tuning your services (scaling up and down, trying new technologies, turning off devices etc.)
  • For those that like to hoard every file that ever existed, the Azure platform allows scale (in and out to suit your file sizes) along with cost-saving opportunities and tweaks with Automation and migrating files between cool/hot tiers.
  • No need to pay datacentre hosting costs
  • No need to worry about redundant switching
  • With multiple hosts, there is no risk around air conditioning leaks, hardware failure; you don't need to worry about some of these unfortunate events occurring.
  • No need to pay electricity costs to host your workloads.
  • Reduced IT labour costs and time to implement and maintain systems
  • On-demand resources: you can easily stand up separate networks, unattached to your production network, for testing or other purposes, without working through VLANs or complex switching and firewalls.
  • Azure virtual networks have basic DDoS protection enabled by default.
  • Backups are secure by default; they are offline and managed by Microsoft, so if a ransomware attack occurs, it won't be able to touch your backups.
  • Constant Security recommendations, improvements built into the platform.
  • Azure Files is geo-redundant and across multiple storage arrays, encrypted at rest.
  • Windows/SQL licensing is all covered as part of the costings, so there's no need to worry about not adhering to MS licensing; Azure helps simplify what can sometimes be confusing and complex licensing.
  • Extended security updates for out-of-support server OSs such as Windows Server 2008 R2 and Windows Server 2012 R2, without having to pay for extended update support.
  • Ability to leverage modern and remote desktop and application technologies such as Windows 365 and Azure Virtual Desktop, by accessing services hosted in Azure.
  • Having your workloads in Azure gives you a step towards removing the need for traditional domain controllers and migrating to Microsoft Entra ID joined devices.
  • Azure AutoManage functionality is built in to automatically patch Linux (and Windows of course!), without having to manage separate patching technologies for cross-platform infrastructure.
  • Azure has huge support for automation, via PowerShell, CLI and API, allowing you to standardise, maintain, tear down and create infrastructure, services, monitoring and self-service for users on an as-needed basis.
  • Azure datacentres are sustainable and run off renewable energy where they can; Microsoft has commitments to become fully renewable.
  • No need for NAS or Local Backups, the backups are all built into Azure.
  • Compliant datacentre across various global security standards - https://learn.microsoft.com/en-us/compliance/assurance/assurance-datacenter-security
  • Ability to migrate or expand your resources from Australia to ‘NZ North’ or other new or existing data centres! Azure is global and gives you the ability to scale your infrastructure to a global market easily or bring your resources closer to home if a data centre becomes available.
  • We all know that despite the best of intentions, we rarely test, develop, and improve disaster recovery scenarios; sometimes this is because of the complexity of the applications and backup infrastructure. Azure Site Recovery, geo-redundant backup, load balancers and automation help make this a lot easier.
  • Ability to better utilise cloud security tools (such as Azure Security Center) consistently across cloud and on-premises workloads using Azure Arc and Azure policies.
  • And finally - more visibility into the true cost and value of your IT infrastructure. The total cost of on-premises IT infrastructure is hidden behind electricity costs, outages and incidents that would not have impacted cloud resources, slow time to deployment or market, outdated and insecure technologies, and most likely services you are running that you don't need to run!

#ProTip - Resources such as the Azure Total Cost of Ownership (TCO) calculator can help you calculate the true cost of your workloads.

Implement WebJEA for self-service Start/Stop of Azure Virtual Machines

· 15 min read

WebJEA allows you to build web forms for any PowerShell script dynamically. WebJEA automatically parses the script at page load for description, parameters and validation, then dynamically builds a form to take input and display formatted output!

The main goals for WebJEA:

  • Reduce delegation of privileged access to users
  • Quickly automate on-demand tasks and grant access to less-privileged users
  • Leverage your existing knowledge in PowerShell to build web forms and automate on-demand processes
  • Encourage proper script creation by parsing and honouring advanced function parameters and comments

Because WebJEA is simply a self-service portal for PowerShell scripts, anything you can script with PowerShell you can run through the portal, opening up a lot of opportunities for automation without having to learn third-party automation toolsets! Anyone who knows PowerShell can use it! Each script can be locked down to specific users and AD groups!

You can read more about WebJEA directly on the GitHub page: https://github.com/markdomansky/WebJEA.

This guide will concentrate on setting up WebJEA for self-service Azure VM management. However, WebJEA can be used to enable much more than what this blog article covers, from new user onboarding to resource creation.

WebJEA - Start/Stop

We will use a Windows Server 2019 machine, running in Microsoft Azure, to run WebJEA.

Prerequisites

  • Domain Joined server running Windows 2016+ Core/Full with PowerShell 5.1
  • The server must have permission to go out over the internet to Azure and download PowerShell modules.
  • CPU/RAM requirements will depend significantly on your usage; start low (2-vCPU/4GB RAM) and grow as needed.

I've created a Standard_B2ms (2vCPU, 8GB RAM) virtual machine, called: WEBJEA-P01 in an Azure Resource Group called: webjea_prod

This server is running: Windows Server 2019 Datacenter and is part of my Active Directory domain; I've also created a service account called: webjea_services.

Setup WebJEA

Now that we have a Windows server, it's time to set up WebJEA!

Setup Self-Signed Certificate

If you already have a certificate you can use, skip this step. In the case of this guide, we are going to use a self-signed certificate.

Log into the WebJEA Windows server using your service account (in my case, it is: luke\webjea_services).

Open PowerShell ISE as Administrator and, after replacing the DNS names to suit your own environment, run the following; it creates the Root CA, creates the self-signed certificate signed by it, and adds the Root CA to the Trusted Root Certification Authorities store:

#Create RootCA
$rootCA = New-SelfSignedCertificate -Subject "CN=MyRootCA" `
-KeyExportPolicy Exportable `
-KeyUsage CertSign,CRLSign,DigitalSignature `
-KeyLength 2048 `
-KeyUsageProperty All `
-KeyAlgorithm 'RSA' `
-HashAlgorithm 'SHA256' `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(10)

#Create Self-Signed Certificate
$cert = New-SelfSignedCertificate -Subject "CN=WEBJEA-P01.luke.geek.nz" `
-Signer $rootCA `
-KeyLength 2048 `
-KeyExportPolicy Exportable `
-DnsName WEBJEA-P01.luke.geek.nz, WEBJEA, WEBJEA-P01 `
-KeyAlgorithm 'RSA' `
-HashAlgorithm 'SHA256' `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(10)
$certhumbprint = $cert.Thumbprint

#Add Root CA to Trusted Root Authorities
New-Item -ItemType Directory 'c:\WebJea\certs' -Force
Export-Certificate -Cert $rootCA -FilePath "C:\WebJEA\certs\rootCA.crt" -Force
Import-Certificate -CertStoreLocation 'Cert:\LocalMachine\Root' -FilePath "C:\WebJEA\certs\rootCA.crt"

Write-Host -ForegroundColor Green -Object "Copy this: $certhumbprint - The Thumbprint is needed for the DSCDeploy.ps1 script"

Copy the Thumbprint (if you do this manually, make sure it is the Thumbprint of the certificate, not the Trusted Root CA certificate); we will need that later.

Setup a Group Managed Service Account

This is the account we will use to run WebJEA under; it can be a normal Active Directory user account if you feel more comfortable with that or want to assign permissions to it.

I am using a normal AD (Active Directory) service account in this guide because I am using Microsoft Entra ID Domain Services as my Domain Controller, and GMSA is not currently supported. I have also seen some scripts require the ability to create and read user-specific files. However, it's always good to follow best practices where possible.

Note: Group Managed Services accounts automatically renew and update the passwords for the accounts; they allow for additional security. You can read more about them here: Group Managed Service Accounts Overview.

#Create a Group MSA account (create the KDS root key first if the domain doesn't have one)
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))
New-ADServiceAccount -Name webjeagmsa1 -DNSHostName (Get-ADDomainController).HostName -PrincipalsAllowedToRetrieveManagedPassword 'WEBJEA-P01$'

#Create AD Group
New-ADGroup -Name "WebJEAAdmins" -SamAccountName WebJEAAdmins -GroupCategory Security -GroupScope Global -DisplayName "WebJEA - Admins" -Description "Members of this group are WebJEA Admins"

#Install the service account on the WebJEA server and add it to the group
Install-ADServiceAccount webjeagmsa1
Add-ADGroupMember -Identity "WebJEAAdmins" -Members (Get-ADServiceAccount webjeagmsa1).DistinguishedName

Add the WebJEAAdmins group to the Administrators group of your WebJEA server.
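
A quick way to do that from PowerShell on the server itself (the domain name is my lab's, so substitute your own):

#Add the WebJEAAdmins domain group to the local Administrators group
Add-LocalGroupMember -Group 'Administrators' -Member 'luke\WebJEAAdmins'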

Install WebJEA

Download the latest release package (zip file) onto the WebJEA Windows server.

Extract it, and you should have 2 files and 2 folders:

  • Site\
  • StarterFiles\
  • DSCConfig.inc.ps1
  • DSCDeploy.ps1

Open PowerShell ISE as Administrator and open DSCDeploy.ps1.

WebJEA uses PowerShell DSC (Desired State Configuration) to automate most of the setup.

DSC will do the following for us:

  • Install IIS
  • Create the App Pool and set the identity
  • Create and migrate the Site files to the IIS website folder
  • Configure SSL (if we were using it)
  • Update the WebJEA config files to point towards the script and log locations

Even though most of the work will be automated for us by Desired State Configuration, we still have to make some configuration changes to suit our environment.

I am not using a Group Managed Service Account. Instead, I will use a normal AD account as a service account (i.e. webjea_services). If you use a GMSA, put the username in AppPoolUserName; no credentials are needed (just make sure the GMSA has access to the server).

Change the following variables to suit your setup; in my case, I have moved the WebJEA resources into their own folder so they are not sitting directly on the OS drive.

| Variable | Note |
| --- | --- |
| NodeName | This is a DSC variable; leave this. |
| WebAppPoolName | The Web App Pool name; it may be best to leave this as WebJEA, however you can change it. |
| AppPoolUserName | Add in your GMSA or domain service account username. |
| AppPoolPassword | If using a domain account, add the password here; if using a GMSA, leave blank. |
| WebJEAIISURI | This is the IIS URL, i.e. server/WebJEA. You can change this if you want. |
| WebJEAIISFolder | The IIS folder location; this can be changed if you want to move IIS to another drive or location. |
| WebJEASourceFolder | The source folder for the WebJEA files when they are first downloaded and extracted (i.e. the Downloads directory). |
| WebJEAScriptsFolder | This is where the scripts folder will be placed (i.e. where WebJEA is installed). |
| WebJEAConfigPath | This is where the config file will be placed (it needs to be the same location as the scripts folder). |
| WebJEALogPath | The WebJEA log path. |
| WebJEA_Nlog_LogFile | The WebJEA system log location. |
| WebJEA_Nlog_UsageFile | The WebJEA usage log location. |
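
For illustration, this is roughly how I filled in those variables for my lab. Treat it as a hedged sketch only; the exact layout of DSCDeploy.ps1 may differ between releases, and every path and account name below is my own, not a requirement.

#Example values only - align these with your own environment
$NodeName              = 'localhost'
$WebAppPoolName        = 'WebJEA'
$AppPoolUserName       = 'luke\webjea_services'
$AppPoolPassword       = 'SERVICEACCOUNTPASSWORDHERE' #Leave blank if using a GMSA
$WebJEAIISURI          = 'WebJEA'
$WebJEAIISFolder       = 'C:\inetpub\wwwroot\WebJEA'
$WebJEASourceFolder    = 'C:\Users\webjea_services\Downloads\webjea-1.1.157.7589'
$WebJEAScriptsFolder   = 'C:\WebJEA\Scripts'
$WebJEAConfigPath      = 'C:\WebJEA\Scripts\config.json'
$WebJEALogPath         = 'C:\WebJEA\Logs\'
$WebJEA_Nlog_LogFile   = 'C:\WebJEA\Logs\webjea-system.log'
$WebJEA_Nlog_UsageFile = 'C:\WebJEA\Logs\webjea-usage.log'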

WebJEA - DSC

One thing to note: DSCDeploy.ps1 dot-sources the DSCConfig.inc.ps1 script, and by default it looks for it in the same folder as DSCDeploy.ps1 itself.

If you have just opened PowerShell ISE, you may notice that you are actually in C:\Windows\System32, so it won't be able to find the script to run. You can either change the script to point directly to the file location or change directory to where the files are; in my case, I ran the following in the Script pane:

cd 'C:\Users\webjea_services\Downloads\webjea-1.1.157.7589'

Now run the script and wait.

If you get an error saying that the script is not digitally signed, run the following in the script pane:

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

This is because of the PowerShell execution policy. Depending on the scripts you run, you may have to update the execution policy for the entire system, but for now, we will set it to Bypass for this process only. Re-run the script, and you should see DSC kick off and start configuring IIS and the WebJEA site.

WebJEA - DSC

You should also see the files/folders starting to be created!

Note: If you need to make a configuration change (for example, replacing the self-signed certificate with a managed PKI certificate), change it in DSCDeploy.ps1 and rerun the script; DSC will ensure the configuration is applied as specified.

Once DSC has completed, your server should now be running IIS and the WebJEA site.

The IIS Management Console is not required, but it will help you manage IIS; to add it, run the following PowerShell cmdlet:

Enable-WindowsOptionalFeature -Online -FeatureName IIS-ManagementConsole

Open an Internet Browser and navigate to (your equivalent of): https://webjea-p01.luke.geek.nz/WebJEA.

If you need assistance finding the website path, open Internet Information Services (IIS) Manager, expand Sites > Default Web Site, right-click WebJEA, select Manage Application, and click Browse.

WebJEA - IIS

If successful, you should get a username and password prompt:

WebJEA - IIS

That's normal - it means you haven't been given access and now need to configure it.

Configure WebJEA

Now that WebJEA has been set up, it is time to configure it; the first thing we need to do is create a group for WebJEA admins (who can see all scripts).

Create an Active Directory group for:

  • WebJEA-Admins
  • WebJEA-Users

Add your account to the: WebJEA-Admins group.

Navigate to your WebJEA scripts folder; in my case, I set it up under c:\WebJEA\Scripts:

WebJEA - Scripts

Before we go any further, take a backup of the config.json file by copying it and renaming the copy to "config.bak".

I recommend using Visual Studio Code to edit the config.json to help avoid any syntax issues.

Now right-click config.json and open it to edit.

This file is the glue that holds WebJEA together.

We are going to make a few edits:

  • Feel free to update the Title to match your company or team
  • Add the WebJEA-Admins group created earlier (including the domain name) into the permittedgroups section - this controls access for ALL scripts

Note the escaped backslashes (\\) required in each path; if you get a syntax error when attempting to load the WebJEA webpage, a missing escape is the most likely cause.

WebJEA - Demo

Save the config file and relaunch the WebJEA webpage. It should now load without prompting for a username and password.

Set the PowerShell execution policy on the machine to Unrestricted so that you can run any PowerShell scripts on it:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope LocalMachine

WebJEA - Demo

If you get an 'AuthorizationManager check failed' error, it is because the PowerShell scripts are still in a blocked state from being downloaded from the internet; run the following command to unblock them, then refresh the WebJEA webpage:

Get-ChildItem -Path 'C:\WebJEA\scripts\' -Recurse | Unblock-File

You now have a base WebJEA install! By default, WebJEA comes with 2 PowerShell files:

  • overview.ps1
  • validate.ps1

You may have noticed these in the config.json file; WebJEA actually runs the overview.ps1 file as soon as the page loads, so you can have one script run before running another, which is handy when you need to know the current state of something before taking action.

The validate.ps1 script is an excellent resource to check out the parameter types used to generate the forms.
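
To illustrate what that parsing gives you, here is a small hypothetical script (not one that ships with WebJEA) showing how parameter attributes drive the generated form; WebJEA surfaces the HelpMessage as the field's description, and a ValidateSet constrains the input to a fixed set of choices.

#Hypothetical Example.ps1 - WebJEA builds one form field per parameter
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true,
        HelpMessage = 'Shown as the description for this field')]
    [string]$ServerName,

    [ValidateSet('Report', 'Restart')] #Constrained to these two choices
    [string]$Action = 'Report'
)
"You chose to $Action $ServerName."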

Setup Azure Virtual Machine Start/Stop

Now that we have a working WebJEA install, it's time to set up the Azure VM Start/Stop script for this demo.

On the WebJEA server, we need to install the Azure PowerShell modules; run the following in PowerShell as Administrator:

Install-Module Az -Scope AllUsers

Create Service Principal

Once the Az PowerShell modules are installed, we need to set up a Service Principal for the PowerShell script to connect to Azure and manage our Virtual Machines.

Run the following PowerShell cmdlet to connect to Azure:

Connect-AzAccount

Now that we are connected to Azure, we need to create the SPN; run the following:

#Create the Service Principal (with the Contributor role over the subscription)
$sp = New-AzADServicePrincipal -DisplayName WebJEA-AzureResourceCreator -Role Contributor
#Decode the generated secret so it can be used in the scripts below
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
$UnsecureSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

You have now created an SPN called: WebJEA-AzureResourceCreator. Next, we need to grab the Tenant ID; run the following:

Get-AzContext | Select-Object Tenant

Now that we have the SPN and Tenant ID, it's time to test connectivity.

# Login using service principal 
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId

Copy the Tenant ID into the $TenantId variable.

Type:

$sp.ApplicationID

to retrieve the Application ID created with the SPN in the previous step, and add it to the $ApplicationId variable.

Type:

$UnsecureSecret

to retrieve the secret created with the SPN, and add it to the $Secret string.

Now run the snippet, and you should be successfully connected to Azure.
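
As an aside: Contributor over the whole subscription is broader than a start/stop portal needs. A hedged sketch of a narrower setup is below, scoping the SPN to just the VM resource group with the built-in Virtual Machine Contributor role; the display and resource group names are assumptions, and depending on your Az module version, New-AzADServicePrincipal may or may not create a default role assignment that you should then remove.

#Create an SPN, then grant it a narrowly-scoped role instead of subscription-wide Contributor
$sp = New-AzADServicePrincipal -DisplayName 'WebJEA-VMOperator'
New-AzRoleAssignment -ApplicationId $sp.ApplicationId `
    -RoleDefinitionName 'Virtual Machine Contributor' `
    -ResourceGroupName 'webjea_prod'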

Create Get-VM script

One of the features of WebJEA is the ability to run scripts on page load, so we will use it to get the current power state of our Azure VMs. In the WebJEA scripts directory, create a new PS1 file called: Get-VM.ps1.

Add the following script to it:

# Login using service principal 
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId
Get-AzVM -Status | Select-Object Name, PowerState, ResourceGroupName

Save the file.

Create Set-VM script

Now, it's time to create the Script to Start/Stop the Virtual Machine. In the WebJEA scripts directory, create a new PS1 file called: Set-VM.ps1

Add the following script to it:

#Variables
[CmdletBinding(SupportsShouldProcess = $True, ConfirmImpact = 'Low')]
param
(
    [Parameter(Position = 1, Mandatory = $true,
        HelpMessage = 'What is the name of the Azure Virtual Machine?')]
    $VMName,
    [Parameter(Position = 2, Mandatory = $true,
        HelpMessage = 'What is the name of the Azure Resource Group that the Virtual Machine is in?')]
    $RGName,
    [Parameter(Position = 3, Mandatory = $true,
        HelpMessage = 'What action do you want to do?')]
    [ValidateSet('Start', 'Stop')]
    $VMAction
)

#Login using service principal
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId

#Show the current state of the VMs, then perform the requested action
Get-AzVM -Status | Select-Object Name, PowerState, ResourceGroupName
if ($VMAction -eq 'Start')
{
    Start-AzVM -Name $VMName -ResourceGroupName $RGName -Confirm:$false
    return
}
elseif ($VMAction -eq 'Stop')
{
    Stop-AzVM -Name $VMName -ResourceGroupName $RGName -Confirm:$false -Force
}

Save the file.

Set VM in WebJEA Config

Now that the scripts have been created, it's time to add them to WebJEA to use.

Navigate to your scripts folder and make a backup of the config.json file, then edit config.json.

On the line beneath "onloadscript": "overview.ps1", add:

},

Then add in:

{
  "id": "StartStopAzVM",
  "displayname": "StartStop-AzVM",
  "synopsis": "Starts or Stops Azure Based VMs",
  "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
  "script": "Set-VM.ps1",
  "onloadscript": "Get-VM.ps1"
}

So your config.json should look similar to:

config.json

{
  "Title": "Luke Web Automation",
  "defaultcommandid": "overview",
  "basepath": "C:\\WebJEA\\scripts",
  "LogParameters": true,
  "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
  "commands": [
    {
      "id": "overview",
      "displayname": "Overview",
      "synopsis": "Congratulations, WebJEA is now working! We've pre-loaded a demo script that will help you verify everything is working. <br/><i>Tip: You can use the synopsis property of default command to display any text you want. Including html.</i>",
      "permittedgroups": [".\\Administrators"],
      "script": "validate.ps1",
      "onloadscript": "overview.ps1"
    },
    {
      "id": "StartStopAzVM",
      "displayname": "StartStop-AzVM",
      "synopsis": "Starts or Stops Azure Based VMs",
      "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
      "script": "Set-VM.ps1",
      "onloadscript": "Get-VM.ps1"
    }
  ]
}
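
Because a single missing escape or comma breaks the whole page, it is worth validating the JSON from PowerShell before reloading the site; if the command below throws, the file is malformed.

#Parse the config - any syntax error is reported with its position
Get-Content -Path 'C:\WebJEA\scripts\config.json' -Raw | ConvertFrom-Json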

Test Azure Virtual Machine Start/Stop

Now that the scripts have been created, open the WebJEA webpage.

Click on the StartStop-AzVM page (it may take a few seconds to load, as it is running the Get-VM script). You should be greeted by a window similar to below:

WebJEA - Demo

Congratulations, you have now set up WebJEA and can Start/Stop any Azure Virtual Machines using self-service!

Additional Notes

  • There is room for improvement around error checking and doing more with the scripts, such as sending an email when a script is triggered, or a reminder that the server needs to be powered off.
  • Because most of the configuration is JSON/PowerShell files, you could have the entire scripts folder in a git repository to make changes, roll back and keep version history.
  • Remove any hard coding of any secrets to connect to Azure (as an example) from the scripts and implement a password management tool with API access or even the Windows Credential Manager. You want a system where you can easily update the passwords of accounts, limit access and prevent anything from being stored in plain text.
  • Using the permittedgroups section of the config.json file, you can restrict which groups can run which scripts; this gives you granular control over who can do what.
  • If you use a normal Active Directory user account as the service account, then for added security, make sure that the WebJEA server is the only device that account can log in to and that it only has the permissions it needs; look at implementing PIM (Privileged Identity Management) for some tasks, so it only has access at the time it needs it.

Well-Architected Framework Azure infrastructure review with PSRule for Azure

· 6 min read

Imagine if you could validate that your Azure resources are deployed per the Well-Architected Framework (WAF)... just imagine!

A way of validating that your services are secure and deployed following the Azure architecture framework, both before and after the resources have been created!

Imagine no longer! There is a PowerShell module designed specifically for that purpose: PSRule for Azure.

PSRule - Azure

PSRule for Azure is a suite of rules to validate Azure resources and infrastructure as code (IaC) using PSRule, the base rules-engine module.

Features of PSRule for Azure include:

  • Leverage over 200 pre-built rules across five WAF pillars:
    • Cost Optimization
    • Operational Excellence
    • Performance Efficiency
    • Reliability
    • Security
  • Validate resources and infrastructure code pre- or post-deployment using Azure DevOps or GitHub!
  • It runs on macOS, Linux, and Windows.

With over 200 built-in rules (and you can add your own), a lot of resource types are covered, such as (but not limited to):

  • Azure App Service
  • Azure Key vault
  • Azure Virtual Machine
  • Azure Storage
  • Azure Network
  • Azure Public IP

PSRule for Azure has been in development since 2019 and is under constant updates and fixes.

PSRule for Azure provides two methods for analyzing Azure resources:

  • Pre-flight - Before resources are deployed from Azure Resource Manager templates.
  • In-flight - After resources are deployed to an Azure subscription.

Pre-flight validation is used to scan ARM (Azure Resource Manager) templates before services are deployed, allowing for quality gates and better information in pull requests to improve your infrastructure as code components.

The in-flight method can also be used in Azure DevOps to validate Terraform resource deployments, etc. Still, in this demo, I will run you through installing the module and doing an export and scan from your PowerShell console!

We are going to install the PSRule.Rules.Azure module (based on the Well-Architected Framework and Cloud Adoption Framework).

I recommend keeping the modules (and, as such, the built-in rules) up to date and doing scans at least every quarter, or after a major deployment or project, to help verify your resources are set up according to best-practice rules. This does not replace Security Center and Azure Advisor; it is intended as a supplement.

Install PSRule.Azure

  1. Open a PowerShell console and run the following commands:

    #The main Module and base rules to validate Azure resources..
    Install-Module PSRule.Rules.Azure -Scope CurrentUser

Install-Module PSRule

  2. Press 'Y' to accept PSGallery as a trusted repository. Just a note: you can prevent the confirmation prompt when installing modules from the PSGallery by classifying it as a 'Trusted Repository' with the command below; just be wary that you won't get rechallenged:

    Set-PSRepository -Name 'PSGallery' -InstallationPolicy Trusted

  3. You should now have the following modules installed:

  • PSRule
  • PSRule.Rules.Azure
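
To confirm what is installed, and (per the earlier recommendation about keeping the rules current) to pull updates before future scans, you can run:

#Check the installed module versions
Get-Module -ListAvailable PSRule, PSRule.Rules.Azure | Select-Object Name, Version

#Update to the latest rule set before the next scan
Update-Module -Name PSRule.Rules.Azure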

Extract Azure Subscription PSRule JSON files

Now that PSRule has been installed, it's time to log in to Azure and extract information about your Azure resources for analysis. These extracted files are JSON files containing information such as your resource names, subscription IDs and resource groups in plain text.

As you can see from the screenshot below, we can target specific subscriptions, tenancies (as long as the account you are using has access to the subscription, you can export those as well), resource groups and tags.

Export-AzRuleData

Because I want to get the most data available across all resources, I will target everything with the '-All' parameter.

  1. First, connect to Azure and select the subscription you have access to or are targeting by running the following:

    Connect-AzAccount

    Get-AzSubscription | ogv -PassThru | Set-AzContext
  2. Now that you have connected, it's time to export the Azure resource information; run the following PowerShell cmdlet and point it towards an empty folder:

    Export-AzRuleData -OutputPath c:\temp\AzRuleData -All
  3. If the folder doesn't exist, don't worry - the Export command will create it for you. Depending on how many resources and subscriptions you are extracting, this may take a few minutes.

You should see the JSON files appearing; if you open one of them, you should be able to see information about the resources that have been extracted.

Run PSRule across your JSON files

Now that you have extracted the JSON files of your Azure resources, it's time to analyse them against the Microsoft Cloud Adoption and Well-Architected Frameworks using the rules built into PSRule.Rules.Azure!

You don't need to be connected to Azure for this analysis; you just need the PSRule modules installed and access to the JSON files.

PSRule.Rules.Azure has a few baselines; these contain the rules used to analyse your resources, ranging from preview rules to newly released ones. Again, we will target ALL rules, as we are after all recommendations.

  1. In PowerShell, run the following:

    Assert-PSRule -Module 'PSRule.Rules.Azure' -InputPath 'C:\temp\AzRuleDataExport\*.json' -Baseline 'Azure.All'
  2. This will trigger PSRule to scan your extracted JSON files with ALL the rules, and you will get output like below:

Invoke-PSRule

  3. Although it is good to see the results at a high level, I prefer to look at everything at once in Excel, so run the following to export the results to a CSV:

    Invoke-PSRule -Module 'PSRule.Rules.Azure' -InputPath 'C:\temp\AzRuleDataExport\*.json' -Baseline 'Azure.All' | Export-csv C:\temp\AzRuleDataExport\Exported_Data.csv

  4. You should now have a CSV file to review; look for common issues and concerns, and work on improving your Azure infrastructure setup!

PS Rules Azure - Export CSV

Note: The export contains the subscription/resource names, so you can definitely see which resources you can improve upon; however, I removed them from my screenshot.
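
If you prefer triaging from the console rather than Excel, here is a hedged sketch that surfaces just the failed rules, grouped by rule name; PSRule result objects expose RuleName and Outcome properties.

#Summarise failures by rule, most common first
Invoke-PSRule -Module 'PSRule.Rules.Azure' -InputPath 'C:\temp\AzRuleDataExport\*.json' -Baseline 'Azure.All' |
    Where-Object { $_.Outcome -eq 'Fail' } |
    Group-Object -Property RuleName |
    Sort-Object -Property Count -Descending |
    Select-Object Count, Name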

Congratulations! You now have more visibility and, hopefully, some useful recommendations for improving your Azure services!

If you want to get a good understanding of the type of data the rules produce, check out my extracted CSV 'here'.

Additional Resources

  • If you found PSRule for Azure interesting, how about getting any failed rules pushed to Azure Monitor?

PSRule to Azure Monitor

  • If you are interested in the CI (Continuous Integration) options, check out the links below:

Azure DevOps Pipeline & GitHub Actions

  • Extend PSRule to include the Cloud Adoption Framework as well?

PSRule for Cloud Adoption Framework

  • And finally, creating Custom Rules for your organisation, including Tagging, Naming conventions etc.?

PSRule.Azure Custom Rules