
194 posts tagged with "Azure"


Capturing Virtual Machine images and Snapshots in Azure using WVDAdmin

· 7 min read

WVDAdmin is a native administration GUI (graphical user interface) for Azure Virtual Desktop (AVD). It is a free, custom-built tool designed to make standing up and managing Azure Virtual Desktop infrastructure easy. Not only can you use it to roll out your Azure Virtual Desktop infrastructure and manage existing workspaces and host pools - you can also use it to create Virtual Machine images for Virtual Machine Scale Sets, base builds, or Azure Virtual Desktop session hosts! In addition, WVDAdmin automates creating and using snapshots and virtual machine images in a simple point-and-click interface that just works!

Prerequisites

  • Azure subscription
  • Resource Group
  • Virtual Machine (to be used as your master image)
  • Of course - WVDAdmin

You can download WVDAdmin from the following page: Azure Windows Virtual Desktop administration with WVDAdmin.

Also, make sure you have set up a service principal with the appropriate rights to the Resource Group that holds your Virtual Machine.
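If you haven't created one before, a minimal Az PowerShell sketch for creating a service principal and scoping it to the Resource Group might look like the following (the display name and Resource Group name are illustrative, and the AppId property is called ApplicationId on older Az module versions):

# Hedged sketch - create a service principal for WVDAdmin and grant it Contributor on the Resource Group
Connect-AzAccount
$sp = New-AzADServicePrincipal -DisplayName 'WVDAdmin-ServicePrincipal'

# Scope Contributor to the Resource Group holding the Virtual Machine (name is illustrative)
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName 'Contributor' -ResourceGroupName 'SERVERS-RG'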

Before proceeding, make sure you have a backup of the virtual machine!

Capturing a Snapshot

Although it is possible to do this using the Azure Portal, quickly taking an OS disk snapshot and then reverting the change can be a bit tedious - especially when you want a quick backup of the operating system disk before patching or an application upgrade. Snapshots are much quicker to take and work well for immediate, temporary recovery, particularly when you want to quickly try something out without having to wait for an Azure Backup. Please note that this tool does not snapshot any data disks that are present.

Capture a Snapshot

  1. Open WVDAdmin
  2. On the "Welcome" tab, enter your Azure Tenant ID
  3. Enter your Service principal (application) ID and key
  4. Click on Reload all - to connect to Azure
  5. Expand Azure
  6. Expand Virtual Machines
  7. Expand your Resource group; in my example, it is: SERVERS-RG
  8. Right-click your server; in my example, it is: Server2019
  9. Select SnapShot-Create
  10. WVDAdmin will then prompt you to verify that you want to create your Snapshot.
  11. Confirm the server is correct and click Ok
  12. Depending on the size of your disk, this process may only take a few seconds; the Virtual Machine may experience a slight performance hit, but I did not lose RDP connectivity during the snapshot process in my testing.
  13. Review the logs to make sure that the Snapshot has been created successfully.
  14. You should now see the Snapshot in the Azure Portal, in the same Resource Group as the server.
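If you ever need to take the same OS disk snapshot without WVDAdmin, a minimal Az PowerShell sketch (using the example names from above; the snapshot name is illustrative) looks roughly like this:

# Hedged sketch - snapshot the OS disk of the example VM with Az PowerShell
$vm = Get-AzVM -ResourceGroupName 'SERVERS-RG' -Name 'Server2019'

$snapConfig = New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -Location $vm.Location -CreateOption Copy

New-AzSnapshot -ResourceGroupName 'SERVERS-RG' -SnapshotName 'Server2019-osdisk-snapshot' -Snapshot $snapConfig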

Restore a Snapshot

Before you proceed, just a warning that restoring the Snapshot will discard any changes made after the Snapshot. The virtual machine will also be deallocated, so it will stop any connections to it.

  1. Open WVDAdmin
  2. On the "Welcome" tab, enter your Azure Tenant ID
  3. Enter your Service principal (application) ID and key
  4. Click on Reload all - to connect to Azure
  5. Expand Azure
  6. Expand Virtual Machines
  7. Expand your Resource group; in my example, it is: SERVERS-RG
  8. Right-click your server; in my example, it is: Server2019
  9. Select SnapShot-Restore
  10. Select the Snapshot you would like to restore to, and when you are ready, click Ok. This will force the Virtual Machine to be shut down and deallocated and the Snapshot to be restored.
  11. You may also start the VM from WVDAdmin by right-clicking the Virtual Server after the Snapshot restore completes and clicking: Start.
  12. Verify that your Virtual Machine is back up and running, and remove any unneeded snapshots and disks from the Azure Portal to reduce additional costs. If you intend to keep any around, make sure you add appropriate Tags and a review date, so you know what they are and why they existed in the first place.

A few things to note:

  • WVDAdmin gave me an error stating that the "Recovering snapshot was not successful"; however, this occurred after the disk-swap process, when the old disk was being deleted. The recovery had, in fact, completed; I then successfully deleted the leftover disks in the Azure Portal manually.
  • I also saw the "Virtual machine agent status is not ready." error; this self-resolved once the Virtual Machine had enough time to start the Azure agent.
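For reference, the disk swap that WVDAdmin performs during a restore can also be done manually with Az PowerShell; a rough sketch (resource names are illustrative) is:

# Hedged sketch - create a disk from the snapshot and swap it in as the OS disk
Stop-AzVM -ResourceGroupName 'SERVERS-RG' -Name 'Server2019' -Force

$snap = Get-AzSnapshot -ResourceGroupName 'SERVERS-RG' -SnapshotName 'Server2019-osdisk-snapshot'
$diskConfig = New-AzDiskConfig -Location $snap.Location -SourceResourceId $snap.Id -CreateOption Copy
$newDisk = New-AzDisk -ResourceGroupName 'SERVERS-RG' -DiskName 'Server2019-osdisk-restored' -Disk $diskConfig

# Swap the OS disk, then start the VM again (remember to clean up the old disk afterwards)
$vm = Get-AzVM -ResourceGroupName 'SERVERS-RG' -Name 'Server2019'
Set-AzVMOSDisk -VM $vm -ManagedDiskId $newDisk.Id -Name $newDisk.Name
Update-AzVM -ResourceGroupName 'SERVERS-RG' -VM $vm
Start-AzVM -ResourceGroupName 'SERVERS-RG' -Name 'Server2019'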

Capturing a Virtual Machine Image

Virtual Machine images work well for Azure Virtual Desktop and Virtual Machine Scale Sets, where you want consistency between your various virtual machines. The same process I will run through works with Windows 10/11 along with Windows Server 2022 and below (and I would also imagine Linux workloads).

I will be using the Windows Server 2019 Virtual Machine I created earlier, with various applications installed that I want to be standard across new builds; in my demo, I used Chocolatey to install:

  • Adobe Reader
  • Microsoft Visual C++ runtimes
  • 7Zip
  • VLC

I then added a custom user policy to set the wallpaper. WVDAdmin will automatically generalise (sysprep) the machine for you by creating a 'Temp' machine, without touching your original Virtual Machine!
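For completeness, the Chocolatey part of my customisation looked roughly like the sketch below (the bootstrap line is Chocolatey's published install command; the package IDs are the common community ones and may differ in your environment):

# Hedged sketch - run inside the master image VM
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

# Install the standard application set
choco install adobereader vcredist140 7zip vlc -y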

Capture a Virtual Machine Image

  1. Open WVDAdmin
  2. On the "Welcome" tab, enter your Azure Tenant ID
  3. Enter your Service principal (application) ID and key
  4. Click on Reload all - to connect to Azure
  5. Expand Azure
  6. Expand Virtual Machines
  7. Expand your Resource group; in my example, it is: SERVERS-RG
  8. Right-click your server; in my example, it is: Server2019
  9. Select Create a template image
  10. WVDAdmin will then display the: Capture Image tab.
  11. Type in an appropriate image name (make sure you understand it, add specific versioning etc.)
  12. Verify that your Template VM is correct
  13. Select your Target Resource Group for your Image
  14. If you have a custom PowerShell script, you may add additional customisations. Add the script path here (make sure it's publicly accessible by Azure, i.e. an Azure storage account, GitHub repository etc.).
  15. Before proceeding to the next step, note that your VM will be deallocated
  16. When you are ready, select Capture
  17. WVDAdmin will then deallocate your VM and run through the following process:
  18. Deallocate VM -> Create a snapshot of the VM -> Create a temporary VM from the snapshot -> Generalise the VM -> Deallocate the temporary VM -> Create the image -> Delete the temporary VM resources
  19. You should now see your Image in your Azure Portal.
  20. You can now create additional Virtual Machines from your Custom image using the Azure Portal.
  21. WVDAdmin can also copy your Custom Image into a Shared Image Gallery, or you can use it to create an Azure Virtual Desktop session host!
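If you would rather script the new Virtual Machine build than use the Azure Portal, a minimal Az PowerShell sketch from a custom image might look like this (the image, VNet and VM names are illustrative assumptions):

# Hedged sketch - build a VM from the captured image
$image = Get-AzImage -ResourceGroupName 'SERVERS-RG' -ImageName 'Server2019-Image'
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'SERVERS-RG' -Name 'Servers-VNet'
$nic = New-AzNetworkInterface -ResourceGroupName 'SERVERS-RG' -Name 'newserver01-nic' `
    -Location $image.Location -SubnetId $vnet.Subnets[0].Id
$cred = Get-Credential # local administrator for the new VM

$vmConfig = New-AzVMConfig -VMName 'NewServer01' -VMSize 'Standard_B2ms' |
    Set-AzVMOperatingSystem -Windows -ComputerName 'NewServer01' -Credential $cred |
    Set-AzVMSourceImage -Id $image.Id |
    Add-AzVMNetworkInterface -Id $nic.Id

New-AzVM -ResourceGroupName 'SERVERS-RG' -Location $image.Location -VM $vmConfig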

Hopefully, this article has been of some use - even if you don't use Azure Virtual Desktop - WVDAdmin is a great tool to help with day-to-day Azure Virtual Machine operations.

Cloud Adoption Framework for Azure - Tools and Templates

· 4 min read

To help with your Microsoft Cloud Adoption and Azure migration, you need a few things to be successful:

  1. Define your strategy - what are your expected outcomes? Where do you start? What skills do you have or need?
  2. Plan - this may include organisational alignment to get moving to the Cloud
  3. Ready - this is where you look at your governance, Landing Zones/Blueprints
  4. Adopt - this is where you actually migrate your workloads into the cloud, both existing apps and new

Cloud Adoption Framework for Azure

Here are some useful tools, templates, and assessments provided by Microsoft to help on your journey:

Note: It is not as if you can't get these resources elsewhere; I purely wanted a list format for easy reference.

Define strategy

Plan

Ready

Adopt

Govern

Manage

The Microsoft Cloud Adoption Framework page has everything listed above and more! If you are serious about Cloud Adoption, then reading through the official documentation not only gives you better context to the resources linked to this page but gives you more ways to think about potential opportunities to help with your Cloud adoption!

Most of these tools can be found directly in the public Cloud Adoption Framework GitHub repository: microsoft/CloudAdoptionFramework so keep an eye on that!

Make sure you follow the Cloud Adoption Framework - What's new page to keep up with the current best practices.

Azure NAT Gateway - Implementation and Testing

· 9 min read

With most Cloud resources being accessible over the internet, each publicly accessible resource has its own public IP address, which makes it much more challenging to administer the security and access rules for reaching third-party services.

Think along the lines of: you or your organisation might use a software-as-a-service CRM product that is only accessible from your organisation's IP for compliance/security reasons, yet you access the CRM product from various Azure Virtual Desktop hosts, each with its own public IP or a random Microsoft Azure datacenter IP - or you want to control multifactor authentication/conditional access policies for users consuming Azure services.

The administration of this, particularly in scenarios where other people or teams can create and manage resources, can be complex. Sure, you can use Standard Load Balancers, which would help, but you have to manage and pay for them, which is sometimes overkill.

Tunnelling outbound traffic through a specific IP address, or a set of 'known, controllable' IP addresses, for Azure resources (both IaaS and PaaS) that sit in the same Virtual Network is where the Azure NAT Gateway comes in, allowing you to easily control which IPs your traffic comes from. The NAT Gateway replaces the default Internet destination in the virtual network's routing table for the subnets it is associated with.

"The Azure NAT gateway is a fully managed, highly resilient service built into the Azure fabric, which can be associated with one or more subnets in the same Virtual Network, that ensures that all outbound Internet-facing traffic will be routed through the gateway. As a result, the NAT gateway gives you a predictable public IP for outbound Internet-facing traffic. It also significantly increases the available SNAT ports in scenarios where you have a high number of concurrent connections to the same public address/port combination."

My Testing

Now lets get testing the Azure NAT Gateway! To test the gateway, I created:

  • Virtual Network
  • NAT Gateway
  • IP Public Address prefix
  • 1 Windows VM (Windows Server 2019) with Public IP
  • 1 Linux (Ubuntu 18.04) VM with Public IP
  • 1 Windows VM (Windows Server 2019) as a backend pool for an Azure Load Balancer
  • Virtual Machine Scale Set with four instances (each with Windows Server 2019)

Note: Each VM has RDP opened to allow inbound traffic from my network using the Public IP and a NAT rule allowing RDP traffic on the Load Balancer. There is no point-to-site or site-to-site VPN; RDP connections are directly over the internet to Australia East, from New Zealand.

NAT Gateway - Test

Once the Azure resources were created, I then connected to each machine using RDP/SSH on their Public IP address and tested:

Linux Machine with Public IP for SSH

  • Inbound Public IP: 20.53.92.19
  • Outbound IP: 20.53.73.184

Linux Azure NAT Gateway

As you can see, I connected to the Linux VM's public IP via SSH and did a curl to: https://ifconfig.me/ to grab my public IP. The outbound public IP of my Linux box came from my NAT Gateway Public IP prefix!

Windows Machine with Public IP for RDP

  • Inbound Public IP: 20.70.228.211
  • Outbound IP: 20.53.73.184

Windows Azure NAT Gateway

Using RDP to the public IP of the Windows Server, I navigated to: https://www.whatismyip.com/. As you can see, my outbound public IP address came from my NAT Gateway Public IP prefix!
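If you prefer to check this from PowerShell rather than a browser, a one-liner like the following (using the same ifconfig.me service as the Linux test) should return the outbound IP:

# Returns the public IP this machine egresses from
Invoke-RestMethod -Uri 'https://ifconfig.me/ip'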

Windows Machine behind an Azure Load Balancer

  • Inbound Public IP: 20.211.100.67
  • Outbound IP: 20.53.73.185

Windows Machine behind Azure Load Balancer NAT Gateway

This was the last of the three test machines I stood up. Using RDP to the public IP of the Azure Load Balancer, I navigated to: https://www.whatismyip.com/. As you can see, my outbound public IP address again came from my NAT Gateway Public IP prefix; however, this time it was '20.53.73.185', the second IP address available in my /31 IP address prefix.

Windows Machine behind a VM Scale Set

Although not in the diagram, I decided to add a VM Scale Set of 4 Virtual Machines into my testing (to save on cost, they are just Standard_B2ms machines but more than enough for my testing).

Azure NAT Gateway - VM Scale Set

As you can see from the mess that is my screenshot above, all machines had completely different inbound Public IP addresses. Still, the outbound public IP addresses came from the NAT Gateway as expected.

Findings and Observations

  • The outbound public IP did seem to change between the workloads; if I refreshed 'whatismyip' and 'ifconfig', the public IP changed between 184 and 185. However, no loss of connectivity occurred to the Virtual Machines. This was linked to the '4-minute idle timeout' configured on the NAT Gateway; I saw no reason to change the default timeout value, and if I were that worried about keeping the same IP address, I would have gone with a single Public IP instead of a Public IP prefix on the NAT Gateway.
  • Any Public IP used on the same subnet as a NAT Gateway needs to be Standard.
  • If I had both a Public IP address and a Public IP prefix on my NAT gateway, the Prefix seemed to take precedence.
  • You cannot use a Public IP Prefix that is in use by the NAT Gateway for any other workload, i.e. any inbound Public IPs. You would need a separate Public IP prefix resource for those.
  • A single NAT gateway resource supports from 64,000 up to 1 million concurrent flows. Each IP address provides 64,000 SNAT ports to the available inventory, and you can use up to 16 IP addresses per NAT gateway resource (16 × 64,000 = 1,024,000 SNAT ports). The SNAT mechanism is described here in more detail.

Create a NAT Gateway

To create my NAT Gateway, I used the ARM Quickstart template, located here: https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.

Then I created the additional Virtual Machines and Load Balancers and added them to the same VNET created as part of the NAT Gateway.

To create a NAT Gateway using the Azure Portal

  1. Log in to the Azure Portal and navigate to Create a resource, NAT Gateway (this link will get you there: Create-NATGateway).
  2. Select your Subscription
  3. Enter your NAT Gateway name
  4. Select your Region
  5. Select your availability zone
  6. Set your idle timeout (I suggest leaving this at 4 minutes; you can change it later if it presents issues)
  7. Click Next: Outbound IP
  8. We are just going to create a new Public IP address (it has to be Standard and Static; the Azure Portal automatically selects this for you - although you can create a Public IP prefix here for scalability, you don't need both).
  9. Click Next: Subnet
  10. Create or link your existing Virtual Network and subnets and click Next: Tags
  11. Enter any tags that may be relevant (Creator, Created on, Created for, Support Team etc.)
  12. Click Next: Review + Create
  13. Verify everything looks ok, then click Create

Congratulations, you have now created your NAT Gateway!
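If you would rather script it than click through the portal, a rough Az PowerShell equivalent is below (resource names are illustrative, the parameter names are from the Az.Network module as I remember them, and the subnet prefix must match your existing subnet):

# Hedged sketch - NAT Gateway with a Standard static Public IP, attached to an existing subnet
$rg = 'nat-rg'
$location = 'australiaeast'

$pip = New-AzPublicIpAddress -ResourceGroupName $rg -Name 'aznatgw-pip' -Location $location `
    -Sku Standard -AllocationMethod Static

$natGw = New-AzNatGateway -ResourceGroupName $rg -Name 'aznatgw' -Location $location `
    -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpAddress $pip

# Associate the gateway with an existing subnet ('default', 10.0.0.0/24 here) and save the change
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name 'nat-vnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -NatGateway $natGw | Set-AzVirtualNetwork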

To create a NAT Gateway using Azure Bicep

Just a quick Bicep snippet I created to create the NAT Gateway resource only:

Create-NATGateway.bicep

//Target Scope is: Resource Group
targetScope = 'resourceGroup'

//Set Variables and Parameters
@allowed([
  'Prod'
  'Dev'
])
param environment string = 'Prod'
param location string = resourceGroup().location

param dateTime string = utcNow('d')
param resourceTags object = {
  Application: 'Azure NAT Gateway/Azure Network Management'
  CostCenter: 'Operational'
  CreationDate: dateTime
  Environment: environment
}

//// Resource Creation

/// Create - NAT Gateway
resource NATGW 'Microsoft.Network/natGateways@2021-03-01' = {
  name: 'aznatgw'
  tags: resourceTags
  location: location
  sku: {
    name: 'Standard'
  }
  properties: {
    idleTimeoutInMinutes: 4
  }
}

It can be deployed by opening PowerShell (after Bicep is installed using the PowerShell method), logging in to Azure and running the following (replace RGNAME with the name of the Resource Group you will be deploying it to):

New-AzResourceGroupDeployment -Name NatGwDeployment -ResourceGroupName RGNAME -TemplateFile .\Create-NATGateway.bicep -WhatIf

When you are actually ready to deploy, remove the -WhatIf at the end; you can then go into the resource and add the Public IP/prefix. PowerShell will prompt you for the name of the NAT Gateway, and it will be created in the same location as the Resource Group by default.

Additional Resources

Benefits to using the Microsoft Azure Cloud to host your Infrastructure

· 5 min read

Cloud computing offers many benefits over your traditional on-premises infrastructure; ecosystems such as Microsoft Azure have an underlying fabric built for today's 'software as a service' or 'software-defined' world.

The shift from managing on-premises Exchange environments for mail to consuming Microsoft 365 services has given IT teams and businesses more time to adopt, consume and continuously improve their technology - to get the most use out of it and remain competitive in this challenging world.

Below is a high-level list of what I consider some of the benefits of using the Microsoft Azure ecosystem:

  • Each Azure datacentre 'region' has 3 Availability Zones; each zone acts as a separate datacentre with redundant power and networking services, quickly allowing you to separate your services across different fault domains and zones for better resiliency, while also giving you the ability to keep them logically and physically close together.
  • Geo-redundant replication of backups for Virtual Machines and PaaS/File Shares, and the ability to do cross-region restores (e.g. between paired regions such as Australia East and Australia Southeast).
  • A multitude of hosts (supporting both AMD and Intel workloads) that are continually patched, maintained and tuned for virtualisation performance, stability and security. No longer do we need to spend hours patching, maintaining and licensing on-premises hypervisors (an ever-increasing burden as these systems get targeted for vulnerabilities), or architecting how many physical hosts we may need to support a system.
  • Consistent, up-to-date hardware, no need to worry about lead times for new hardware, purchasing new hardware every three years and procurement and implementation costs of hardware, allowing you to spend the time improving on the business and tuning your services (scaling up and down, trying new technologies, turning off devices etc.)
  • For those that like to hoard every file that ever existed, the Azure platform allows scale (in and out to suit your file sizes) along with cost-saving opportunities and tweaks with Automation and migrating files between cool/hot tiers.
  • No need to pay datacentre hosting costs
  • No need to worry about redundant switching
  • With multiple hosts, there is no risk around air-conditioning leaks or hardware failure; you don't need to worry about these unfortunate events occurring.
  • No need to pay electricity costs to host your workloads.
  • Reduced IT labour costs and time to implement and maintain systems
  • On-demand resources - you can easily stand up separate networks, unattached to your production network, for testing or for other devices, without having to work through VLANs or complex switching and firewalls.
  • Azure virtual networks have basic DDoS protection enabled by default.
  • Backups are secure by default; they are offline and managed by Microsoft, so if a ransomware attack occurs, it won't be able to touch your backups.
  • Constant Security recommendations, improvements built into the platform.
  • Azure Files can be geo-redundant, is spread across multiple storage arrays, and is encrypted at rest.
  • Windows/SQL licensing is all covered as part of the costings, so there is no need to worry about not adhering to Microsoft licensing; Azure helps simplify what can sometimes be confusing and complex licensing.
  • Extended security updates for out-of-support server operating systems such as Windows Server 2008 R2 and Windows Server 2012 R2, without having to pay for extended update support.
  • Ability to leverage modern and remote desktop and application technologies such as Windows 365 and Azure Virtual Desktop, by accessing services hosted in Azure.
  • Having your workloads in Azure is a step towards removing the need for traditional domain controllers and migrating to Microsoft Entra ID joined devices.
  • Azure AutoManage functionality is built in to automatically patch Linux (and Windows of course!), without having to manage separate patching technologies for cross-platform infrastructure.
  • Azure has huge support for automation via PowerShell, the CLI and APIs, allowing you to standardise, maintain, tear down and create infrastructure, services, monitoring and self-service for users on an as-needed basis.
  • Azure datacentres are sustainable and run on renewable energy where they can; Microsoft has commitments to be fully renewable.
  • No need for NAS or Local Backups, the backups are all built into Azure.
  • Compliant datacentre across various global security standards - https://learn.microsoft.com/en-us/compliance/assurance/assurance-datacenter-security
  • Ability to migrate or expand your resources from Australia to ‘NZ North’ or other new or existing data centres! Azure is global and gives you the ability to scale your infrastructure to a global market easily or bring your resources closer to home if a data centre becomes available.
  • We all know that despite the best of intentions, we rarely ever test, develop, and improve disaster recovery scenarios, sometimes this is because of the complexity of the applications and backup infrastructure. Azure Site Recovery, Geo-Redundant backup, Load Balancers and automation helps make this a lot easier.
  • Ability to better utilise cloud security tools (such as the Azure Security Center) consistently across cloud and on-premises workloads using Azure Arc and Azure Policy.
  • And finally - more visibility into the true cost and value of your IT infrastructure. On-premises, the total cost is hidden behind electricity costs, outages and incidents that would not have impacted cloud resources, slow time to deployment or market, outdated and insecure technologies, and most likely services you are running that you don't need to run!

#ProTip - Resources such as the Azure Total Cost of Ownership (TCO) calculator can help you calculate the true cost of your workloads.

Implement WebJEA for self-service Start/Stop of Azure Virtual Machines

· 15 min read

WebJEA allows you to build web forms for any PowerShell script dynamically. WebJEA automatically parses the script at page load for description, parameters and validation, then dynamically builds a form to take input and display formatted output!

The main goals for WebJEA:

  • Reduce delegation of privileged access to users
  • Quickly automate on-demand tasks and grant access to less-privileged users
  • Leverage your existing knowledge in PowerShell to build web forms and automate on-demand processes
  • Encourage proper script creation by parsing and honouring advanced function parameters and comments

Because WebJEA is simply a self-service portal for PowerShell scripts, anything you can script with PowerShell you can run through the portal - opening up a lot of opportunities for automation without having to learn third-party automation toolsets! Anyone who knows PowerShell can use it, and each script can be locked down to specific users and AD groups!

You can read more about WebJEA directly on the GitHub page: https://github.com/markdomansky/WebJEA.

This guide will concentrate on setting up WebJEA for self-service Azure VM management. However, WebJEA can be used to enable much more than what this blog article covers, from things such as new user onboarding, to resource creation.

WebJEA - Start/Stop

We will use a Windows Server 2019 server, running in Microsoft Azure, to run WebJEA from.

Prerequisites

  • Domain Joined server running Windows 2016+ Core/Full with PowerShell 5.1
  • The server must have permission to go out over the internet to Azure and download PowerShell modules.
  • CPU/RAM requirements will depend significantly on your usage; start low (2 vCPU/4GB RAM) and grow as needed.

I've created a Standard_B2ms (2vCPU, 8GB RAM) virtual machine, called: WEBJEA-P01 in an Azure Resource Group called: webjea_prod

This server is running: Windows Server 2019 Datacenter and is part of my Active Directory domain; I've also created a service account called: webjea_services.
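If you want to stand up a similar server from PowerShell, a quick-create sketch along these lines should work (the image alias, location and size are assumptions; domain join and the webjea_services account are set up separately):

# Hedged sketch - quick-create the WebJEA host
New-AzResourceGroup -Name 'webjea_prod' -Location 'australiaeast'

New-AzVM -ResourceGroupName 'webjea_prod' -Name 'WEBJEA-P01' -Location 'australiaeast' `
    -Image 'Win2019Datacenter' -Size 'Standard_B2ms' `
    -Credential (Get-Credential -Message 'Local admin for WEBJEA-P01')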

Setup WebJEA

Once we have a Windows Server, now it's time to set up WebJEA!

Setup Self-Signed Certificate

If you already have a certificate you can use, skip this step. In the case of this guide, we are going to use a self-signed certificate.

Log into the WebJEA Windows server using your service account (in my case, it is: luke\webjea_services).

Open PowerShell ISE as Administrator and, after replacing the DNS names to suit your own environment, run the following. It creates the Root CA, creates the actual self-signed certificate signed by that Root CA, and then adds the Root CA to the Trusted Root Certification Authorities store:

#Create RootCA
$rootCA = New-SelfSignedCertificate -Subject "CN=MyRootCA" `
-KeyExportPolicy Exportable `
-KeyUsage CertSign,CRLSign,DigitalSignature `
-KeyLength 2048 `
-KeyUsageProperty All `
-KeyAlgorithm 'RSA' `
-HashAlgorithm 'SHA256' `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(10)

#Create Self-Signed Certificate
$cert = New-SelfSignedCertificate -Subject "CN=WEBJEA-P01.luke.geek.nz" `
-Signer $rootCA `
-KeyLength 2048 `
-KeyExportPolicy Exportable `
-DnsName WEBJEA-P01.luke.geek.nz, WEBJEA, WEBJEA-P01 `
-KeyAlgorithm 'RSA' `
-HashAlgorithm 'SHA256' `
-Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" `
-NotAfter (Get-Date).AddYears(10)
$certhumbprint = $cert.Thumbprint

#Add Root CA to Trusted Root Authorities
New-Item -ItemType Directory 'c:\WebJea\certs' -Force
Export-Certificate -Cert $rootCA -FilePath "C:\WebJEA\certs\rootCA.crt" -Force
Import-Certificate -CertStoreLocation 'Cert:\LocalMachine\Root' -FilePath "C:\WebJEA\certs\rootCA.crt"

Write-Host -ForegroundColor Green -Object "Copy this: $certhumbprint - The Thumbprint is needed for the DSCDeploy.ps1 script"

Copy the Thumbprint (if you do this manually, make sure it is the Thumbprint of the certificate, not the Trusted Root CA certificate); we will need that later.

Setup a Group Managed Service Account

This is the account we will use to run WebJEA under; it can be a normal Active Directory user account if you feel more comfortable with that or want to assign permissions to it.

I am using a normal AD (Active Directory) service account in this guide because I am using Microsoft Entra ID Domain Services as my Domain Controller, and GMSA is not currently supported. I have also seen some scripts require the ability to create and read user-specific files. However, it's always good to follow best practices where possible.

Note: Group Managed Services accounts automatically renew and update the passwords for the accounts; they allow for additional security. You can read more about them here: Group Managed Service Accounts Overview.

#Create a group MSA (gMSA) account
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))
New-ADServiceAccount -Name webjeagmsa1 -DNSHostName (Get-ADDomainController).HostName -PrincipalsAllowedToRetrieveManagedPassword WEBJEA-P01.luke.geek.nz

#Create AD Group
New-ADGroup -Name "WebJEAAdmins" -SamAccountName WebJEAAdmins -GroupCategory Security -GroupScope Global -DisplayName "WebJEA - Admins" -Description "Members of this group are WebJEA Admins"

Install-ADServiceAccount webjeagmsa1
Add-ADGroupMember -Identity "luke.geek.nz\WebJEAAdmins" -Members (Get-ADServiceAccount webjeagmsa1).DistinguishedName

Add the WebJEAAdmins group to the Administrators group of your WebJEA server.

Install WebJEA

Download the latest release package (zip file) onto the WebJEA Windows server.

Extract it, and you should have 2 files and 2 folders:

  • Site\
  • StarterFiles\
  • DSCConfig.inc.ps1
  • DSCDeploy.ps1

Open PowerShell ISE as Administrator and open DSCDeploy.ps1.

WebJEA uses PowerShell DSC (Desired State Configuration) to do a lot of the setup.

DSC will do the following for us:

  • Install IIS
  • Create the App Pool and set the identity
  • Create and migrate the Site files to the IIS website folder
  • Configure SSL (if we were using it)
  • Update the WebJEA config files to point towards the script and log locations

Even though most of the work will be automated for us by Desired State Configuration, we have to make some configuration changes so it works in our environment.

I am not using a Group Managed Service Account. Instead, I will use a normal AD account as a service account (i.e. webjea_services), but if you use a GMSA, you need to put the username in the AppPoolUserName; no credentials are needed (make sure the GMSA has access to the server).

Change the following variables to suit your setup; in my case, I have moved the WebJEA resources to their own folder, so they are not sitting directly in the root of the OS drive.

  • NodeName - This is a DSC variable; leave this.
  • WebAppPoolName - The web app pool name; it may be best to leave this as WebJEA, however you can change it.
  • AppPoolUserName - Add in your GMSA or domain service account username.
  • AppPoolPassword - If using a domain account, add the password here; if using a GMSA, leave blank.
  • WebJEAIISURI - The IIS URL, i.e. server/WebJEA. You can change this if you want.
  • WebJEAIISFolder - The IIS folder location; this can be changed if you want to move IIS to another drive or location.
  • WebJEASourceFolder - The source folder for the WebJEA files when they are first downloaded and extracted (i.e. the Downloads directory).
  • WebJEAScriptsFolder - Where the scripts folder will be placed (i.e. where WebJEA is installed).
  • WebJEAConfigPath - Where the config file will be placed (it needs to be the same location as the scripts folder).
  • WebJEALogPath - WebJEA log path.
  • WebJEA_Nlog_LogFile - WebJEA system log location.
  • WebJEA_Nlog_UsageFile - WebJEA usage log location.

WebJEA - DSC

One thing to note is that DSCDeploy.ps1 calls (dot sources) the DSCConfig script; by default, it looks for it in the same folder as DSCDeploy.ps1 itself.

If you just opened PowerShell ISE, you may notice that you are actually in C:\Windows\System32, so it won't be able to find the script to run; you can either change the script to point directly to the file location, or change to the directory containing the files. In my case, I run the following in the script pane:

cd 'C:\Users\webjea_services\Downloads\webjea-1.1.157.7589'

Now run the script and wait.

If you get an error saying that the script is not digitally signed, run the following in the script pane:

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

This is because the PowerShell execution policy hasn't been set; depending on the scripts you are running, you may have to update the execution policy for the entire system, but for now we will set it to Bypass for this process only. Now re-run the script, and you should see DSC kick off and start the configuration and setup of IIS and the WebJEA site.

WebJEA - DSC

You should also see the files/folders starting to be created!

Note: If you need to make a configuration change (e.g. replacing the self-signed certificate with a managed PKI certificate), change it in DSCDeploy.ps1 and rerun the script; DSC will ensure that the configuration is applied as defined.

Once DSC has completed, your server should now be running IIS and the WebJEA site.

To add the IIS Management Console (not required, but it will help you manage IIS), run the following PowerShell cmdlet:

Enable-WindowsOptionalFeature -Online -FeatureName IIS-ManagementConsole

Open an Internet Browser and navigate to (your equivalent of): https://webjea-p01.luke.geek.nz/WebJEA.

If you need assistance finding the website path, open Internet Information Services (IIS) Manager (installed above), expand Sites, Default Web Site, then right-click WebJEA and select Manage Application, Browse.

WebJEA - IIS

If successful, you should get a username and password prompt:

WebJEA - IIS

That's normal - it means you haven't been given access and now need to configure it.

Configure WebJEA

Now that WebJEA has been set up, it is time to configure it; the first thing we need to do is create a group for WebJEA admins (who can see all scripts).

Create an Active Directory group for:

  • WebJEA-Admins
  • WebJEA-Users

Add your account to the: WebJEA-Admins group.
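If you prefer to create these from PowerShell, a quick sketch using the ActiveDirectory module (assuming the RSAT AD module is available and your logon name matches your sAMAccountName) would be:

# Hedged sketch - create the WebJEA access groups and add yourself to the admins group
New-ADGroup -Name 'WebJEA-Admins' -SamAccountName 'WebJEA-Admins' -GroupCategory Security -GroupScope Global
New-ADGroup -Name 'WebJEA-Users' -SamAccountName 'WebJEA-Users' -GroupCategory Security -GroupScope Global
Add-ADGroupMember -Identity 'WebJEA-Admins' -Members $env:USERNAME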

Navigate to your WebJEA scripts folder; in my case, I set it up under c:\WebJEA\Scripts:

WebJEA - Scripts

Before we go any further, take a backup of the config.json file: copy it and rename the copy to "config.bak".

I recommend using Visual Studio Code to edit the config.json to help avoid any syntax issues.

Now right click config.json and open it to edit

This file is the glue that holds WebJEA together.

We are going to make a few edits:

  • Feel free to update the Title to match your company or team
  • Add the WebJEA-Admins group created earlier (include the domain name) into the permittedgroups section - this controls access for ALL scripts.

Note the double backslash (\\) required for each path. If you get a syntax error when attempting to load the WebJEA webpage, this is most likely what is missing.

WebJEA - Demo

Save the config file and relaunch the WebJEA webpage. It should now load without prompting for a username and password.

Set the PowerShell execution policy on the machine to Unrestricted so that you can run any PowerShell scripts on it:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope LocalMachine

WebJEA - Demo

If you get an 'AuthorizationManager check failed' error, it is because the PowerShell scripts are still in a blocked state from being downloaded from the internet; run the following command to unblock them, then refresh the WebJEA webpage:

Get-ChildItem -Path 'C:\WebJEA\scripts\' -Recurse | Unblock-File

You now have a base WebJEA install! By default, WebJEA comes with 2 PowerShell files:

  • overview.ps1
  • validate.ps1

You may have noticed these in the config.json file; WebJEA actually runs the overview.ps1 file as soon as the page loads, so you can have one script run before another, which is handy when you need to know the current state of something before taking action.

The validate.ps1 script is an excellent resource to check out the parameter types used to generate the forms.
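As a rough illustration of how the form generation works, a hypothetical script like the one below would surface each parameter as an input field, with the ValidateSet rendered as a fixed set of choices (the script name and service names are made up for the example):

# Restart-ServiceDemo.ps1 - hypothetical WebJEA script; each parameter becomes a form field
[CmdletBinding()]
param (
    [Parameter(Mandatory = $true, HelpMessage = 'Server to target')]
    [string]$ComputerName,

    [Parameter(Mandatory = $true, HelpMessage = 'Service to restart')]
    [ValidateSet('Spooler', 'W32Time', 'BITS')]
    [string]$ServiceName,

    [Parameter(HelpMessage = 'Report the current state only, without restarting')]
    [switch]$ReportOnly
)

if ($ReportOnly) {
    # Windows PowerShell 5.1 supports -ComputerName on Get-Service
    Get-Service -ComputerName $ComputerName -Name $ServiceName
}
else {
    Invoke-Command -ComputerName $ComputerName -ScriptBlock { Restart-Service -Name $using:ServiceName }
}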

Setup Azure Virtual Machine Start/Stop

Now that we have a working WebJEA install, it's time to set up the Azure VM Start/Stop script for this demo.

On the WebJEA server, we need to install the Azure PowerShell modules; run the following in PowerShell as Administrator:

Install-Module Az -Scope AllUsers

Create Service Principal

Once the Az PowerShell modules are installed, we need to set a Service Principal for the PowerShell script to connect to Azure to manage our Virtual Machines.

Run the following PowerShell cmdlet to connect to Azure:

Connect-AzAccount

Now that we are connected to Azure, we need to create the SPN; run the following:

$sp = New-AzADServicePrincipal -DisplayName WebJEA-AzureResourceCreator -Role Contributor
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
$UnsecureSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

You have now created an SPN called: WebJEA-AzureResourceCreator. We now need to grab the Tenant ID; run the following:

Get-AzContext | Select-Object Tenant

Now that we have the SPN and Tenant ID, it's time to test connectivity.

# Login using service principal 
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId

Copy the Tenant ID into the $TenantId variable.

Type:

$sp.ApplicationId

to retrieve the Application ID created with the SPN in the previous step, and add it to the $ApplicationId variable.

Type:

$UnsecureSecret

to retrieve the secret created with the SPN, and add it to the secret string.

Now run the snippet, and you should be successfully connected to Azure.

Create Get-VM script

One of the features of WebJEA is the ability to run scripts on page load, so we will use that to show the current power state of our Azure VMs. In the WebJEA scripts directory, create a new PS1 file called: Get-VM.ps1.

Add the following script to it:

# Login using service principal 
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId
Get-AzVM -Status | Select-Object Name, PowerState, ResourceGroupName

Save the file.

Create Set-VM script

Now, it's time to create the Script to Start/Stop the Virtual Machine. In the WebJEA scripts directory, create a new PS1 file called: Set-VM.ps1

Add the following script to it:

#Variables
[CmdletBinding(SupportsShouldProcess = $True, ConfirmImpact = 'Low')]
param
(
    [Parameter(Position = 1, Mandatory = $true,
        HelpMessage = 'What is the name of the Azure Virtual Machine?')]
    $VMName,
    [Parameter(Position = 2, Mandatory = $true,
        HelpMessage = 'What is the name of the Azure Resource Group that the Virtual Machine is in?')]
    $RGName,
    [Parameter(Position = 3, Mandatory = $true,
        HelpMessage = 'What action do you want to do?')]
    [ValidateSet('Start', 'Stop')]
    $VMAction
)

# Login using service principal
$TenantId = 'TENANTIDHERE'
$ApplicationId = 'APPLICATIONIDHERE'
$Secret = ConvertTo-SecureString -String 'SECRETSTRINGHERE' -AsPlainText -Force
$Credential = [System.Management.Automation.PSCredential]::New($ApplicationId, $Secret)
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId

# Output the current state of the VMs
Get-AzVM -Status | Select-Object Name, PowerState, ResourceGroupName

if ($VMAction -eq "Start")
{
    # Start-AzVM has no -Force parameter; -Confirm:$false suppresses the prompt
    Start-AzVM -Name $VMName -ResourceGroupName $RGName -Confirm:$false
    return
}
elseif ($VMAction -eq "Stop")
{
    Stop-AzVM -Name $VMName -ResourceGroupName $RGName -Confirm:$false -Force
}

Save the file.

Set VM in WebJEA Config

Now that the scripts have been created, it's time to add them to WebJEA to use.

Navigate to your scripts file and make a backup of the config.json file, then edit: config.json

On the line beneath the "onloadscript": "overview.ps1" line, add:

},

Then add in:

{
  "id": "StartStopAzVM",
  "displayname": "StartStop-AzVM",
  "synopsis": "Starts or Stops Azure Based VMs",
  "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
  "script": "Set-VM.ps1",
  "onloadscript": "Get-VM.ps1"
}

So your config.json should look similar to:

config.json

{
  "Title": "Luke Web Automation",
  "defaultcommandid": "overview",
  "basepath": "C:\\WebJEA\\scripts",
  "LogParameters": true,
  "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
  "commands": [
    {
      "id": "overview",
      "displayname": "Overview",
      "synopsis": "Congratulations, WebJEA is now working! We've pre-loaded a demo script that will help you verify everything is working. <br/><i>Tip: You can use the synopsis property of default command to display any text you want. Including html.</i>",
      "permittedgroups": [".\\Administrators"],
      "script": "validate.ps1",
      "onloadscript": "overview.ps1"
    },
    {
      "id": "StartStopAzVM",
      "displayname": "StartStop-AzVM",
      "synopsis": "Starts or Stops Azure Based VMs",
      "permittedgroups": [".\\Administrators", "luke.geek.nz\\WebJEAAdmins"],
      "script": "Set-VM.ps1",
      "onloadscript": "Get-VM.ps1"
    }
  ]
}

Test Azure Virtual Machine Start/Stop

Now that the scripts have been created, open the WebJEA webpage.

Click on the StartStop-AzVM page (it may take a few seconds to load, as it is running the Get-VM script). You should be greeted by a window similar to below:

WebJEA - Demo

Congratulations, you have now set up WebJEA and can Start/Stop any Azure Virtual Machines using self-service!

Additional Notes

  • There is room for improvement around error checking and doing more with the scripts, such as sending an email when a script is triggered, or a reminder to power the server back off.
  • Because most of the configuration is JSON/PowerShell files, you could keep the entire scripts folder in a git repository to make changes, roll back and keep version history.
  • Remove any hard-coded secrets used to connect to Azure (as in the example scripts) and implement a password management tool with API access, or even the Windows Credential Manager (see the sketch after this list). You want a system where you can easily update the passwords of accounts, limit access and prevent anything from being stored in plain text.
  • Using the permittedgroups sections of the config.json file, you can restrict which groups can run which scripts, giving you granular control over who can do what.
  • If you use a normal Active Directory user account as the service account, then for added security, make sure that the WebJEA server is the only device that account can log on to and that it only has the permissions it needs; look at implementing PIM (Privileged Identity Management) for some tasks so it only has access at the time that it needs it.
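As one hedged example of removing the hard-coded secrets, you could store the SPN credential DPAPI-encrypted with Export-Clixml (it can only be decrypted by the same user on the same machine, so create it while logged on as the account the app pool runs under; the file path is illustrative):

# One-off: capture the SPN credential (ApplicationId as the username, secret as the password)
Get-Credential -Message 'WebJEA SPN' | Export-Clixml -Path 'C:\WebJEA\webjea-spn.xml'

# In Get-VM.ps1 / Set-VM.ps1, replace the hard-coded values with:
$Credential = Import-Clixml -Path 'C:\WebJEA\webjea-spn.xml'
Connect-AzAccount -ServicePrincipal -Credential $Credential -TenantId $TenantId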