
Empowering Resilience with Azure backup services

· 23 min read

This article is part of Azure Back to School - 2023 event! Make sure to check out the fantastic content created by the community!


This article covers the basics of the Azure Backup solutions, particularly for Virtual Machines running on Microsoft Azure, along with the many changes from the last year, including immutable vaults, enhanced policies, intelligent tiering, and cross-region restore.

Introduction

Let us start with the basics and a user story; what do we need to achieve?

"As a Cloud Infrastructure Administrator at Contoso, I want to implement an automated backup solution for virtual machines (Windows and Linux) hosted in Microsoft Azure, So that I can ensure data reliability, disaster recovery, and compliance with minimal manual intervention."

With some assumptions around further requirements, we can jump into solutions using native Microsoft Azure services to fulfil the Cloud Administrator's need. It is worth mentioning, especially around disaster recovery, that there is much more you can (and should) do around mission-critical Azure architecture. This article will focus primarily on the data loss portion of disaster recovery with Azure Backup services.

| Requirement | Azure Service(s) Used |
| --- | --- |
| Specific (S): Backup virtual machines in Azure | Azure Backup, Azure Site Recovery (ASR) |
| Measurable (M): Achieve 99% backup success rate | Azure Backup, Azure Monitor |
| Measurable (M): Define and meet RTO (recovery time objective) for critical VMs | Azure Backup, Azure Site Recovery (ASR) |
| Measurable (M): Monitor and optimise storage consumption | Azure Monitor, Microsoft Cost Management |
| Achievable (A): Select and configure Azure-native backup solution | Azure Backup |
| Achievable (A): Configure Azure permissions and access controls | Azure Role-Based Access Control (RBAC) |
| Achievable (A): Define backup schedules and retention policies | Azure Backup, Azure Policy |
| Relevant (R): Align with Azure best practices | Azure Well-Architected Framework, Azure Advisor |
| Relevant (R): Comply with data protection regulations | Azure Compliance Center, Azure Policy |
| Relevant (R): Support disaster recovery and business continuity | Azure Site Recovery (ASR) |
| Time-bound (T): Implement within the next two Azure sprint cycles | Azure DevOps - Boards |
| Time-bound (T): Regular progress reviews during sprint planning | Azure DevOps - Boards |
| Definition of Done (DoD): 1. Select a cost-effective Azure-native backup solution | Azure Backup |
| Definition of Done (DoD): 2. Configure Azure permissions and access controls | Azure Role-Based Access Control (RBAC) |
| Definition of Done (DoD): 3. Define backup policies and RTOs | Azure Backup, Azure Policy |
| Definition of Done (DoD): 4. Monitor and meet 99% backup success rate | Azure Monitor |
| Definition of Done (DoD): 5. Optimize backup storage utilisation | Microsoft Cost Management |
| Definition of Done (DoD): 6. Create backup and recovery documentation | Microsoft Learn documentation |
| Definition of Done (DoD): 7. Train the team to manage and monitor the backup system | Azure Training |
| Definition of Done (DoD): 8. Integrate with Azure monitoring and alerting | Azure Monitor |
| Definition of Done (DoD): 9. Conduct disaster recovery tests | Azure Site Recovery (ASR) |

Note: Azure DevOps - Boards are outside of the scope of this article; the main reflection here is to make sure that your decisions and designs are documented in line with business requirements. There are also some further assumptions we will make, particularly around security and RTO requirements for the organisation of Contoso.

We know that, to fulfil the requirements, we need to implement the following:

So, let us take our notebooks and look at the Backup sections.

Backup Center

When you need a single control plane for your backups across multiple tenants (using Azure Lighthouse), subscriptions, and regions, Backup center is the place to start.

"Backup center provides a single unified management experience in Azure for enterprises to govern, monitor, operate, and analyze backups at scale. It also provides at-scale monitoring and management capabilities for Azure Site Recovery. So, it's consistent with Azure's native management experiences. Backup center is designed to function well across a large and distributed Azure environment. You can use Backup center to efficiently manage backups spanning multiple workload types, vaults, subscriptions, regions, and Azure Lighthouse tenants."

Backup center

As you can see, Backup center can be used to manage:

  • Backup instances
  • Backup policies
  • Vaults
  • Monitor and report on backup jobs
  • Compliance (ie Azure Virtual Machines that are not configured for backup)

You can find the Backup Center directly in the Azure Portal.

Backup center

We can create and manage these resources by ourselves, but throughout this article, we will refer back to the Backup Center, to take advantage of the single pane of glass and integrate these resources.

Create Vault

In Microsoft Azure, there are two types of Vaults that the Backup center works with. These vaults are:

Backup center

Which vault you need to create depends on your requirements (for our purposes, we will need the Recovery Services vault); Backup Center makes it remarkably easy to configure a new vault and select the right vault type by using the wizard.

Please refer to: Support matrix for Azure Backup for further information.

  1. Navigate to Backup Center
  2. Click on Vaults
  3. Click + Vault
  4. Select Recovery Services vault
  5. Select Continue
  6. Specify a location and Resource Group to house your Recovery Services vault
  7. Specify your vault name (abbreviation examples for Azure resources)
  8. Click Next: Vault properties

Immutability: I talked a bit about immutability in another blog article: You Can't Touch This: How to Make Your Azure Backup Immutable and Secure. Essentially, an immutable vault prevents unauthorised changes and restore point deletions. For this article, we will enable it to prevent unintended or malicious data loss (keep in mind that with immutable vaults, reducing the retention of recovery points is not allowed).

  1. Check Enable immutability, and click Next: Networking.
  2. We can join our Recovery Services vault to our private network using private endpoints, forcing Azure Backup and Site Recovery traffic to traverse the private network; for the purposes of this article, we will skip it. Click Next: Tags
  3. Enter Tags (useful tags for a Recovery Services vault could be: Application, Support Team, Environment, Cost Center, Criticality)
  4. Click Review + Create

Create Azure Recovery Services Vault
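
If you prefer to script the vault creation, a minimal PowerShell sketch using the Az.RecoveryServices module is shown below (the resource group, vault name, and region are placeholder values, and the immutability cmdlet assumes a recent module version):

```powershell
# Create a resource group and a Recovery Services vault (placeholder names)
New-AzResourceGroup -Name 'rg-backup' -Location 'australiaeast'

$vault = New-AzRecoveryServicesVault -Name 'rsv-contoso-prod' `
    -ResourceGroupName 'rg-backup' -Location 'australiaeast'

# Enable immutability on the vault. 'Unlocked' can still be disabled later;
# 'Locked' is irreversible, so start with Unlocked until you are confident.
Update-AzRecoveryServicesVault -ResourceGroupName 'rg-backup' `
    -Name 'rsv-contoso-prod' -ImmutabilityState Unlocked
```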

If we navigate back to the Backup Center and then Vaults (under Manage), we will be able to see the newly created vault.

We now have our Backup solution provisioned for the Cloud Administrator to use, but we next need to define the policies for the backup.

Create Backup Policies

Now that we have our Recovery Services vault, we need to create backup policies; these backup policies define the frequency of backups, the retention (daily, weekly, monthly, yearly) and vault tiering, which enables the Recovery Services vault to move recovery points to an archive tier (slower to restore, but can be cheaper overall for those long retention policies).

Backup policies are very organisation-specific and can depend a lot on operational and industry requirements; some industries have a legal obligation to store their backups for a certain number of years (the Azure compliance documentation may help around security and data requirements), so make sure your backup policies are understood by the business you are working with.

For Contoso, we have the following requirements:

| Resource | Daily | Weekly | Monthly | Yearly | Snapshot Retention (Hot) |
| --- | --- | --- | --- | --- | --- |
| Critical Application DB - Prod | 7 days - Every 4 Hours | 4 weeks | 6 months | 7 years | 5 days |
| File Server - Prod | 7 days - Every 4 Hours | 6 weeks | 6 months | 7 years | 5 days |
| Web Application VM - Dev | 20 days | 8 weeks | 12 months | 2 years | 2 days |
| Database Server - Dev | 30 days | 8 weeks | 12 months | 2 years | 2 days |

There are a few things to call out here:

  • We can see that for Development, items need to be retained for 2 years
  • For Production, it's 7 years
  • Snapshots need to be stored for 5 days and 2 days to allow fast restore
  • Production requires a backup to be taken every 4 hours to reduce the RPO (recovery point objective)

Create Azure Recovery Services Vault

If we take a look at the snapshot retention, we can leverage Instant Restore snapshots to restore workloads quickly from the previous 5 days, reducing our RTO (recovery time objective) and the overall impact of an outage or restore. The snapshots are stored locally (as close to the original disk as possible) without waiting for them to move into the archive (slower) tier; this incurs more cost but dramatically reduces restore time. I recommend always keeping a few Instant Restore snapshots available for all production systems.

Snapshot

Let us create the policies (we will only create one policy, but the same process can be used to create the others).

  1. Navigate to Backup Center
  2. Click on Backup policies
  3. Click + Add, then select Azure Virtual Machines
  4. Select the Vault created earlier
  5. Click Continue
  6. As this will be the policy for the Critical Application DB, we will specify: Enhanced (due to the multiple backups, Zone-redundant storage (ZRS) snapshots)
  7. Specify a Policy name, ie Tier-1-Prod-AppDB
  8. Specify Frequency to: Hourly, Schedule to Every 4 Hours, and Duration: 24 Hours
  9. Specify Retain instance recovery snapshots for '5' days
  10. Update Daily Backup point to: 7 days
  11. Configure the Weekly backup point to occur every Sunday and retain for 4 weeks
  12. Configure the Monthly backup point to occur on the first Sunday of the month and retain for 6 months
  13. Configure the yearly backup point to occur on the first Sunday of the year and retain for 7 years
  14. Select enable Tiering, and specify Recommended recovery points
  15. You can also update the Resource Group name used to store the Snapshots.
  16. Click Create

Snapshot
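
The same policy can also be sketched in PowerShell. The snippet below is illustrative only: it starts from the default Enhanced hourly schedule and retention objects, which you would tune to the 4-hourly / 7-day / 4-week / 6-month / 7-year values above before creating the policy, and it reuses the placeholder vault from earlier:

```powershell
# Reference the vault created earlier (placeholder names)
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-backup' -Name 'rsv-contoso-prod'

# Start from the default Enhanced (hourly) schedule and retention objects for Azure VMs,
# then adjust their properties to match the Contoso requirements before creating the policy
$schedule  = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM `
    -PolicySubType Enhanced -ScheduleRunFrequency Hourly
$retention = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM `
    -ScheduleRunFrequency Hourly

New-AzRecoveryServicesBackupProtectionPolicy -Name 'Tier-1-Prod-AppDB' -WorkloadType AzureVM `
    -SchedulePolicy $schedule -RetentionPolicy $retention -VaultId $vault.ID
```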

Note: If you want, you can repeat the same process to create any other policies you need. Remember, with immutable vaults you cannot reduce the retention (but you can add to it), so if you are starting out for the first time, keep the retention low until you have a clear direction of what is required. Multiple workloads can share the same policy, and a Standard (rather than Enhanced) policy may be all you need for Development workloads.

Add Virtual Machines

Now that we have our Recovery Services Vault and custom backup policies, it's time to add our Virtual Machines to the backup! To do this, we can use the Backup center to view Virtual Machines that are not getting backed up, and then configure the backup.

  1. Navigate to Backup Center
  2. Click on Protectable data sources
  3. Click on the ellipsis of a Virtual Machine you want to backup
  4. Click on Backup
  5. Select the appropriate Backup vault and policy
  6. Click Enable backup
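
The same step can be scripted; a hedged sketch, assuming a VM named 'VM1' in a resource group called 'rg-vm' and the placeholder vault and policy created earlier:

```powershell
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-backup' -Name 'rsv-contoso-prod'
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'Tier-1-Prod-AppDB' -VaultId $vault.ID

# Enable backup for the VM using the selected vault and policy
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName 'rg-vm' -Name 'VM1' `
    -Policy $policy -VaultId $vault.ID
```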

Although cross-region restore is now supported on a Recovery Services vault, the second region is read-only (RA-GRS), so make sure you have a Recovery Services vault created in the region (and subscription) of the virtual machines you are trying to protect. Backup center can see all Recovery Services vaults across the multiple regions and subscriptions that you have access to.

Add Virtual Machines

Once added, the Virtual Machine will now get backed up according to the specified policy.

It's worth noting that you can back up a Virtual Machine while it is deallocated, but the backup will be crash-consistent (only the data that already exists on the disk at the time of backup is captured, and a disk check is triggered during recovery) rather than application-consistent, which is more application- and OS-aware and can prepare the OS and applications so that everything is written to the disk ahead of the backup. You can read more about snapshot consistency.

Monitor Backups

Now that we have our Recovery Services Vault, policies and protected items (backed up Virtual Machines), we need to monitor to make sure that the backups are working. Backup center gives us a complete view of Failed, In Progress, and Completed jobs in the overview pane, which is excellent for a quick view of the status across subscriptions and regions.

Azure BackupCenter
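
If you prefer a scripted check, here is a small sketch that lists failed backup jobs from the last week (reusing the placeholder vault names from earlier):

```powershell
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-backup' -Name 'rsv-contoso-prod'

# List failed backup jobs for the last 7 days (the -From time must be in UTC)
Get-AzRecoveryServicesBackupJob -VaultId $vault.ID -Status Failed `
    -From (Get-Date).AddDays(-7).ToUniversalTime() |
    Select-Object WorkloadName, Operation, Status, StartTime, EndTime
```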

But you may want something a bit more detailed; let us look into some of the options for monitoring your backups.

Alerts

As part of operational checks, you may want assurance or a ticket raised if there's an issue with a backup; one of the ways to achieve this is to set up an email alert that will send an email if a backup fails.

By default, these types of alerts are enabled out-of-the-box on a Recovery Services vault; examples can be found here: Azure Monitor alerts for Azure Backup. These are displayed immediately in the Recovery Services vault or Backup center blade.

If a destructive operation, such as stop protection with deleted data is performed, an alert is raised, and an email is sent to subscription owners, admins, and co-admins even if notifications aren't configured for the Recovery Services vault.

| Type | Description | Example alert scenarios | Benefits |
| --- | --- | --- | --- |
| Built-in Azure Monitor alerts (AMP alerts) | These are alerts that will be available out-of-the-box for customers without needing additional configuration by the customer. | Security scenarios like deleting backup data, soft-delete disabled, vault deleted, etc. | Useful for critical scenarios where the customer needs to receive alerts without the possibility of alerts being subverted by a malicious admin. Alerts for destructive operations fall under this category. |
| Metric alerts | Here, Azure Backup will surface backup health-related metrics for customers' Recovery Services vaults and Backup vaults. Customers can write alert rules on these metrics. | Backup health-related scenarios such as backup success alerts, restore success, schedule missed, RPO missed, etc. | Useful for scenarios where customers would like some control over the creation of alert rules but without the overhead of setting up LA or any other custom data store. |
| Custom Log Alerts | Customers configure their vaults to send data to the Log Analytics workspace and write alert rules on logs. | 'N' consecutive failed backup jobs, spike in storage consumed, etc. | Useful for scenarios where there is a relatively complex, customer-specific logic needed to generate an alert. |

Backup alerts are surfaced through Azure Monitor, so under the Azure Monitor Alerts pane you can see all your other alerts, including Azure Backup alerts, from a single pane.

Azure BackupCenter

If you want to configure email notifications for other types of alerts, such as backup failures, we can use Azure Monitor Action Groups and Alert processing rules to be notified without having to log in to the Azure Portal directly, so let us create an email alert.

To do this, we will create an Action Group and Alert Processing rule.

| Component | Description |
| --- | --- |
| Action Group | An Action Group is a collection of actions or tasks that are executed automatically when an alert that matches specific criteria is triggered. Actions can include sending notifications, running scripts, triggering automation, or escalating the alert. Action Groups help streamline incident response and automate actions based on the nature and severity of an alert. |
| Alert Processing Rule | An Alert Processing Rule is a set of conditions and criteria used to filter, categorize, or route incoming alerts within a monitoring or alerting system. These rules enable organizations to define how alerts are processed, prioritize them, and determine the appropriate actions to take when specific conditions are met. Alert Processing Rules are crucial for managing and efficiently responding to alerts. |

  1. Navigate to Backup Center
  2. Click on Alerts
  3. Click on Alert Processing rule
  4. Click + Select Scope
  5. Click All Resource Types, and Filter by: Recovery Services Vault
  6. Select your Recovery Services vault, you would like to alert on
  7. Click Apply
  8. Click on Filter, and change: Alert condition = Fired.
  9. Click Next: Rule Settings
  10. Click Apply action group
  11. Click + Create action group
  12. Select the Subscription and Resource Group to store your action group (e.g. a monitoring resource group)
  13. Give the Action Group a name, and give it a Display name
  14. Specify Notification type (ie Email/SMS message/push/voice)
  15. For this article, we will add an Email (but you can have it ring a number, push a notification to the Azure Mobile App)
  16. Enter in your details, then click Next: Actions
  17. In the Actions pane, you can trigger automation, such as Azure Logic Apps, Runbooks, ITSM connections, Webhooks etc., to help self-remediate issues or provide richer notifications, such as a Logic App that posts in a Teams channel when an alert is fired, or a webhook that triggers a webpage to update. In this example, we will leave it empty, rely on email notifications, and click Next: Tags
  18. Enter any Tags and click Review + create
  19. Make note of Suppress Notifications; this could be handy during scheduled maintenance windows where backups may fail due to approved work.
  20. Once the Action Group has been created, click Next: Scheduling
  21. Select Always
  22. Click Next: Details
  23. Enter a Resource Group for the Alert processing rule to be placed in
  24. Enter a Rule name and description, and click Review + Create
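
If you would rather script the notification side, below is a hedged sketch of creating the email action group with Az.Monitor (the names and email address are placeholders; depending on your Az.Monitor version, the newer New-AzActionGroup cmdlets may apply instead, and the alert processing rule can then be pointed at this group in the portal or via the Az.AlertsManagement module):

```powershell
# Define an email receiver and create (or update) the action group
$emailOps = New-AzActionGroupReceiver -Name 'email-ops' -EmailReceiver `
    -EmailAddress 'backup-alerts@contoso.com'

Set-AzActionGroup -ResourceGroupName 'rg-monitor' -Name 'ag-backup-alerts' `
    -ShortName 'bkpalerts' -Receiver $emailOps
```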

Azure BackupCenter

As you can see, Azure Monitor integration with backups gives you some great options to keep on top of your backups and integrate with other systems, like your IT Service Management toolsets.

Azure Site Recovery

Azure Site Recovery (ASR) can be used to migrate workloads across Availability Zones and regions by replicating the disks of a Virtual Machine to another region (GRS) or zone (ZRS); in fact, Azure Resource Mover uses Azure Site Recovery when moving virtual machines between regions. Azure Site Recovery can also help with migrating workloads from outside of Azure to Azure, and with disaster recovery.

When looking at migrating workloads to Azure from the VMware stack, consider the Azure Site Recovery Deployment Planner for VMware to Azure to assist.

For the purposes of this guide, we will achieve disaster recovery of our virtual machine, by replicating it to another region (i.e. from Australia East to Central India).

Azure Recovery Services contributes to your BCDR strategy: Site Recovery service: Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location, and access apps from there. After the primary location is running again, you can fail back to it. Backup service: The Azure Backup service keeps your data safe and recoverable.

Azure BackupCenter

Just as important as (if not more than) the technology to enable this, clear business requirements and preparation are paramount for a successful disaster recovery solution; I highly recommend the Azure Business Continuity Guide. Supplied by the Microsoft FastTrack team, this guide includes resources to prepare a thorough disaster recovery plan.

The key to successful disaster recovery is not only the workloads themselves but also the supporting services, such as DNS, firewall rules, connectivity, etc., that need to be considered. These are out of the scope of this article, but the following Microsoft Azure architecture references are worth a read:

For Azure Site Recovery to work, it relies on a Mobility service running within the Virtual Machine to replicate changes, so the source virtual machine needs to be powered on for the changes to replicate.

When you enable replication for a VM to set up disaster recovery, the Site Recovery Mobility service extension installs on the VM and registers it with Azure Site Recovery. During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data.

Azure Site Recovery does not currently support virtual machines protected with Trusted Launch.

Enable Azure Site Recovery

For now, we have 'VM1', an Ubuntu workload running in Australia East with a Public IP, that we will fail over to Central India. The source Virtual Machine can be backed up normally by a vault in the source region, and replicated to another vault in the destination region.

Azure Site Recovery has specific Operating System and Linux kernel support. Make sure you confirm that your workloads are supported.

  1. Navigate to Backup Center
  2. Click on Vaults
  3. Create a new Recovery Services vault in your DR (Disaster Recovery region - ie Central India)
  4. Click on Site Recovery
  5. Under Azure Virtual Machines, click on: Enable replication
  6. Specify the source Virtual Machine you wish to migrate
  7. Click Next
  8. Select your source Virtual Machine
  9. Click Next
  10. Select the target location (i.e. Central India)
  11. Select the target Resource Group
  12. Select the Target Virtual Network (create one if it doesn't exist)
  13. Select the target subnet
  14. Under the Storage, you can consider changing the replica disk to Standard, to reduce cost (this can be changed later).
  15. Select a cache storage account (the cache storage account is a storage account used for transferring the replication data before it's written to the destination disk)
  16. You can then adjust the availability zone of the destination virtual machine
  17. Click Next
  18. Here we can define a Replication Policy: a set of rules and configurations that determine how data is replicated from the source environment to the target environment (Azure) in case of a disaster or planned failover, including the retention period within which you can restore to a point in time. We will leave the default 24-hour retention policy.
  19. We can specify a Replication Group; an example of a replication group is a set of application servers that need to be consistent with each other in terms of data.
  20. Specify an automation account to manage the mobility service, and we will leave the update extension to be ASR (Azure Site Recovery) managed.
  21. Click Next
  22. Click Enable replication
  23. At the Recovery Services Vault, under Site Recovery Jobs, you can monitor the registration; registration and initial replication can take 30-60 minutes to install the agent and start replicating.
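
Once replication is enabled, the same Site Recovery jobs can be checked from PowerShell; a small sketch, assuming the DR vault created in step 3 uses placeholder names:

```powershell
# Point the ASR cmdlets at the DR vault, then list recent Site Recovery jobs
$drVault = Get-AzRecoveryServicesVault -ResourceGroupName 'rg-backup-dr' -Name 'rsv-contoso-dr'
Set-AzRecoveryServicesAsrVaultContext -Vault $drVault

Get-AzRecoveryServicesAsrJob | Select-Object DisplayName, State, StartTime, EndTime
```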

Azure BackupCenter

Failover to the secondary region using Azure Site Recovery

Once your virtual machine has been replicated to the secondary region, you can do a Failover or a Test failover. A Test failover is recommended for your DR and application testing.

Azure BackupCenter

| Aspect | Failover | Test Failover |
| --- | --- | --- |
| Purpose | To switch to a secondary site during a disaster or planned maintenance event. | To validate your disaster recovery plan without impacting production. |
| Impact on Production | Disrupts production services as the primary site becomes unavailable during the failover process. | No impact on production services; the primary site remains operational. |
| Data Replication | Replicates data from primary to secondary site, making it the active site during the failover. | Uses the same replicated data but doesn't make the secondary site the active site; it's for testing purposes only. |
| Recovery Time | Longer recovery time, as it involves setting up and activating the secondary site. | Faster recovery time, as it doesn't require making the secondary site the active site. |
| Data Consistency | Ensures data consistency and integrity during the failover process. | Ensures data consistency for testing but doesn't make the secondary site the primary site. |
| Cost | May incur additional costs due to the resources activated at the secondary site. | Typically incurs minimal additional costs as it's for testing purposes. |
| Use Cases | Actual disaster recovery scenarios or planned maintenance events. | Testing and validating disaster recovery procedures, training, and compliance. |
| Post-Operation | The secondary site becomes the new primary site until failback is initiated. | No change to the primary site; the secondary site remains inactive. |
| Rollback Option | Failback operation is required to return to the primary site once it's available. | No need for a rollback; the primary site remains unaffected. |

  1. Navigate to your destination Recovery Services Vault
  2. Click on Replicated items
  3. Select the Virtual Machine you wish to recover in your second region
  4. Select Test Failover (or Failover, depending on your requirements)
  5. Select your Recovery point and destination Virtual network
  6. Select Failover
  7. If it is a test failover, you can then clean up your test failover (which deletes the test resources) after you have tested
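
A test failover can also be driven from PowerShell; the sketch below is illustrative only (the fabric, container, and network lookups depend on your environment, and the vault context is assumed to be set as in the earlier snippet):

```powershell
# Locate the replicated item for 'VM1' in the DR vault
$fabric    = Get-AzRecoveryServicesAsrFabric | Select-Object -First 1
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric | Select-Object -First 1
$vm        = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container `
    -FriendlyName 'VM1'

# Start a test failover into an isolated virtual network, then clean it up after testing
$testVnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-dr-network' -Name 'vnet-dr-test'
Start-AzRecoveryServicesAsrTestFailoverJob -ReplicationProtectedItem $vm `
    -Direction PrimaryToRecovery -AzureVMNetworkId $testVnet.Id

Start-AzRecoveryServicesAsrTestFailoverCleanupJob -ReplicationProtectedItem $vm
```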

Azure BackupCenter

Azure Policies

Mapping Virtual Machines to backup policies can be done automatically using Azure Policy.

Azure policies such as:

  • Azure Backup should be enabled for Virtual Machines
  • Configure backup on virtual machines without a given tag to an existing recovery services vault in the same location
  • Disable Cross Subscription Restore for Backup Vaults
  • Soft delete should be enabled for Backup Vaults

More are built into the Azure Policy engine and can be easily assigned across subscriptions and management groups, right from the Backup Center.

  1. Navigate to Backup Center
  2. Click on Azure policies for backup
  3. Click on a policy and click Assign
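
Assignments can also be scripted. The sketch below assigns the built-in 'Configure backup on virtual machines without a given tag...' policy with PowerShell; treat it as an illustration only: the scope, backup policy ID, and tag values are placeholders, the parameter names should be checked against the definition's parameters before assigning, and because it is a DeployIfNotExists policy it needs a managed identity and location:

```powershell
# Find the built-in policy definition by display name
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -like 'Configure backup on virtual machines without a given tag*'
}

# Assign it at subscription scope with a system-assigned identity
New-AzPolicyAssignment -Name 'enable-vm-backup' -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition -IdentityType SystemAssigned -Location 'australiaeast' `
    -PolicyParameterObject @{
        vaultLocation     = 'australiaeast'
        backupPolicyId    = '<resource-id-of-the-backup-policy>'
        exclusionTagName  = 'NoBackup'
        exclusionTagValue = @('true')
    }
```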

You can find a list of custom and built-in policies at the AzPolicyAdvertizerPro website.

Azure AutoManage

Azure Automanage can be used alongside Azure Policy to onboard Virtual Machines into backup, patching, etc., automatically, with reduced manual intervention; although not directly part of this article, what you have learned can be used to develop your Automanage profiles.

Get Ahead with Self-Hosted Agents and Container Apps Jobs

· 23 min read

When considering build agents to use in Azure DevOps (or GitHub), there are 2 main options to consider:

| Agent type | Description |
| --- | --- |
| Microsoft-hosted agents | Agents hosted and managed by Microsoft |
| Self-hosted agents | Agents that you configure and manage, hosted on your VMs |

Microsoft-hosted agents can be used for most things, but there are times when you may need to talk to internal company resources, or security is a concern, which is when you would consider self-hosting the agent yourself.

Azure Bicep Deployment with Deployment Stacks

· 14 min read

Deployment Stacks! What is it? insert confused look

Maybe you have been browsing the Microsoft Azure Portal and noticed a new section in the management blade called: Deployment stacks and wondered what it was, and how you can use it.

Let us take a look!

Before we get started, it's worth noting that as of the time of this article, this feature is in Public Preview. Features or ways of working with Deployment Stacks may change when it becomes generally available. If you run into issues, make sure you have a look at the current known issues.

Automate your Azure Bicep deployment with ease using Deployment Stacks

Overview

Azure Deployment Stacks are a type of Azure resource that allows you to manage a group of Azure resources as an atomic unit.

When you submit a Bicep file or an ARM template to a deployment stack, it defines the resources that are managed by the stack.

You can create and update deployment stacks using Azure CLI, Azure PowerShell, or the Azure portal along with Bicep files. These Bicep files are transpiled into ARM JSON templates, which are then deployed as a deployment object by the stack.

Deployment stacks offer additional capabilities beyond regular deployment resources, such as simplified provisioning and management of resources, preventing undesired modifications to managed resources, efficient environment cleanup, and the ability to utilize standard templates like Bicep, ARM templates, or Template specs.

When planning your deployment and determining which resource groups should be part of the same stack, it's important to consider the management lifecycle of those resources, which includes creation, updating, and deletion. For instance, suppose you need to provision some test VMs for various application teams across different resource group scopes.

Comparisons

Before we dig into it further, it may help to compare Deployment Stacks with similar products that may come to mind, such as:

  • Azure Blueprints
  • Bicep (on its own)
  • Template Specs
  • Terraform

| Feature | Deployment Stacks | Azure Blueprints | Using Bicep | Template Specs | Terraform |
| --- | --- | --- | --- | --- | --- |
| Management of Resources | Manages a group of Azure resources as an atomic unit. | Defines and deploys a repeatable set of Azure resources that adhere to organizational standards. | Defines and deploys Azure resources using a declarative language. | Defines and deploys reusable infrastructure code using template specs. | Defines and provisions infrastructure resources across various cloud providers using a declarative language. |
| Resource Definition | Bicep files or ARM JSON templates are used to define the resources managed by the stack. | Blueprint artifacts, including ARM templates, policy assignments, role assignments, and resource groups, are used to define the blueprint. | Bicep files are used to define the Azure resources. | Template specs are used to define reusable infrastructure code. | Terraform configuration files are used to define the infrastructure resources. |
| Access Control | Access to the deployment stack can be restricted using Azure role-based access control (Azure RBAC). | Access to blueprints is managed through Azure RBAC. | Access to Azure resources is managed through Azure RBAC. | Access to template specs is managed through Azure RBAC. | Access to cloud resources is managed through provider-specific authentication mechanisms. |
| Benefits | Simplified provisioning and management of resources as a cohesive entity. Preventing undesired modifications to managed resources. Efficient environment cleanup. Utilizing standard templates such as Bicep, ARM templates, or Template specs for your deployment stacks. | Rapidly build and start up new environments with organizational compliance. Built-in components for speeding up development and delivery. | Easier management and deployment of Azure resources. Improved readability and understanding of resource configurations. | Publish libraries of reusable infrastructure code. | Infrastructure-as-Code approach for provisioning resources across multiple cloud providers. |
| Deprecation | N/A | Azure Blueprints (Preview) will be deprecated. | N/A | N/A | N/A |

It is always recommended to refer to the official documentation for the most up-to-date and comprehensive information. The comparison table above was created with the help of AI.

It is hard to do a complete comparison, as always 'it depends' on your use cases and requirements, but hopefully this makes it clear where Deployment Stacks come into play (they do not replace Bicep but work with it for better governance), with out-of-the-box benefits such as:

  • Simplified provisioning and management of resources across different scopes as a cohesive entity.
  • Preventing undesired modifications to managed resources through deny settings.
  • Efficient environment cleanup by employing delete flags during deployment stack updates.
  • Utilizing standard templates such as Bicep, ARM templates, or Template specs for your deployment stacks.

The key here is that Azure Deployment Stacks are a native way to treat your infrastructure components as an atomic unit or stack, so you manage the lifecycle of the resources as a whole rather than every resource separately.

Using Deployment Stacks

Deployment stacks require Azure PowerShell (version 10.1.0 or later) or Azure CLI (version 2.50.0 or later).

For the purposes of this article, I will be using PowerShell.

PowerShell

Once you have the latest Azure PowerShell modules, it's time to take a look at the cmdlets that are offered to us for Deployment Stacks.

Open your PowerShell terminal and type in:

Get-Command -Name *DeploymentStack*

Get-Command -Name DeploymentStack

As you can see, there is a range of cmdlets for us to work with.

For the purposes of this article, I will be using a Bicep file I already have on hand (unmodified for Deployment Stacks). This Bicep file will create:

  • 2 Virtual Networks
  • 4 Subnets (2 subnets in each Virtual Network)
  • 4 NSGs (and assign to each subnet, with Deny All rules)
  • Then finally, peer the virtual networks.

This is the Bicep file:

main.bicep
@description('Name of the virtual network.')
param vnetName string = 'myVnet'

@description('Name of the first subnet.')
param subnet1Name string = 'subnet1'

@description('Name of the second subnet.')
param subnet2Name string = 'subnet2'

@description('Name of the first network security group.')
param nsg1Name string = 'nsg1'

@description('Name of the second network security group.')
param nsg2Name string = 'nsg2'

@description('Name of the second virtual network.')
param vnet2Name string = 'myVnet2'

@description('Name of the third subnet.')
param subnet3Name string = 'subnet3'

@description('Name of the fourth subnet.')
param subnet4Name string = 'subnet4'

@description('Name of the third network security group.')
param nsg3Name string = 'nsg3'

@description('Name of the fourth network security group.')
param nsg4Name string = 'nsg4'

@description('Location for all resources.')
param location string = resourceGroup().location

resource vnet 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
  }
}

resource subnet1 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  parent: vnet
  name: subnet1Name
  properties: {
    addressPrefix: '10.0.1.0/24'
    networkSecurityGroup: {
      id: resourceId('Microsoft.Network/networkSecurityGroups', nsg1Name)
    }
  }
}

resource subnet2 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  parent: vnet
  name: subnet2Name
  properties: {
    addressPrefix: '10.0.2.0/24'
    networkSecurityGroup: {
      id: resourceId('Microsoft.Network/networkSecurityGroups', nsg2Name)
    }
  }
}

resource nsg1 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: nsg1Name
  location: location
  properties: {
    flushConnection: false
    securityRules: [
      {
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4096
          access: 'Deny'
          direction: 'Inbound'
          destinationPortRange: '*'
          protocol: 'Tcp'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          sourceAddressPrefix: '*'
        }
      }
    ]
  }
}

resource nsg2 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: nsg2Name
  location: location
  properties: {
    flushConnection: false
    securityRules: [
      {
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4096
          access: 'Deny'
          direction: 'Inbound'
          destinationPortRange: '*'
          protocol: 'Tcp'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          sourceAddressPrefix: '*'
        }
      }
    ]
  }
}

resource vnet2 'Microsoft.Network/virtualNetworks@2023-04-01' = {
  name: vnet2Name
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.1.0.0/16'
      ]
    }
  }
}

resource subnet3 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  parent: vnet2
  name: subnet3Name
  properties: {
    addressPrefix: '10.1.1.0/24'
    networkSecurityGroup: {
      id: resourceId('Microsoft.Network/networkSecurityGroups', nsg3Name)
    }
  }
}

resource subnet4 'Microsoft.Network/virtualNetworks/subnets@2023-04-01' = {
  parent: vnet2
  name: subnet4Name
  properties: {
    addressPrefix: '10.1.2.0/24'
    networkSecurityGroup: {
      id: resourceId('Microsoft.Network/networkSecurityGroups', nsg4Name)
    }
  }
}

resource nsg3 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: nsg3Name
  location: location
  properties: {
    flushConnection: false
    securityRules: [
      {
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4096
          access: 'Deny'
          direction: 'Inbound'
          destinationPortRange: '*'
          protocol: 'Tcp'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          sourceAddressPrefix: '*'
        }
      }
    ]
  }
}

resource nsg4 'Microsoft.Network/networkSecurityGroups@2023-04-01' = {
  name: nsg4Name
  location: location
  properties: {
    flushConnection: false
    securityRules: [
      {
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4096
          access: 'Deny'
          direction: 'Inbound'
          destinationPortRange: '*'
          protocol: 'Tcp'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          sourceAddressPrefix: '*'
        }
      }
    ]
  }
}

resource vnetPeering 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-04-01' = {
  parent: vnet
  name: vnet2Name
  properties: {
    remoteVirtualNetwork: {
      id: vnet2.id
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: false
    allowGatewayTransit: false
    useRemoteGateways: false
  }
  dependsOn: [
    subnet1
    subnet3
  ]
}

resource vnetPeering2 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-04-01' = {
  parent: vnet2
  name: vnetName
  properties: {
    remoteVirtualNetwork: {
      id: vnet.id
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: false
    allowGatewayTransit: false
    useRemoteGateways: false
  }
  dependsOn: [
    subnet2
    subnet4
    vnetPeering
  ]
}

I have already deployed a new Resource Group to deploy our virtual network into:

New-AzResourceGroup -Name 'rg-network' -Location 'Australia East'

So let us create our first Deployment Stack!

New-AzResourceGroupDeploymentStack

The 'New-AzResourceGroupDeploymentStack' cmdlet is the first one we will look into.

Let us look at the most common syntax that you may use:

New-AzResourceGroupDeploymentStack -Name "<deployment-stack-name>" -TemplateFile "<bicep-file-name>" -DeploymentResourceGroupName "<resource-group-name>" -DenySettingsMode "none"

| Parameter | Description |
| --- | --- |
| -Name | Specifies the name of the deployment stack. |
| -Location | Specifies the Azure region where the deployment stack will be created. This is valid for subscription-based deployment stacks. |
| -TemplateFile | Specifies the Bicep file that defines the resources to be managed by the deployment stack. |
| -DeploymentResourceGroupName | Specifies the name of the resource group where the managed resources will be stored. |
| -DenySettingsMode | Specifies the operations that are prohibited on the managed resources to safeguard against unauthorized deletion or updates. Possible values include "none", "DenyDelete", "DenyWriteAndDelete". |
| -DeleteResources | Deletes the managed resources associated with the deployment stack. |
| -DeleteAll | Deletes all deployment stacks and their associated resources. |
| -DeleteResourceGroups | Deletes the resource groups associated with the deployment stacks. |

These parameters allow you to customize the creation and management of deployment stacks.

The DenySettingsMode parameter is used in Azure Deployment Stacks to assign specific permissions to managed resources, preventing their deletion by unauthorized security principals. This is a key differentiator compared to some of the other solutions mentioned earlier, but it does mean you need to think about how your resources will be managed, so let us take a look at DenySettingsMode a bit deeper.

The DenySettingsMode parameter accepts different values to define the level of deny settings. Some of the possible values include:

  • "none": No deny settings are applied, allowing all operations on the managed resources.
  • "DenyDelete": Denies the delete operation on the managed resources, preventing their deletion.
  • "DenyWriteAndDelete": Denies all operations on the managed resources, preventing any modifications or deletions.

By specifying the appropriate DenySettingsMode value, you can control the level of permissions and restrictions on the managed resources within the deployment stack.

For our testing, we will deploy our Azure Virtual Networks and NSGs to a new Deployment Stack, using the DenyDelete DenySettingsMode.

$RGName = 'rg-network'
$DenySettings = 'DenyDelete'
$BicepFileName = 'main.bicep'
$DeploymentStackName = 'NetworkProd'

New-AzResourceGroupDeploymentStack -Name $DeploymentStackName -TemplateFile $BicepFileName -ResourceGroupName $RGName -DenySettingsMode $DenySettings -DenySettingsApplyToChildScopes

New-AzResourceGroupDeploymentStack

As you can see, creating a new Azure Deployment Stack is easy, with no adjustments to the underlying Bicep configuration needed.

Note: If you get an error that the cmdlet is missing the -Name parameter, make sure that the -ResourceGroupName parameter has been added.

If we navigate to the Azure Portal, we can see the Deployment Stack natively, including the stack properties, such as the action taken when resources are removed and the deny settings mode (denyDelete).

New-AzResourceGroupDeploymentStack

Testing Deny-Assignment

As we deployed our virtual networks using the DenyDelete assignment, let's attempt to delete a Network Security Group; before we do that, we need to dissociate it from the subnet.

Note: It's worth noting that my permissions are: Owner.

When I attempted to delete a Network Security Group, I got the error below:

Failed to delete network security group 'nsg1'. Error: The client '************' with object id 'cb059544-e63c-4543-930f-4b6e6b7aece1' has permission to perform action 'Microsoft.Network/networkSecurityGroups/delete' on scope 'rg-network/providers/Microsoft.Network/networkSecurityGroups/nsg1'; however, the access is denied because of the deny assignment with name 'Deny assignment '55ebfe82-255d-584a-8579-0e0c9f0219ff' created by Deployment Stack '/subscriptions/f0ee3c31-ff51-4d47-beb2-b1204a511f63'.

Azure Deployment Stack - Delete Resource Test

To delete the resource, I would need to do one of the following:

  • Delete the Deployment Stack (and detach the resources and delete it manually)
  • Delete the Deployment Stack (and delete all the resources)
  • Remove it from the Bicep code and update the deployment stack.
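
For the first two options, the Remove-AzResourceGroupDeploymentStack cmdlet exposes the same delete behaviour as the flags listed in the parameter table earlier; a hedged sketch using the preview-era parameters:

```powershell
# Delete only the stack object, detaching (but keeping) the resources it manages
Remove-AzResourceGroupDeploymentStack -Name 'NetworkProd' -ResourceGroupName 'rg-network'

# Or delete the stack together with all of the resources it manages
Remove-AzResourceGroupDeploymentStack -Name 'NetworkProd' -ResourceGroupName 'rg-network' -DeleteAll
```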

Note: In our testing, we were able to disassociate the Network Security Group from the subnet because the deployment stack was deployed with the DenyDelete mode, not DenyWriteAndDelete.

Redeploy - Deployment Stack (Portal)

Using the Azure Portal, we can edit and re-deploy our existing Deployment stack if we have changes or resources that we may want to roll back:

Azure Deployment Stack - Delete Resource Test

Redeploy - Deployment Stack (Bicep)

What if we want to make further changes, such as removing resources from our Deployment Stack?

In this example, we will modify our Bicep code to remove the second virtual network, its subnets, and the associated NSGs (Network Security Groups), and remove the resources from Azure completely (we could instead detach them, which removes them from being managed by the deployment stack, but I want my virtual network resources to be managed completely by Bicep).

We could use the Save-AzResourceGroupDeploymentStackTemplate cmdlet to save the Deployment Stack to an ARM template if we wanted to deploy it later.
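
For example, a quick sketch of exporting the current stack's template before making changes:

```powershell
# Export the template last deployed by the 'NetworkProd' stack so it can be redeployed later
Save-AzResourceGroupDeploymentStackTemplate -Name 'NetworkProd' -ResourceGroupName 'rg-network'
```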

Note: In the bicep code example supplied earlier I removed everything after NSG2.

We will run the Set-AzResourceGroupDeploymentStack, pointing to the modified bicep code:

$RGName = 'rg-network'
$DenySettings = 'DenyWriteAndDelete'
$BicepFileName = 'main.bicep'
$DeploymentStackName = 'NetworkProd'

Set-AzResourceGroupDeploymentStack -Name $DeploymentStackName -ResourceGroupName $RGName -TemplateFile $BicepFileName -DenySettingsMode $DenySettings -DeleteResources -Verbose -DenySettingsApplyToChildScopes

In this example, we tell Deployment Stacks to Delete Resources that are no longer part of the stack, and this time we will add the Verbose flag, so we can see what it is doing.

Azure Deployment Stack - Delete Resource Test

Note: I cut the gif, that's why the timestamps don't match; otherwise, you would be spending 10 minutes staring at the verbose output.

If we navigate to the Azure Portal, we can see the deleted resources listed in the Deployment stack history (which only displays the latest deployment stack changes rather than keeping a history of everything), and the resource unmanage action has changed to: delete.

Azure Deployment Stack - Delete Resource Test

Note: A manually created Virtual Network in the same Resource Group (but not part of the deployment stack) remained untouched.

I forgot to update the DenySettings variable, so I re-deployed with 'DenyWriteAndDelete' instead of 'DenyDelete'; as a result, I was unable to disassociate my Network Security Group.

Azure Deployment Stack - Delete Resource Test

Permissions

I have 'Owner' rights over my own demo subscriptions, so a bit more flexibility than I would have in a Production environment.

You can add exclusions to your Deployment Stack, allowing certain principals or actions to be completed.

You could also create a custom role (around Microsoft.Resources/deploymentStacks) to allow reading, updating, or deleting deployment stacks. This gives you the flexibility to let people modify their own stacks and redeploy without needing access to other tooling, enabling self-service scenarios such as giving someone a deployment stack whose resources they can delete and redeploy later, straight from the Azure Portal, when required for testing.

Azure Bicep - Deploy Pane

· 2 min read

Working with Azure Bicep in Visual Studio Code is as native an experience as you can get, made even better by the Bicep Visual Studio Code extension.

The Bicep Visual Studio Code extension keeps evolving, with a recent (Experimental) feature being added called the Deploy Pane.

The Deploy Pane is a UI panel in VS Code that allows you to connect to your Azure subscription and execute validate, deploy, and what-if operations, getting instant feedback without leaving the editor.

Azure Bicep - Deploy Pane

The Deploy Pane brings together some key actions:

  • Deploy
  • Validate
  • What-If

The Deploy step will deploy the Bicep file using the Subscription scope and ID specified in the pane. The Validate step will validate that the Bicep syntax is correct for Azure Resource Manager to process the template. The What-If step will let you know what it will deploy and what changes will be made, without deploying or touching any resources.
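
For comparison, the same validate and what-if checks can also be run from a terminal with Azure PowerShell; a small sketch reusing the rg-network resource group from earlier (the file path is a placeholder):

```powershell
# Preview what the Bicep file would change, without deploying anything
New-AzResourceGroupDeployment -ResourceGroupName 'rg-network' -TemplateFile './main.bicep' -WhatIf

# Validate the template against Azure Resource Manager
Test-AzResourceGroupDeployment -ResourceGroupName 'rg-network' -TemplateFile './main.bicep'
```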

To enable the new Experimental Feature, make sure you are running the latest version of both Bicep, and the Bicep Visual Studio Code extension.

  1. Click on Settings
  2. Expand Extensions
  3. Navigate to: Bicep
  4. Check the box labelled: Experimental: Deploy Pane

Azure Bicep - Deploy Pane

Once enabled, you will see the new Deploy Pane appear in the top right of your Visual Studio Code interface, next to the Bicep Visualizer, once you have a Bicep file loaded.

Azure Bicep - Deploy Pane

If you have any feedback regarding this extension, make sure to add your feedback to the azure/bicep issues

Coding on the Cloud - Getting Started with GitHub Codespaces

· 17 min read

GitHub Codespaces gives you the full capability of your favourite IDE (Integrated Development Environment), like Visual Studio Code, Jupyter, or JetBrains, extended to your web browser. With it, you can develop your applications without needing any dependent service or tool installed or configured locally, giving developers a standard way of working on applications and scripts.

Github Codespaces - Getting Started

GitHub Codespaces does this by leveraging the power of the cloud and GitHub to run containers that you can personalise with your IDE, extensions, and any dependencies you may need. Whether you are a developer needing Python, Apache, or React, or a DevOps engineer requiring Bicep and Terraform support, Codespaces is an ideal enabler for our toolkit; in fact, this article was written in a GitHub Codespace, using Visual Studio Code and Markdown extensions.

A codespace is a development environment that's hosted in the cloud. You can customize your project for GitHub Codespaces by configuring dev container files to your repository (often known as Configuration-as-Code), creating a repeatable codespace configuration for all project users. GitHub Codespaces run on various VM-based compute options hosted by GitHub.com, which you can configure from 2-core machines up to 32-core machines. You can connect to your codespaces from the browser or locally using an IDE like Visual Studio Code or IntelliJ.

Let's delve into Github Codespaces a bit more!

Introduction

GitHub Codespaces is a cloud-based development environment provided by GitHub, designed to enhance the coding experience and streamline collaboration for developers. It allows you to create, manage, and access your development environments directly from your web browser. With GitHub Codespaces, you can code, build, test, and debug your projects without setting up complex local development environments on your machine.

Github Codespaces

Key features and benefits of GitHub Codespaces include:

  • Access Anywhere: You can access your coding environment from anywhere with an internet connection. This is particularly useful for remote work, collaboration, and coding on the go.
  • Consistency: Codespaces ensure consistency across development environments, which can help avoid the "it works on my machine" issue often encountered in software development.
  • Collaboration: Multiple developers can collaborate on the same Codespace simultaneously, making it easier to pair program, review code, and troubleshoot together in real-time.
  • Isolation: Each project or repository can have its own isolated Codespace, preventing conflicts between dependencies and configurations.
  • Quick Setup: Setting up a development environment is quick and easy. You don't need to spend time installing and configuring software locally.
  • Configurability: Codespaces can be customized with extensions, tools, and settings to match your preferred development environment.
  • Scalability: GitHub Codespaces can scale according to your needs, making it suitable for individual developers and larger teams.
  • Version Control Integration: Codespaces is tightly integrated with GitHub repositories, making it seamless to switch between your code and the development environment.
  • Security: Codespaces offer a secure environment for development, as it doesn't store sensitive data and is protected by GitHub's security measures.
  • Project Setup: Codespaces can be configured to automatically set up a project with required dependencies and tools, reducing the time needed to get started.

GitHub Codespaces went into general availability in August 2021 and is built on top of the devcontainers schema.

Prerequisites

You need a Github account to use GitHub Codespaces.

GitHub Codespaces are available for developers in every organization. However, organizations can choose whether codespaces created from their repositories will be user-owned or organization-owned. All personal GitHub.com accounts include a monthly quota of free usage each month.

GitHub will provide users in the Free plan 120 core hours, or 60 hours of run time on a two core codespace, plus 15 GB of storage each month.

For further pricing information, make sure you review:

Pricing, features and offerings could change at any time. For the most up-to-date information, make sure you review the documentation on github.com.

To use GitHub Codespaces, you need an active repository; by default, Github Codespaces is configured for the repository you set.

You will also need a supported browser (the latest versions of Chrome, Edge, Firefox, or Safari are recommended) to view your IDE; in this article, we will be using Visual Studio Code.

Setting Up GitHub Codespaces

Github Codespaces can be accessed directly from the GitHub interface.

  1. Navigate to a new repository
  2. Click Code
  3. Click + in the Codespaces tab to open a new Codespace on your repo; by default, a Visual Studio Code instance will open in your browser. Note the 'funky' name and URL that are created to give you a unique container.

Note: Don't worry; nothing is saved to your repository unless you want to commit any changes.

Your Codespace is now started and running in a default GitHub-supplied development container.

A development container is a running Docker container with a well-defined tool/runtime stack and its prerequisites.

Github Codespaces - Run

Exploring the Interface

Once you have your Codespace running, you have access to most native Visual Studio Code capabilities and all the files in your repository.

Github Codespaces - Overview

We now have our workspace, consisting of Visual Studio Code running in our own Docker container! The Host field (lower left) indicates that you are running in a Codespace.

Out of the box, Visual Studio Code has Git integration, so you can easily commit any changes to the repository as you would if you were working from your local development workstation. This is critical to remember when making a change to your devcontainer configuration: you have to commit it before you rebuild, or you will lose your configuration (we will get to this later in the article).

As it runs in a hosted container, you can switch easily between computers and browsers by opening the Codespace (the same way you created it, but instead selecting an already running instance) or by copying the URL of your Codespace and logging back into GitHub on another computer to go directly to the container instance:

Github Codespaces - Run

If you leave your Codespace running without interaction, or if you exit your codespace without explicitly stopping it, the codespace will time out after a period of inactivity and stop running. You can adjust the timeout of your codespace to a maximum of 240 minutes (4 hours) for new Codespaces, but keep in mind you will be charged unless the Codespace is stopped. If the Codespace remains inactive for some time, it could be deleted; you should get an email notification before this happens, but I suggest keeping an eye on your Codespace and ensuring it's only running when needed.

Warning: Codespaces compute usage is billed for the duration a codespace is active. If you're not using a codespace that remains running and hasn't yet timed out, you are billed for the total time that the codespace was active, irrespective of whether you were using it. For more information, see "About billing for GitHub Codespaces."

As with any Visual Studio Code instance, you can also log in to your GitHub account to pull your settings and extensions, but to keep things clean and distraction-free, you can customize your Codespace instead for only what you or others working in the same repository need.

Customizing Your Codespace

You can customize your Codespace to suit the project you are working on; some examples I use personally are:

  • Markdown editing (for example, my website is hosted on GitHub Pages and the formatting is done using Markdown, so I have a devcontainer preconfigured to run Markdown Visual Studio Code extensions and linting; as soon as I open it, it's good to go!)
  • Infrastructure as Code development (I have a preconfigured devcontainer that has the latest versions of PowerShell, Terraform, and Bicep installed, plus the relevant Visual Studio Code extensions)

I used to install everything locally, to the point where I would be reinstalling Windows every few months. To keep my device clean, I moved to an Azure Virtual Desktop host as my development environment, but Codespaces now gives me the flexibility to install what I need (when I need it) within a Linux environment, and I know that when I rebuild the Codespace, I will have the latest libraries.

There is a lot of customisation you can do; we won't be able to cover all possible customisations in this article, but I aim to cover the basics to get you up and running!

Setting Sync

Before delving into some of the customisation of the devcontainer configuration itself, let us remember the Visual Studio Code settings sync.

If you are someone who works on the same products and services and has invested time in configuring Visual Studio profiles, there's nothing indicating that you can't use this in a Github Codespace, especially if it is a trusted repository.

You will already be logged into Visual Studio Code with your GitHub account; you can turn on Setting Sync to have your Visual Studio Code settings and profiles sync straight into your devcontainer.

Github Codespaces - Setting Sync

One of the downsides of this method is the container can get bloated with extensions and configurations you don't need, and you will have to turn on Setting Sync each time a Codespace is launched.

Settings Sync is an easy way to import your configuration from your desktop into the Codespace.

Codespace templates

Instead of spending time developing your own template, you may find that a devcontainer template already exists for your use case; some examples include:

  • Ruby on Rails
  • React
  • Jupyter Notebooks, and more.

These Codespace templates can easily be launched from the web browser and are a great resource for testing the power of Codespaces, and to refer to when customising your own devcontainer.

See devcontainers/template-starter and devcontainers/feature-starter for information on creating your own!

Devcontainers

Within each customised Codespace is a "devcontainer.json" file, and some containers will also have a dockerfile.

These files sit inside a /.devcontainer/ folder at the root of your git repository. It is worth noting that you can have multiple devcontainer files within a single repository; you will be prompted to choose which one to use when you start the Codespace.

These files are crucial to customising your devcontainer.

Although they serve different purposes, they can work standalone or together to create a consistent and reproducible development environment for your project.

  • devcontainer.json - configures how your development environment is set up within the Docker container when using the Remote - Containers extension.
  • dockerfile - defines the environment you need for your project. When you create a Codespace, GitHub will use the specified Dockerfile to build a container image that includes all the tools, libraries, and configurations required to work on your project.

When you open your project in a GitHub Codespace that uses a devcontainer.json file, Visual Studio Code will automatically detect the configuration and set up your development environment according to the specified settings.

If your configuration includes a dockerfile, GitHub will build the container image from it when the Codespace is created, giving you all the tools, libraries, and configurations required to work on your project.

Even without using a dockerfile, you can install any dependent libraries onto your Codespace, but they are lost when the container gets rebuilt; there are certain approved features you can add to your devcontainer file that will be installed each time a container is launched, which is great for making sure you are always working with the latest components.
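
As a minimal sketch of what that looks like, the devcontainer.json below adds the Azure CLI as a feature (the feature ID is one example; the catalogue at https://containers.dev/features lists what is available). Because the feature is declared in the file, it is reinstalled on every rebuild, unlike anything you install ad hoc inside the running container:

{
  "name": "Example with a feature",
  // Base image for the Codespace.
  "image": "mcr.microsoft.com/devcontainers/base:bullseye",
  // Features are installed automatically every time the container is built or rebuilt.
  "features": {
    "ghcr.io/devcontainers/features/azure-cli:1": {}
  }
}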

The idea with both of these files is to keep them lean and make sure you are running only the components you need. To keep launch time and performance as quick as possible, it is possible to 'prebuild' your image if it is large or complex, but we won't be covering that here.

devcontainer.json

Let us take a look at the 'devcontainer.json' file. As Codespaces uses the devcontainer schema, all of the customisations it offers, such as:

  • entrypoint
  • onCreateCommand
  • customizations
  • features

can all be used, offering a vast range of customisation options to suit your needs.

For most purposes, you may find you can get away with just a devcontainer.json file, without having to delve into building your own dockerfile.

Let us look into the devcontainer.json file I am using for this blog article:

// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/markdown
{
  "name": "Markdown Editing",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/base:bullseye",

  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},

  // Configure tool-specific properties.
  "customizations": {
    // Configure properties specific to VS Code.
    "vscode": {
      // Add the IDs of extensions you want to be installed when the container is created.
      "extensions": [
        "yzhang.markdown-all-in-one",
        "streetsidesoftware.code-spell-checker",
        "DavidAnson.vscode-markdownlint",
        "shd101wyy.markdown-preview-enhanced",
        "bierner.github-markdown-preview"
      ]
    }
  }

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],

  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",

  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}

This came from an existing template at devcontainers/templates that already had everything I needed.

A few things stand out:

  • image: the Docker image this Codespace runs on.
  • customizations: application-specific customisations; in my example, vscode, along with the extensions that are automatically provisioned when the Codespace starts.
  • features: currently blank, but this is where we can declare tools and dependencies to be installed when the container is started.

This JSON file is a great reference point as we go on to create our own. You can write your own devcontainer.json file by hand, or create one within the devcontainer itself using the Codespaces extension, which is preinstalled in your new Codespace.

In our newly created Codespace from the Setting up Codespace step earlier, it's time to create our own devcontainer. The project we will be working on will be Terraform development, so we want to customise our own codespace for Infrastructure as Code development.

  1. Press Ctrl+Shift+P (or click View, Command Palette)
  2. Type in: Codespaces (with the codespaces commands you can rebuild your container, resize and modify your codespace)
  3. Select Add Dev Container Configuration Files
  4. Select Create a new Configuration
  5. Type in Ubuntu
  6. Select the latest version (or default)
  7. In the features list, we have the option to install third-party tools and dependencies that will be installed when we launch our Codespace; search for Terraform
  8. Select Terraform, tflint, and Terragrunt
  9. Click Ok
  10. Select Configure Options
  11. Check installTFsec and installTerraformDocs
  12. Click Ok
  13. Select the latest Terraform version
  14. Select the latest Tflint version

This will now create a devcontainer.json file, using a base Ubuntu image, with the latest versions of Terraform, tflint and Terragrunt installed!
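
Based on the selections above, the generated devcontainer.json should look roughly like the sketch below; the feature ID and option names shown are from the devcontainers Terraform feature, so your generated file may differ slightly:

{
  "name": "Ubuntu",
  // Base Ubuntu devcontainer image.
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    // Installs Terraform, tflint and Terragrunt, plus the optional extras ticked earlier.
    "ghcr.io/devcontainers/features/terraform:1": {
      "version": "latest",
      "tflint": "latest",
      "terragrunt": "latest",
      "installTFsec": true,
      "installTerraformDocs": true
    }
  }
}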

Github Codespaces - Create

Make sure you save and commit the devcontainer.json file to the repository! You have now created your first custom codespace.

You can now rebuild your Codespace so that it runs inside your new Terraform-ready container:

  1. Press Ctrl+Shift+P (or click View, Command Palette)
  2. Type in Codespaces
  3. Select Full Rebuild Container
  4. Accept the prompt that the container will be rebuilt with the devcontainer configuration.
  5. GitHub Codespaces will then pull the Ubuntu image and the Terraform feature, and start up.

Note: If the build fails, it may be the same issue I hit. At the time of writing, there appeared to be a problem with the latest version of Terragrunt, so I pinned it to the specific version 0.48.0, which fixed it. Edit the JSON file and replace 'latest' with the specific version. Feel free to review my example Codespace here: lukemurraynz/codespaces.
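
For reference, pinning the version in my case meant changing the Terragrunt option inside the features block roughly as follows (the option name comes from the devcontainers Terraform feature and may differ if the feature changes):

  "features": {
    "ghcr.io/devcontainers/features/terraform:1": {
      "version": "latest",
      "tflint": "latest",
      // Pinned because 'latest' was failing to build at the time of writing.
      "terragrunt": "0.48.0"
    }
  }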

Once loaded, I can immediately run 'terraform init':

Github Codespaces - Terraform init

Now that we have Terraform installed, along with the Azure Terraform and HashiCorp extensions, we may also want the GitLens extension to help when working with other developers, so let us add it!

  1. Navigate to the Extensions
  2. Search for GitLens
  3. Right click the extension button and select 'Add to devcontainer.json'
  4. Then commit your change. You have now added the GitLens extension to your devcontainer; it will automatically be installed on your next rebuild (as shown in the fragment below).

Github Codespaces - Install Extension
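
After step 3, the extensions array in your devcontainer.json should contain the GitLens ID alongside any extensions you already had listed, similar to this fragment (eamodio.gitlens is the GitLens extension's Marketplace ID):

  "customizations": {
    "vscode": {
      "extensions": [
        // Added automatically by 'Add to devcontainer.json'.
        "eamodio.gitlens"
      ]
    }
  }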

Now that we have created our own GitHub Codespace using devcontainers, features and extensions, the last thing to add is a few Visual Studio Code settings, such as Format on Save.

  1. Click on the Settings gear

  2. Click Settings

  3. Search for: Format

  4. Click on the gear next to: Editor: Format On Save

  5. Click Copy Setting ID

  6. Navigate to your devcontainer.json file

  7. Under the vscode customizations section, add a new item called "settings" and add:

     "settings": {
       "editor.formatOnSave": true
     },
  8. IntelliSense should help you here; add this and any other settings you want configured. You may also want to consider configuring a default formatter or linter for your project (see the sketch below).

Github Codespaces - Set Setting
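
Putting it together, the vscode customizations block in devcontainer.json might now look something like the sketch below. The language-scoped default formatter is just an illustrative extra (it assumes the HashiCorp Terraform extension, hashicorp.terraform); adjust or drop it to suit your project:

  "customizations": {
    "vscode": {
      "extensions": [
        "eamodio.gitlens"
      ],
      "settings": {
        // Format files automatically whenever they are saved.
        "editor.formatOnSave": true,
        // Optional example: a default formatter scoped to Terraform files.
        "[terraform]": {
          "editor.defaultFormatter": "hashicorp.terraform"
        }
      }
    }
  }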

As usual, make sure you commit the change to the repository before you rebuild, to confirm the settings have worked.

dockerfile

For more complex scenarios, where there may not be a feature or shell script you can run as part of the launch, you may want to consider writing your own dockerfile.

In this example, I am going to use the same scenario, but use a non-devcontainer image for Apache.

You can create a dockerfile in the same repository.

To make this work, you need an adjustment to your devcontainer.json file.

  1. Create a new file called: dockerfile - in the same location as the 'devcontainer.json' file (the name must match, including case, what you reference in the devcontainer.json in step 5, as the path is case-sensitive)

  2. In the dockerfile add the following line:

  3.  FROM httpd:latest
  4. Save

  5. In the devcontainer.json file, replace the image section with:

     "build": {
       // Path is relative to the devcontainer.json file.
       "dockerfile": "Dockerfile"
     },
  6. Now start your Codespace

  7. GitHub will now pull the image directly from Docker Hub and overlay your devcontainer configuration on top of it (a sketch of the resulting devcontainer.json is shown below).
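
As a sketch, assuming the dockerfile sits alongside the devcontainer.json as described above, the resulting devcontainer.json can be as small as this (the name value is arbitrary):

{
  "name": "Apache httpd",
  "build": {
    // Build the container image from the dockerfile in this folder.
    "dockerfile": "Dockerfile"
  }
}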

Port Forwarding

GitHub Codespaces can do port forwarding, either Private (i.e. visible only to your GitHub user) or Public (open to the internet). This is useful for development and testing.

Let us take our Apache (httpd) image supplied earlier.

In the same directory, we will create an index.html page:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>GitHub Codespace Port Forwarding Test</title>
  </head>
  <body>
    <h1>Hello, GitHub Codespaces!</h1>
    <p>This is a test webpage for port forwarding.</p>
  </body>
</html>

And adjust the dockerfile like so:

FROM httpd:latest
COPY index.html /usr/local/apache2/htdocs
EXPOSE 80

This will copy our index.html page into the Apache htdocs folder and expose port 80.

Then we go to our devcontainer.json file and add the following:

  "forwardPorts": [80],
  "postStartCommand": "httpd"

Now save the changes and launch your Codespace.

Feel free to review my example codespace here: lukemurraynz/codespaces.

Github Codespaces - Port Forwarding

Working from your own device

This is all great, but sometimes it feels more natural to work from a locally installed Visual Studio Code instance.

Using the GitHub Codespaces Visual Studio Code extension, you can connect to a Codespace (or start one) directly from your own Visual Studio Code installation.

  1. Install GitHub Codespaces extension
  2. Press Ctrl+Shift+P (or click View, Command Palette)
  3. Type in Codespaces
  4. Click Connect to a Codespace
  5. Select your codespace

Github Codespaces - Connect to Codespace

As you can see, you can now connect to one or multiple GitHub Codespaces from your own locally installed Visual Studio Code instance!

Additional Settings

There are additional settings that can be configured for GitHub Codespaces, beyond what we have covered in this article.

Hopefully this article has given you a taste of what GitHub Codespaces can do.