Local to Azure: Build Containers and Deploy with IaC on Azure Container App Jobs

Introduction
In this post we will build a Docker image from a Dockerfile, run the image locally in a container that executes a bash script, write Terraform configuration to create the cloud infrastructure required to run the container in Azure, and finally test the image inside an Azure Container App Job.
The goal is to get you hands-on with the services, so I will supply a repo with finished code that you can clone to follow along and test the commands yourself. You will need:
- An Azure Subscription
- Docker installed on your system
- Terraform installed on your system
- The Azure CLI (az) installed on your system
- Git installed on your system as well as a GitHub account
- A Docker Hub account (optional)
The scenario
We have a resource group with very sensitive resources that are a central part of our environment's monitoring strategy. We want to ensure that any updates to these resources and alerts happen with great care. If someone needs to make a change, they first have to unlock the resource group by removing the resource lock we have applied.
They should make their changes and then re-apply the lock. However, we have noticed that administrators tend to forget to re-lock the RG. To handle this we want a scheduled task that runs every night and applies the lock, ensuring the resources are locked even if the admins forget:
az lock create --name rgLock --resource-group rg-alz-monitor --lock-type ReadOnly
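For reference, an administrator who needs to make changes would first remove the lock with the matching delete command:
az lock delete --name rgLock --resource-group rg-alz-monitor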
Deploy the solution
We will build several things in the solution. You can find everything in the GitHub repo here.
First off we will write a Dockerfile that we will use to build our image.
FROM ubuntu:22.04

WORKDIR /app

# Pin the Azure CLI version so builds are reproducible
ARG AZCLI_VERSION=2.55.0

# Install prerequisites for adding the Microsoft package repository
RUN apt-get update \
    && apt-get install -y gpg curl lsb-release \
    && rm -rf /var/lib/apt/lists/*

# Add the Microsoft signing key
RUN curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
    gpg --dearmor | \
    tee /etc/apt/trusted.gpg.d/microsoft.gpg > /dev/null

# Add the Azure CLI package repository and install the pinned version
RUN AZ_REPO=$(lsb_release -cs) \
    && echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
    tee /etc/apt/sources.list.d/azure-cli.list \
    && apt-get update \
    && apt-get install -y azure-cli=$AZCLI_VERSION-1~$AZ_REPO

# Copy in the script and run it when the container starts
COPY scripts/azscript.sh /app/azscript.sh
ENTRYPOINT [ "/bin/bash", "/app/azscript.sh" ]
We can build the image by running the following commands.
git clone git@github.com:carlzxc71/local-to-azure-container-apps.git
cd local-to-azure-container-apps
docker image build -t applylockscript .
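Since the Dockerfile declares AZCLI_VERSION as a build argument, you can also pin a different Azure CLI version at build time without editing the file, for example:
docker image build --build-arg AZCLI_VERSION=2.55.0 -t applylockscript .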
Wait for the image to build. Normally I would publish this image to Docker Hub or another registry so you could consume it directly from there. However, the scope of this guide is to have you use all of the tooling and build as much as possible yourself from scratch.
Now once the image has been built, we will authenticate to Azure to test the container. Run az login
and az account set -s <subscription id>
Next we will run docker container run --rm -v "${HOME}/.azure":/root/.azure applylockscript
--rm
= removes the container once it has completed its run
-v
= creates a bind mount that maps our .azure folder into the container, allowing us to be authenticated to Azure from within the container
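If you want to sanity-check the authentication before running the script, you can override the entrypoint and call az directly. This quick check is my own suggestion, not part of the repo:
docker container run --rm -v "${HOME}/.azure":/root/.azure --entrypoint az applylockscript account show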
If everything is working properly you should see the following output: your account details, followed by the lock being applied.
{
  "environmentName": "AzureCloud",
  "homeTenantId": "***a4ac",
  "id": "****914a6",
  "isDefault": true,
  "managedByTenants": [],
  "name": "<Name of subscription>",
  "state": "Enabled",
  "tenantId": "*******a4ac",
  "user": {
    "name": "<user>",
    "type": "user"
  }
}
{
  "id": "/subscriptions/****4a6/resourceGroups/rg-alz-monitor/providers/Microsoft.Authorization/locks/rgLock",
  "level": "ReadOnly",
  "name": "rgLock",
  "notes": null,
  "owners": null,
  "resourceGroup": "rg-alz-monitor",
  "type": "Microsoft.Authorization/locks"
}
You should now see the resource lock on the resource group in the Azure Portal.
How does it work?
There are two key lines in the Dockerfile that copy the bash script from the scripts folder into the image and execute it when the container starts.
The following lines copy the script and set it as the image's entrypoint:
COPY scripts/azscript.sh /app/azscript.sh
ENTRYPOINT [ "/bin/bash", "/app/azscript.sh" ]
ENTRYPOINT specifies the default executable that runs when the container starts.
The script is simply the command we talked about previously for applying the resource lock.
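The repo contains the actual file, but a minimal sketch of what scripts/azscript.sh boils down to looks like this (the shebang and set -e are my assumptions for illustration):
#!/bin/bash
set -e

# Apply the ReadOnly lock to the monitoring resource group
az lock create --name rgLock --resource-group rg-alz-monitor --lock-type ReadOnly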
Local to cloud
So far we have only configured and run this container locally. We will not use our own computer to run it on a schedule, though; instead we will use Azure Container App Jobs to run it in the cloud. We will achieve this with Terraform.
First you need to create a tfvars file. I will create a folder and the file like this:
mkdir -p terraform/variables
touch terraform/variables/prod.tfvars
Then fill out the file with the following content, and feel free to edit it to fit your environment:
location = "Sweden Central"
container_app_env_name = "cae-prod-sc-lock"
rg_name = "rg-prod-sc-lock"
log_workspace_name = "log-prod-sc-lock"
container_app_job_name = "job-prod-sc-lock"
docker_image = "altinlab/applylockscript:2024-02-28"
Note: I am pointing to my Docker image on Docker Hub to run this inside the Container App Job. You can use it as well by simply keeping the docker_image value above. If you wish, you can tag and push the image to your own repository with these commands:
docker image tag applylockscript:<tag> <repo>/applylockscript:<tag>
docker push <repo>/applylockscript:<tag>
Note: if you do use your own repo, make sure you update the docker_image variable in your prod.tfvars file.
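For example, if your Docker Hub username were myuser (a hypothetical name), retagging and pushing the locally built image would look like:
docker image tag applylockscript:latest myuser/applylockscript:2024-02-28
docker push myuser/applylockscript:2024-02-28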
To deploy the required resources, run the following from the terraform folder:
terraform init
terraform plan -var-file variables/prod.tfvars
terraform apply -var-file variables/prod.tfvars
Resources created
- Container App Environment
- Container App Job (AzAPI resource)
- Log Analytics Workspace
- Resource Group
- Owner role assignment for the Container App Job's managed identity
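The last item is worth a note: in the cloud the container cannot borrow our local .azure folder, which is why the job gets a managed identity with the role assignment above. My assumption of how the script signs in when running inside the job is the managed identity flow:
# Sign in using the Container App Job's managed identity (assumed approach)
az login --identity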
If the apply is successful you will see confirmation in the Terraform output:

Once it has finished deploying, head to the Azure Portal, search for Container App Jobs in the search bar, and find yours. Follow the steps below; CLI equivalents for these steps are shown after the list.
- Select Run Now

- Head to Execution History and, once it has completed, select Console Logs to see the output from the job, which will be similar to the output from when we ran the container locally

- Select Run for the automatically populated query. Note that it takes some time for Log Analytics to ingest logs, so you may need to run the query a few times before any logs are returned

- Head to the resource group, select Locks, and verify that it is indeed locked

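If you prefer the terminal over the portal, a rough CLI equivalent of those verification steps might look like the following sketch. The az containerapp commands require the containerapp extension, the log query requires the log-analytics extension, the ContainerAppConsoleLogs_CL table name is the default for Container Apps console logs, and the resource names are the ones from prod.tfvars:
# Trigger a run of the job manually
az containerapp job start --name job-prod-sc-lock --resource-group rg-prod-sc-lock

# Check the execution history of the job
az containerapp job execution list --name job-prod-sc-lock --resource-group rg-prod-sc-lock --output table

# Query the console logs (replace <workspace customer id> with your workspace's customer ID)
az monitor log-analytics query --workspace <workspace customer id> --analytics-query "ContainerAppConsoleLogs_CL | take 20"

# Verify the lock is back on the monitored resource group
az lock list --resource-group rg-alz-monitor --output table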
Summary
In this post we have experimented with containers, both for local development and for running them in the cloud, made use of Infrastructure as Code to configure all of the required supporting resources, and tested the solution once deployed. We also got to work with both Terraform and Docker, which are both very exciting technologies.
In order to clean up, you can run the following command to tear down everything we have deployed.
terraform destroy -var-file variables/prod.tfvars -auto-approve
It usually takes 5-7 minutes for the Container App Environment resource to be destroyed.
If you want to experiment further, you can clone my repository and make changes to the files within the project.
References
Read more about Container App Jobs here