
What is the Cloud Resume Challenge?

The Cloud Resume Challenge was originally a blog post/book authored by Forrest Brazeal, which can be found here. The point of the project is to get hands-on experience with the cloud platform of your choice by building a fully functional resume website, backed by an API that tracks the number of visitors to the site.

Personally, my initial exposure was through Gwyneth Peña-Siguenza's A Cloud Guru (ACG) video on YouTube. I discovered Gwyneth's personal YouTube channel during my AZ-900 certification studies, and I highly recommend it to anyone interested in breaking into cloud.

As with any difficult project, it's good to break the Cloud Resume Challenge into smaller chunks that we can tackle individually, so we don't get intimidated by the sheer enormity of the whole thing.

Conveniently, Forrest Brazeal does this for us. I will use his outline to help detail my own journey through the Cloud Resume Challenge.

TL;DR: here's a diagram.

Part 1 - Earn a cloud certification

I started this project shortly after I earned my AZ-900 Azure Fundamentals certification in November of 2022. At the time of publishing this blog, I also hold the AZ-104 (Administrator Associate) and AZ-305 (Solutions Architect) certifications.

Part 2 - HTML

I actually completed part of a Codecademy front-end development course long before I committed to pursuing IT and studying CompTIA A+ materials, but I wouldn't say I'm anything more than a dilettante in this area. At first, I made a very bare-bones site that looked like a paper resume, but ultimately I ended up just using the template provided in ACG's video.

My one innovation here is that I added verification links for my certifications.

Part 3 - CSS

As with the HTML, I mostly just relied on ACG's template.

The one thing I spent a good amount of time on was trying to get my LinkedIn picture to display correctly. When I first added my image, it looked very silly and stretched out. I temporarily remediated this by adding the CSS property "object-fit: cover". However, after deploying the website, I noticed that this produced pixelated artifacts on the edges of my image in Chromium-based browsers.

I ended up following this article to figure out how to define the CSS properties so that I could resize my image while maintaining its native aspect ratio.

Part 4 - Static Website

Originally, I followed ACG's guidance and created a static website using an Azure storage account. However, over time I became uncomfortable with the inability to securely store and access secrets, namely the Azure Function URL, which contains an authentication key.

So, eventually, I migrated my app to a Python Flask app hosted in App Service. In this configuration, I am able to retrieve my Function App URL from an app setting that in turn retrieves the URL from Key Vault. App Service, like every other service in my solution that needs to access Key Vault secrets, authenticates via a system-assigned managed identity. You can think of managed identities in Azure as managed service accounts for which you never have to handle credentials (i.e. usernames and passwords). For most Azure resources, you can enable a system-assigned managed identity under Settings > Identity. In Key Vault, you can then either create an access policy assigned to the managed identity or assign it an RBAC role, depending on which access control model your vault uses.
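
To make that concrete, here's a minimal sketch of the flow (the setting and secret names are placeholders, not my exact configuration). The app setting's value is a Key Vault reference, which the platform resolves through the managed identity before the code ever runs, so Python just sees an ordinary environment variable:

```python
# Minimal sketch -- the setting name FUNCTION_APP_URL is a placeholder.
# In App Service, its value would be a Key Vault reference such as:
#   @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/func-url/)
# The platform resolves that reference via the system-assigned managed identity,
# so by the time this code runs it is just a regular environment variable.
import os

FUNCTION_APP_URL = os.environ["FUNCTION_APP_URL"]
```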

Part 5 - HTTPS

TLS 1.2 is configured in my Bicep (Infrastructure-as-Code) template for Azure Front Door, a service that provides reduced latency and encryption for apps through Azure's cloud Content Delivery Network. In order for Front Door to create a managed TLS certificate for a custom domain, it needs to perform some validation. After creating my Front Door instance, I have to go to Front Door > Custom domains in the Azure portal, click the "Pending" validation status, copy the TXT record value, and create a new TXT record with my DNS provider. One small gotcha here: if you copy the DNS record name provided in the Azure portal, make sure you lop off the domain portion, since your DNS provider probably appends it automatically to the "host" or subdomain you define (e.g. "www").

Validating my custom domain, when I did things correctly, never took more than about 10-12 minutes. You can troubleshoot this by checking whether public DNS resolvers can see your TXT record. I just went to https://dnschecker.org/, searched for the name of my DNS record ("_dnsauth.subdomain.domain"), and checked whether any results came back. This brings me to the next part.
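
If you'd rather script that check, here's a quick sketch using the third-party dnspython package (assuming the "_dnsauth" record name from above; substitute your own subdomain and domain):

```python
# Quick propagation check using dnspython (pip install dnspython).
import dns.resolver

RECORD = "_dnsauth.www.seanchapman.xyz"  # substitute your own record name

try:
    for rdata in dns.resolver.resolve(RECORD, "TXT"):
        # A TXT record can contain several strings; join them for display.
        print(RECORD, "->", b"".join(rdata.strings).decode())
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(f"{RECORD} has not propagated yet.")
```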

Part 6 - DNS

I used Namecheap as my DNS provider. Registering an account and purchasing a domain name was easy. If you pick a less popular TLD (top-level domain), you can get quite a low price (I think I pay $15 USD a year).

After deploying all of my infrastructure and creating the TXT record to validate my custom domain, I created a CNAME record to associate the subdomain "www" with the URL of my Front Door instance, since creating a CNAME record at the root of the domain, i.e. "seanchapman.xyz", is not allowed. I then configured redirection so that visiting the root domain "seanchapman.xyz" automatically takes you to "www.seanchapman.xyz".

Part 7 - JavaScript

I wrote a small script that registers an event listener for the "DOMContentLoaded" event. In other words, I created some code that runs any time the page is loaded, fetches the updated visitor counter, and inserts it into the web page. After migrating to App Service, instead of having the JavaScript communicate directly with my API, I had it call an endpoint on my site that runs a specific function defined in my Python Flask app. The Python code then handles calling the API, retrieves the visitor counter, and returns that value to the JavaScript.
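
As a rough sketch of that proxy pattern (the route, setting name, and response shape here are illustrative, not my exact code):

```python
# Rough sketch of the Flask proxy endpoint; names are illustrative.
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Resolved from the Key Vault-backed app setting described in Part 4.
FUNCTION_APP_URL = os.environ["FUNCTION_APP_URL"]

@app.route("/api/counter")
def counter():
    # The server calls the Function URL (which embeds the auth key), so the
    # key is never exposed to the browser.
    response = requests.get(FUNCTION_APP_URL, timeout=10)
    response.raise_for_status()
    return jsonify(count=int(response.text))
```

The JavaScript then fetches "/api/counter" instead of the Function URL itself.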

Part 8 - Database

I used Cosmos DB because of its fast response times and its support for non-relational data. After all, the visitor counter only requires a simple key-value pair. I configured my Azure Function/API to initialize this value the first time it gets executed.

Part 9 - API

I used an Azure Function app coded in Python using the Python Programming Model V2. There was a bit of a learning curve to figuring out the input and output bindings that handle retrieving and updating the visitor counter in Cosmos DB, but once I got past that, the rest of the code was pretty simple. In short, bindings in Azure Functions handle connections to other Azure services without you needing to write a ton of connection code yourself.
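
For a sense of what that looks like, here's a compressed sketch in the V2 programming model (the database, container, and connection-setting names are illustrative, and the exact decorator parameters have shifted between binding extension versions, so check the current docs):

```python
# function_app.py -- compressed sketch of the V2 programming model.
# Database, container, and connection-setting names are illustrative.
import azure.functions as func

app = func.FunctionApp()

@app.route(route="counter", auth_level=func.AuthLevel.FUNCTION)
@app.cosmos_db_input(arg_name="docs", database_name="resume-db",
                     container_name="counter", id="1", partition_key="1",
                     connection="CosmosDbConnectionString")
@app.cosmos_db_output(arg_name="updated", database_name="resume-db",
                      container_name="counter",
                      connection="CosmosDbConnectionString")
def counter(req: func.HttpRequest, docs: func.DocumentList,
            updated: func.Out[func.Document]) -> func.HttpResponse:
    # First execution: the counter document doesn't exist yet, so create it.
    doc = docs[0] if docs else func.Document.from_dict({"id": "1", "count": 0})
    doc["count"] += 1
    updated.set(doc)  # The output binding persists the document to Cosmos DB.
    return func.HttpResponse(str(doc["count"]))
```

Note how neither binding required writing any Cosmos DB connection code.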

Additionally, since I was concerned about someone running a script to endlessly call my Function App and jack up my Azure bill (though that would be difficult, given you're only charged $0.20 per 1,000,000 executions), I decided to put an API Management service in between my App Service app and my Function App. API Management facilitates managing APIs and allows you to intercept calls to your API, re-route them to different back ends, apply policies to them, and so on. In my case, I created a policy that imposes rate limiting on the API, which caps the number of back-end requests that can be made within a defined time period.
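
APIM's built-in rate-limit policy handles all of this for you; just to illustrate the concept, here's a toy fixed-window limiter in Python (for explanation only, not how APIM works internally):

```python
# Toy fixed-window rate limiter, purely to illustrate the concept.
import time

class RateLimiter:
    def __init__(self, calls: int, renewal_period: float):
        self.calls = calls                     # max requests per window
        self.renewal_period = renewal_period   # window length in seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.renewal_period:
            self.window_start, self.count = now, 0  # start a new window
        if self.count < self.calls:
            self.count += 1
            return True
        return False  # over the limit; an API would respond with HTTP 429
```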

Part 10 - Python...or C#?

C# and .NET are unsurprisingly first-class citizens when it comes to Azure (hint: they're made by Microsoft). I coded my Azure Function and web app in Python because I was much more familiar with it, but given that I'm using the cheapest and least performant SKUs of these services, I would probably see a noticeable improvement in performance if I re-coded them in C#. I've started learning C# and will refactor my code eventually. ACG's repository has C# code, but I didn't want to mindlessly copy it without understanding how it worked. Hence, I wrote my own code in Python.

Part 11 - Tests

I wanted to follow good development practices and also prevent glaring errors whenever I made changes to my site, so I created unit tests for both the Azure Function and the App Service app. Unit tests are pieces of code that are run to check that the main code meets certain expectations: returns expected values, doesn't error out, etc.

This is where using Azure Function input/output bindings came into play and made developing unit tests much, much easier. My original code for the Azure Function manually created a Cosmos DB client, and while that was initially easier than figuring out the input binding, it made it more difficult to pass in a mock Cosmos DB object.

I have more notes in my original repository on this (see "Old repository" link below), but the biggest key to figuring out this portion is to use input/output bindings, and then view the documentation for those bindings to see the classes used by the binding parameters. In order to use the bindings and to mock the objects they create in your unit tests, you'll need to put the corresponding modules from the Python Azure SDK in your requirements.txt.
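
As a minimal sketch of what this buys you, assuming a function shaped like the one sketched in Part 9 (names are illustrative):

```python
# test_counter.py -- minimal sketch; mocks stand in for the binding objects.
from unittest.mock import MagicMock

import azure.functions as func

from function_app import counter

def test_counter_increments_existing_document():
    # In the V2 model the decorators wrap the function, so pull out the
    # underlying user function before calling it directly.
    counter_fn = counter.build().get_user_function()

    req = func.HttpRequest(method="GET", url="/api/counter", body=b"")
    docs = func.DocumentList([func.Document.from_dict({"id": "1", "count": 41})])
    out = MagicMock()  # stands in for the func.Out[func.Document] binding

    resp = counter_fn(req, docs, out)

    assert resp.get_body() == b"42"
    out.set.assert_called_once()
```

No Cosmos DB account, emulator, or connection string required.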

I also created unit tests for the front-end Flask app, including one that ensures that sending an HTTP GET (equivalent to a user navigating to my site in the browser) successfully leads to my HTML page being rendered. Thankfully, Flask includes a "test_client" helper that you can wrap in a fixture for all of your tests, so you don't have to spin up a web server every time you want to run them.
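
A minimal pytest sketch of that setup (assuming the Flask app object is named "app" in app.py):

```python
# test_app.py -- minimal pytest sketch; assumes the Flask object is app.app.
import pytest

from app import app

@pytest.fixture
def client():
    # Flask's built-in test client simulates HTTP requests in-process,
    # so no web server needs to be running.
    return app.test_client()

def test_homepage_renders(client):
    resp = client.get("/")  # equivalent to a user browsing to the site
    assert resp.status_code == 200
    assert b"<html" in resp.data  # the HTML page actually rendered
```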

Part 12 - Infrastructure as Code (IaC)

I ended up deciding to use Bicep to automate the setup of all the necessary resources for this project. Its syntax seemed friendlier than ARM templates, which I had used for much simpler deployments previously, and, unlike Terraform, it does not require you to maintain a state file.

Part 13 - Source Control

I used Git/GitHub to store my code.

Part 14 - CI/CD (Back end)

CI/CD (Continuous Integration/Continuous Deployment) pipelines consist of .yml files that contain a list of steps to be executed whenever the pipeline is triggered, usually by a "git push" to a specific repository or folder within a repository. Developers typically use them to automatically run unit tests against their code to check that new changes do not break functionality.

For my use case, I used the pipelines both to test my code and to automate the deployment of infrastructure (via .bicep files) that the code needed to run on.

In the original iteration of my project, I used GitHub Actions. Later, I switched to Azure DevOps pipelines in order to learn that particular service and also to keep my data contained within Azure, which, at least on the face of it, seems more secure than storing secrets in an external third-party service.

At a high level, the basic structure of both my back-end and front-end pipelines is: deploy supporting resources > run unit tests to ensure the code is (relatively) error-free > push code to the relevant resource > deploy any further resources that the code needs to fully function.

In order to deploy a resource > store some secret property from that resource in Key Vault > use that secret in a subsequent deployment, I had to rely on Az PowerShell commands to deploy one or a few templates at a time (using "New-AzResourceGroupDeployment"), so that the secrets from earlier deployments would be available to later ones.

To repeatedly deploy resources without worrying about names conflicting with previous deployments, and to easily refer back to resources created in previous steps, my .bicep templates use a GUID/"seed" value that I randomly generate with the aid of a PowerShell script. I configure a variable in my Azure DevOps pipeline containing this GUID, which is then passed to all my .bicep files.

Some resources were trickier than others to create through IaC. The difficulty with automatically deploying my API Management service is that I needed to add my Function API key as a "named value" for use in the API Management policy that does the rate limiting (defined in my "policy.xml"). I couldn't find a native way to retrieve Azure Function keys in Bicep, so I had to add a step to my .yml pipeline that runs an Azure CLI script to grab the primary key and store it in Key Vault.

After that, I found that I couldn't simultaneously deploy both the API Management policy and the named value. So, I resorted to using Az PowerShell to check for the named value, deploy a Bicep file if it didn't exist, and then do the same for the policy.

Part 15 - CI/CD (Front end)

The pipeline for the front end is much simpler. We only need to deploy an App Service and Front Door.

Microsoft had a very good template for creating a Front Door instance with a custom domain. After some tweaking I incorporated this into my App Service deployment by turning Microsoft's template into a module with the "module" keyword in Bicep.

Part 16 - Blog post (phew)

I wrote this blog and published it to LinkedIn.

It's been a very fun journey that I don't consider over yet. Any one of the above topics is vast and could fill an entire blog series.

I've used this project as a touchstone that I keep coming back to in order to learn new cloud/development topics. I will continue to do so and share what I've learned in future posts!

Feel free to comment below or message me on LinkedIn if you have any questions, comments, or would like to discuss the Cloud Resume project in more depth.

Links:
Current GitHub repository
Old repository
Live website
LinkedIn
