Introduction
Hey! I'm Bobby, a DevOps engineer and the author of the Introduction to Terraform ebook.
In this article, I'll share five Terraform best practices that I wish I knew when I first started using Terraform. These tips will help you write cleaner, more maintainable infrastructure as code and avoid common mistakes that can lead to deployment headaches.
Prerequisites
To follow along, you should have:
- Basic knowledge of Terraform
- Terraform installed on your system
If you're new to Terraform, I highly recommend checking out my Introduction to Terraform ebook, where I cover everything from the basics to managing infrastructure at scale.
Step 1 — Always Use Remote State Storage
By default, Terraform stores its state file (`terraform.tfstate`) locally. However, in real-world projects, this is not ideal because:
- Local state files can be lost, leading to data inconsistencies.
- Teams working together need a shared state to prevent conflicts.
- Sensitive data may be exposed if the state file is not secured properly.
A better approach is to store your state remotely using Terraform Cloud, S3, or HashiCorp Consul.
Example: Storing Terraform State in AWS S3
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
```
This setup:
- Stores the state file in AWS S3.
- Encrypts the state file for security.
- Uses DynamoDB for state locking to prevent simultaneous updates.
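Note that the S3 bucket and DynamoDB table must already exist before you run `terraform init`. If you want to create the lock table with Terraform itself, a minimal sketch might look like this (the S3 backend requires the partition key to be a string attribute named exactly `LockID`):

```hcl
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-lock" # must match dynamodb_table in the backend block
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the S3 backend expects exactly this key name

  attribute {
    name = "LockID"
    type = "S"
  }
}
```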
💡 Tip: If you're using Terraform Cloud, it provides a built-in remote state storage and locking mechanism.
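For reference, a Terraform Cloud backend configuration is just as short. A minimal sketch (the organization and workspace names below are placeholders, not real values):

```hcl
terraform {
  cloud {
    organization = "my-org" # placeholder organization name

    workspaces {
      name = "my-app-production" # placeholder workspace name
    }
  }
}
```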
Step 2 — Use Modules to Keep Code DRY
One of the biggest Terraform mistakes is copy-pasting 🍝 infrastructure code across different projects. Instead, you should use Terraform modules to keep your configurations reusable and maintainable.
For example, instead of writing the same code for every EC2 instance, you can create a reusable module:
```
modules/
  ec2-instance/
    main.tf
    variables.tf
    outputs.tf
```
Example: A Simple EC2 Module (`main.tf`)
resource "aws_instance" "web" {
ami = var.ami_id
instance_type = var.instance_type
tags = {
Name = var.name
}
}
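The module's `variables.tf` isn't shown here, but it needs to declare the three variables the resource uses. A minimal sketch:

```hcl
variable "ami_id" {
  type        = string
  description = "AMI to launch the instance from"
}

variable "instance_type" {
  type        = string
  description = "EC2 instance type"
  default     = "t2.micro"
}

variable "name" {
  type        = string
  description = "Value for the instance's Name tag"
}
```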
Then, in your main Terraform configuration, you can call the module:
module "web_server" {
source = "./modules/ec2-instance"
ami_id = "ami-12345678"
instance_type = "t2.micro"
name = "web-server"
}
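If you want to expose values from the module, add them to its `outputs.tf`. For example, a sketch that surfaces the instance's public IP:

```hcl
# modules/ec2-instance/outputs.tf
output "public_ip" {
  value = aws_instance.web.public_ip
}
```

The calling configuration can then reference it as `module.web_server.public_ip`.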
Benefits of using modules:
- Define infrastructure once and reuse it.
- Modify a single module rather than updating multiple files.
- Keeps your main configuration files clean.
Step 3 — Implement Terraform Workspaces for Multi-Environment Deployments
When managing multiple environments (e.g., development, staging, production), many people initially copy-paste Terraform files. This leads to configuration drift and inconsistency.
A better approach is to use Terraform workspaces, which allow you to manage multiple environments with the same Terraform code.
Example: Switching Workspaces
```bash
terraform workspace new staging
terraform workspace list
terraform workspace select staging
```
Inside your Terraform configuration, use `terraform.workspace`:
resource "aws_s3_bucket" "example" {
bucket = "my-app-${terraform.workspace}"
}
This automatically creates different S3 buckets based on the workspace (`my-app-dev`, `my-app-staging`, etc.).
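Workspaces can also drive other per-environment differences. One common pattern is a map keyed by workspace name (a sketch; the instance sizes here are placeholders):

```hcl
locals {
  # Placeholder per-environment instance sizes, keyed by workspace name
  instance_types = {
    default = "t2.micro"
    staging = "t2.small"
    prod    = "t2.large"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = lookup(local.instance_types, terraform.workspace, "t2.micro")
}
```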
💡 Tip: If you need more complex environment configurations, consider using separate state files rather than workspaces.
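One common way to do that is a directory per environment, each with its own backend configuration and state file. A sketch of the layout:

```
environments/
  dev/
    main.tf
    backend.tf
  production/
    main.tf
    backend.tf
```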
Step 4 — Lock Provider Versions to Avoid Unexpected Breakages
Terraform providers are updated frequently, and sometimes these updates introduce breaking changes. If you don't lock provider versions, your infrastructure might suddenly stop working when running `terraform apply`.
Example: Locking Provider Versions
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # locks to major version 5
    }
  }

  required_version = ">= 1.5.0"
}
```
By doing this:
- You prevent unexpected changes when running Terraform commands.
- Your infrastructure remains stable across deployments.
💡 Tip: Always test updates in a separate branch before upgrading provider versions in production.
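Terraform also records the exact provider versions it selects in a `.terraform.lock.hcl` file. Commit it to version control so every machine and CI run uses the same provider builds, and make upgrades an explicit step:

```bash
# Upgrade providers to the newest versions allowed by the version constraints
terraform init -upgrade

# Commit the updated lock file alongside the config change
git add .terraform.lock.hcl
```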
Step 5 — Use Terraform Validate and Format Before Applying Changes
Before running `terraform apply`, it's good practice to validate and format your Terraform configuration to catch issues early.
Example: Checking Syntax with `terraform fmt` and `terraform validate`
```bash
terraform fmt       # Automatically formats code
terraform validate  # Checks for syntax errors
```
You can also automate this in a CI/CD pipeline to ensure code consistency:
```yaml
name: Terraform CI

on: [push]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - name: Initialize Terraform
        run: terraform init -backend=false # validate requires an initialized directory

      - name: Validate Terraform Code
        run: terraform validate
```
Using these commands ensures:
- Your Terraform code follows best practices.
- You catch issues before applying changes.
- Your team follows a consistent formatting style.
Bonus Tip: Keep Secrets Secure with Environment Variables
Terraform configurations often require API keys, passwords, and database credentials. Hardcoding secrets in your `.tf` files is a major security risk.
Instead, use environment variables or a secrets manager.
Example: Using Environment Variables
```bash
export TF_VAR_db_password="supersecretpassword"
terraform apply
```
In Terraform:
variable "db_password" {}
resource "aws_db_instance" "example" {
password = var.db_password
}
Better yet, use HashiCorp Vault or another managed secrets manager to handle secrets securely.
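As an illustration, here's roughly what reading the password from Vault with the official provider looks like (a sketch; the Vault address, secret path, and key are placeholders):

```hcl
provider "vault" {
  address = "https://vault.example.com:8200" # placeholder Vault address
}

# Read the secret stored at the (placeholder) path secret/db
data "vault_generic_secret" "db" {
  path = "secret/db"
}

resource "aws_db_instance" "example" {
  password = data.vault_generic_secret.db.data["password"]
}
```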
Conclusion
These five best practices (using remote state, leveraging modules, implementing workspaces, locking provider versions, and validating Terraform code) will help you manage your infrastructure more efficiently.
If you're looking to dive deeper into Terraform, check out my Introduction to Terraform ebook.
And if you're setting up your Terraform infrastructure on DigitalOcean, you can get $200 in free credits to get started!
Happy Terraforming! 🚀