Reference : https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa
Every time you run Terraform, it records information about what infrastructure it created in a Terraform state file. By default, when you run Terraform in the folder /foo/bar, Terraform creates the file /foo/bar/terraform.tfstate. This file contains a custom JSON format that records a mapping from the Terraform resources in your configuration files to the representation of those resources in the real world.
Instead of using version control, the best way to manage shared storage for state files is to use Terraform’s built-in support for remote backends. A Terraform backend determines how Terraform loads and stores state. The default backend, which you’ve been using this entire time, is the local backend, which stores the state file on your local disk. Remote backends allow you to store the state file in a remote, shared store. A number of remote backends are supported, including Amazon S3, Azure Storage, Google Cloud Storage, and HashiCorp’s Terraform Cloud and Terraform Enterprise.
Although you should definitely store your Terraform code in version control, storing Terraform state in version control is a bad idea for the following reasons:
- Manual error. It’s too easy to forget to pull down the latest changes from version control before running Terraform or to push your latest changes to version control after running Terraform. It’s just a matter of time before someone on your team runs Terraform with out-of-date state files and, as a result, accidentally rolls back or duplicates previous deployments.
- Locking. Most version control systems do not provide any form of locking that would prevent two team members from running `terraform apply` on the same state file at the same time.
- Secrets. All data in Terraform state files is stored in plain text. This is a problem because certain Terraform resources need to store sensitive data. For example, if you use the `aws_db_instance` resource to create a database, Terraform will store the username and password for the database in a state file in plain text, and you shouldn't store plain text secrets in version control.
If you’re using Terraform with AWS, Amazon S3 (Simple Storage Service), which is Amazon’s managed file store, is typically your best bet as a remote backend for the following reasons:
- It’s a managed service, so you don’t need to deploy and manage extra infrastructure to use it.
- It’s designed for 99.999999999% durability and 99.99% availability, which means you don’t need to worry too much about data loss or outages.
- It supports encryption, which reduces worries about storing sensitive data in state files. You still have to be very careful who on your team can access the S3 bucket, but at least the data will be encrypted at rest (Amazon S3 supports server-side encryption using AES-256) and in transit (Terraform uses TLS when talking to Amazon S3).
- It supports locking via DynamoDB. (More on this later.)
- It supports versioning, so every revision of your state file is stored, and you can roll back to an older version if something goes wrong.
- It’s inexpensive, with most Terraform usage easily fitting into the AWS Free Tier.
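The versioning, encryption, and locking features above can each be enabled with a bit of extra Terraform code. As a sketch (assuming AWS provider v4 or later, where versioning and encryption are configured via separate resources; the resource labels and table name here are illustrative choices, not fixed requirements):

```hcl
# Enable versioning so every revision of the state file is retained
resource "aws_s3_bucket_versioning" "enabled" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Turn on server-side encryption (AES-256) by default for all objects
resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# DynamoDB table Terraform can use for state locking; the S3 backend
# requires the primary key to be named exactly LockID
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-up-and-running-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Pay-per-request billing keeps the lock table within typical Free Tier usage, since locking only performs a handful of reads and writes per `terraform apply`.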
To enable remote state storage with Amazon S3, the first step is to create an S3 bucket. Create a main.tf file in a new folder (it should be a different folder from where you store the configurations from Part 1 of this series), and at the top of the file, specify AWS as the provider:
provider "aws" {
  region = "us-east-2"
}
Next, create an S3 bucket by using the `aws_s3_bucket` resource:
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-up-and-running-state"

  # Prevent accidental deletion of this S3 bucket
  lifecycle {
    prevent_destroy = true
  }
}
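Once the bucket exists (run `terraform init` and `terraform apply` to create it), you can point Terraform at it by adding a `backend` block. A minimal sketch, assuming a DynamoDB lock table exists; the `key` path and table name below are illustrative placeholders you'd replace with your own:

```hcl
terraform {
  backend "s3" {
    # Replace with the name of your S3 bucket and a path for the state file
    bucket = "terraform-up-and-running-state"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-2"

    # Replace with the name of your DynamoDB lock table (if you created one)
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}
```

After adding this block, run `terraform init` again; Terraform will detect the backend change and offer to copy your existing local state to the S3 bucket.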