
Gruntwork Newsletter, June 2020


Amanda Ohmer

JUL 3, 2020 | 15 min read
Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.

Hello Grunts,

In the last month, we updated the Reference Architecture to take advantage of all the new EKS/Kubernetes features (e.g., support for Fargate, managed node groups, EFS, cluster security groups, etc.) and published an upgrade guide, created reusable masks to help fight COVID-19 (all proceeds go to charity), added support for cross-region replication to our Aurora and RDS modules, added EFS support to our ECS modules, and made many other fixes and improvements. In other news, Terraform 0.13.0 Beta is out, HashiCorp has announced their Cloud Platform, and there’s a new and exciting Kubernetes provider for Terraform in the works.

As always, if you have any questions or need help, email us!

Motivation: Over the past 6 months, we’ve added many new features to our EKS solution, including support for Fargate, managed node groups, EFS, cluster security groups, etc. Many of these features have been added to our module for EKS (terraform-aws-eks), but we had not had the chance to update the Reference Architecture to take advantage of these developments. We also did not have detailed instructions to help Reference Architecture customers update their clusters to the latest versions.

Solution: We’ve updated the Acme Reference Architecture examples to the latest version of Kubernetes, EKS modules, and Helm v3! We also published a detailed upgrade guide that walks you through the process of updating your existing Reference Architecture to the latest versions. Check out our guide and let us know what you think!

Motivation: COVID-19 continues to be a problem around the world, and we wanted to do something to help.

Solution: We’ve created a reusable face mask that you can buy to protect yourself, and, as all the proceeds go to Crisis Aid International’s COVID-19 Relief Fund, your purchase also helps to protect the world!

What to do about it: Check out Not all heroes wear capes, some wear masks for the full details and get yourself a mask in the Gruntwork store!

Motivation: Our rds and aurora modules have supported replication across Availability Zones since day one, but customers with stricter requirements around availability, disaster recovery, and global access wanted a way to create replicas in totally separate regions.

Solution: As of module-data-storage v0.12.19, both the rds and aurora modules now support cross-region replication! For example, here’s how you can set up a primary MySQL instance in us-east-1 and a replica for it in us-west-2:
```hcl
# Configure a provider for the primary region
provider "aws" {
  region = "us-east-1"
  alias  = "primary"
}

# Configure a provider for the replica region
provider "aws" {
  region = "us-west-2"
  alias  = "replica"
}

# Deploy the MySQL primary
module "mysql_primary" {
  source = ""

  # Run the primary in the primary region
  providers = {
    aws = aws.primary
  }

  name           = "mysql-primary"
  engine         = "mysql"
  engine_version = "5.6.37"
  port           = 3306
  instance_type  = "db.t3.micro"

  # Must be set to 1 or greater to support replicas
  backup_retention_period = 1
}

# Deploy the MySQL replica
module "mysql_replica" {
  source = ""

  # Run the replica in the replica region
  providers = {
    aws = aws.replica
  }

  # To indicate this is a replica, set the replicate_source_db param
  replicate_source_db = module.mysql_primary.primary_arn

  name           = "mysql-replica"
  engine         = "mysql"
  engine_version = "5.6.37"
  port           = 3306
  instance_type  = "db.t3.micro"
}
```
What to do about it: Check out the rds-mysql-with-cross-region-replica and aurora-with-cross-region-replica examples for the full sample code. Special thanks to Jesse Bye for contributing the Aurora cross-region replication feature!

Motivation: Last month, we released an efs module that makes it easy to create an NFS file system you can share across your AWS services. Now we needed a way to mount these EFS volumes in those services.

Solution: In v0.12.20 of module-data-storage, we added support for creating EFS access points and the corresponding IAM policies for them, and in v0.20.3 of module-ecs, we’ve updated the ecs-service module with support for mounting the access points from those EFS Volumes in your ECS Tasks, including Fargate Tasks! All you need to do is specify the volumes you want to use via the efs_volumes input variable:
```hcl
module "fargate_service" {
  source = ""

  # Specify the EFS Volumes to mount
  efs_volumes = {
    example = {
      file_system_id          = module.efs.id # value elided in the original; an efs module output is assumed here
      root_directory          = null
      container_path          = "/example"
      transit_encryption      = "ENABLED"
      transit_encryption_port = null
      access_point_id         = module.efs.access_point_ids.example
      iam                     = "ENABLED"
    }
  }

  # ... (other params omitted) ...
}
```
Where module.efs is an EFS volume with access points created using the efs module. Check out the docker-fargate-service-with-efs-volume example for fully working sample code. Special thanks to Milosz Pogoda for the contribution!

What to do about it: Give EFS Volumes in your ECS Tasks a shot and let us know how they work out for you!

  • v0.1.20: You can now pass in a config file using the --config option to express more granular and complex filtering for what gets nuked. We currently support nuking S3 buckets by name using regular expressions. See the README to learn more about how it works. Pull requests very welcome!

  • v0.13.7: Fix a typo in the argument parsing of the run-vault script. The script now correctly looks for a --agent-ca-cert-file argument instead of --agent-ca-cert_file.

  • v0.1.1: You can now configure environment variables from arbitrary sources using the additionalContainerEnv input value.

  • v0.23.19: This release introduces a new config attribute terragrunt_version_constraint, which can be used to specify terragrunt versions that the config supports.
  • v0.23.20: The terragrunt and terraform version checks are now done without parsing the entire configuration.
  • v0.23.21: Added a new get_platform() function you can use in your Terragrunt config to get the name of the current operating system (e.g., darwin, freebsd, linux, or windows).
  • v0.23.22: Fixes a bug in the GCP config comparison function that was deleting Terragrunt-specific config values from the source configuration. Enables GCS authentication using a fixed token defined in the GOOGLE_OAUTH_ACCESS_TOKEN env var.
  • v0.23.23: Fixes a bug where having terragrunt_version_constraint in a config caused terragrunt to crash when running xxx-all commands.
  • v0.23.24: The xxx-all commands will now ignore the Terraform data dir (default .terraform) when searching for Terragrunt modules.
  • v0.23.25: This release fixes a bug in the get_terraform_cli_args function where it would crash when there were no CLI args passed to terraform.
  • v0.23.26: Terragrunt now considers terraform json files (e.g., .tf.json) as valid terraform code when validating if modules contain Terraform.
  • v0.23.27: Terragrunt’s color output should now work correctly on Windows.
  • v0.23.28: Terragrunt will no longer incorrectly print Error with plan: for plan-all commands when there was no error. Terragrunt will now log the Terraform version when debug mode is enabled.
  • v0.23.29: You can now set the terragrunt working directory using the environment variable TERRAGRUNT_WORKING_DIR.
  • v0.23.30: Terragrunt will now correctly parse and check Terraform version numbers for full releases (e.g., Terraform 0.12.23), beta releases (e.g., Terraform 0.13.0-beta2), and dev builds (e.g., Terraform v0.9.5-dev (cad024a5fe131a546936674ef85445215bbc4226+CHANGES)).
  • v0.23.31: Fix a bug where if you set prevent_destroy in a root terragrunt.hcl, it would override any values in child terragrunt.hcl files.
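Putting a couple of the new Terragrunt features above together, a minimal terragrunt.hcl sketch might look like this (the constraint value and input name are illustrative, not from the release notes):

```hcl
# terragrunt.hcl (sketch; constraint value and input name are illustrative)

# Only allow this config to run under Terragrunt versions it was tested with
terragrunt_version_constraint = ">= 0.23.19"

inputs = {
  # get_platform() returns darwin, freebsd, linux, or windows
  platform = get_platform()
}
```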

  • v0.27.4: helm.Rollback now supports rolling back to the previous version by passing in "" for revision. Also, this release fixes a bug for Helm v3 where the rollback command required specifying a namespace.
  • v0.27.5: Kubernetes API dependencies have been updated to 1.18.3.
  • v0.28.0: Updated the shell package to wrap errors with ErrWithCmdOutput so that (a) instead of an unhelpful "FatalError{Underlying: exit status 1}" error message, you now see the stderr output in the error message and (b) you can now programmatically retrieve the stderr stream from errors too.
  • v0.28.1: Added new CheckSsmCommand and WaitForSsmInstance methods that allow you to connect to EC2 instances and execute commands on them using SSM.
  • v0.28.2: Added new OutputMapOfObjects and OutputListOfObjects methods to read Terraform outputs that return nested maps and nested lists.
  • v0.28.3: Terratest will now properly handle nil values in complex object vars (lists, maps, and objects) that are passed to terraform.Options.Vars.
  • v0.28.4: helm.Upgrade and helm.UpgradeE now use the --install flag to automatically install the chart if it is not already installed.
  • v0.28.5: Added new terraform.ApplyAndIdempotent and terraform.InitAndApplyAndIdempotent functions that let you check that your Terraform code is idempotent: that is, that after running apply, there are no changes in a subsequent plan.
  • v0.28.6: All the http-helper methods (e.g., HttpGetE, HttpGetWithRetry, etc.) now use http.DefaultTransport as the default transport. This sets better default values, including timeouts and support for proxies configured via environment variables.
  • v0.28.7: Fix bug in k8s.FindNodeHostnameE (and thus k8s.GetServiceEndpoint as well) where it incorrectly returned a blank hostname for AWS nodes when it did not have a public IP assigned to it.

  • v0.12.21: Improved the Aurora documentation and added a dedicated Aurora Serverless example. This release also adds support for specifying a scaling_configuration_timeout_action when using the aurora module in serverless mode.
  • v0.12.16: You can now change the default behavior of auto minor version upgrades, which applies to the cluster instances.
  • v0.12.17: You can now enable cross-region replication for Aurora by setting source_region and replication_source_identifier to the region and ARN, respectively, of a primary Aurora DB.
  • v0.12.18: Fix issue where restoring from a snapshot wasn’t setting master_password.
  • v0.12.19: The rds module now supports cross-region replication! You can enable it by setting the replicate_source_db input variable to the ARN of a primary DB that should be replicated. Also, added primary_address and read_replica_addresses outputs and docs on how to avoid state drift when using auto minor version upgrades for the rds module.
  • v0.12.20: The efs module can now create EFS access points and corresponding IAM policies for you. Use the efs_access_points input variable to specify what access points you want and configure the user settings, root directory, read-only access, and read-write access for each one.
  • v0.13.0: The rds and aurora modules have been updated to remove redundant/duplicate resources by taking advantage of Terraform 0.12 syntax (i.e., for_each, null defaults, and dynamic blocks). This greatly simplifies the code and makes it more maintainable, but because many resources were renamed, this is a backwards incompatible change, so make sure to follow the migration guide when upgrading!
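As a sketch of the new v0.12.20 access-point feature, an efs module call might look like the following. The attribute names inside each access-point object are assumptions for illustration; consult the efs module docs in module-data-storage for the real schema.

```hcl
# Sketch only: the attribute names inside efs_access_points are assumed for
# illustration -- see the efs module docs for the actual schema.
module "efs" {
  source = ""  # path to the efs module (elided)

  name = "example-efs"

  efs_access_points = {
    example = {
      # POSIX user the access point enforces (illustrative values)
      posix_user = {
        uid            = 1000
        gid            = 1000
        secondary_gids = []
      }
      # Root directory the access point exposes
      root_directory = {
        path        = "/example"
        owner_uid   = 1000
        owner_gid   = 1000
        permissions = 755
      }
      # IAM principals granted read-write / read-only access (illustrative ARN)
      read_write_access = ["arn:aws:iam::123456789012:role/example-role"]
      read_only_access  = []
    }
  }
}
```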

  • v0.32.0: kms-master-key now supports configuring service principal permissions with conditions. As part of this change, the way CloudTrail is set up in the Landing Zone modules has been updated to better support the multi-account configuration. Refer to the updated docs on multi-account CloudTrail for more information.
  • v0.32.1: This minor release includes a number of documentation changes and renames throughout the repository; the suggestion to set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables has been dropped, since most users now use aws-auth or aws-vault; added documentation on using 1Password with aws-auth.
  • v0.32.2: The iam-users module can now associate a public SSH key with each IAM user using the ssh_public_key parameter.

  • v0.9.3: Updated the memcached module to support passing an empty list of allowed CIDR blocks.

  • v0.20.2: The infrastructure-deploy-script now supports running destroy. Note that the threat model of running destroy in the CI/CD pipeline is not well thought out and is not recommended. Instead, directly call the ECS task to run destroy using privileged credentials.
  • v0.21.0: The infrastructure-deployer now supports selecting the container to run in a multi-container deployment for the ecs-deploy-runner. Note that this version of the infrastructure-deployer is only compatible with an ecs-deploy-runner that is deployed with this version.
  • v0.22.0: This release bumps the version of the ALB module used by Jenkins to v0.20.1 to fix an issue related to outputs from the ALB module.
  • v0.22.1: ecs-deploy-runner now outputs the security group used by the ECS task so that you can append additional rules to it.
  • v0.22.2: Added ecs_task_iam_role_arn as an output on the ecs-deploy-runner module.
  • v0.23.0: terraform-update-variable now supports committing updates to a separate branch. Note that as part of this change, the --skip-git option has been updated to take in a value as opposed to being a bare option. If you were using the --skip-git flag previously, you will now need to pass in --skip-git true.
  • v0.23.1: Fix bug where command-args was not flowing properly from the lambda function to the deploy script.

  • v0.8.1: The lambda and lambda-edge modules now support configuring the dead letter queue for subscribing to errors from the functions.

  • v0.20.1: The cluster upgrade script now supports updating to Kubernetes version 1.16. The eks-cloudwatch-container-logs module is also now compatible with Kubernetes version 1.16.
  • v0.20.2: The control plane Python PEX binaries now support long path names on Windows. Previously the scripts were causing errors when attempting to unpack the dependent libraries.
  • v0.20.3: eks-k8s-external-dns now uses a more up-to-date Helm chart to deploy external-dns. Additionally, you can now configure the logging format as either text or json. eks-alb-ingress-controller now supports selecting a different container version of the ingress controller. This can be used to deploy the v2 alpha image with shared ALB support.

  • v0.3.3: The sns module will now allow display names to be up to 100 characters.
  • v0.3.4: The sqs module can now be turned off by setting create_resources = false. When this option is passed in, the module will disable all the resources, effectively simulating a conditional.

  • v0.20.0: You can now bind different containers and ports to each target group created for the ECS service. This can be used to expose multiple containers or ports to existing ALBs or NLBs.
  • v0.20.1: Add new module output ecs_instance_iam_role_id which contains the ID of the aws_iam_role mapped to ecs instances.
  • v0.20.2: The ecs-cluster module now attaches the ecs:UpdateContainerInstancesState permission to the ECS Cluster's IAM role. This is required for automated ECS instance draining (e.g., when receiving a spot instance termination notice).

  • v0.8.8: You can now configure the asg-rolling-deploy module to NOT use ELB health checks during a deploy by setting the use_elb_health_checks variable to false. This is useful for testing connectivity before health check endpoints are available.
  • v0.9.0: The aws_region variable was removed from the asg-rolling-deploy module; its value is now retrieved from the region on the provider. When updating to this new version, make sure to remove the aws_region parameter from the module.
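A minimal sketch combining the two asg-rolling-deploy changes above — no aws_region parameter (the region now comes from the provider) and ELB health checks disabled during deployment. The source path and other parameters are elided or illustrative:

```hcl
provider "aws" {
  region = "us-east-1" # the module now infers the region from the provider
}

module "asg" {
  source = "" # path to the asg-rolling-deploy module (elided)

  # Skip ELB health checks during the rolling deploy, e.g., while testing
  # connectivity before health check endpoints are available
  use_elb_health_checks = false

  # ... (other params omitted) ...
}
```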

  • v0.8.9: The vpc-interface-endpoint module now supports endpoints for SSM, SSM Messages, and EC2 Messages.
  • v0.8.10: This release adds the ability to add tags to the resources created by the vpc-dns-forwarder-rules, vpc-dns-forwarder, and vpc-flow-logs modules by using the tags input variable.
  • v0.8.11: You can now disable VPC endpoints in the vpc-app module by setting the create_vpc_endpoints variable to false.

  • v0.8.3: Added iam_role_name and iam_role_arn outputs to the single-server module. Updated the repo README to the new format.

What happened: HashiCorp has announced a beta program for Terraform 0.13.0.

Why it matters: The major features in Terraform 0.13.0 are:
  • 3rd party providers: allows automatic installation of providers outside the hashicorp namespace. The blog post describes this improvement in more detail.
  • New module features: modules now support count, for_each, and depends_on. We’re especially excited for this, as it makes Terraform modules significantly more powerful and flexible!
What to do about it: We do not recommend upgrading to Terraform 0.13.0 yet. We will be keeping our eye on it, waiting for it to get closer to a final release, and then will test our codebase with it for compatibility. We will announce when our testing is complete and it is safe to upgrade (and if there are any backwards incompatible changes to take into account).
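Once 0.13.0 stabilizes, the new module features will enable patterns like the following sketch, which deploys one copy of a module per region (the module path is hypothetical):

```hcl
# Deploy one copy of the module per region -- not possible before Terraform 0.13,
# since modules previously did not support count or for_each.
module "service" {
  for_each = toset(["us-east-1", "us-west-2"])

  source = "./modules/service" # hypothetical module path
  region = each.value
}
```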

What happened: HashiCorp has announced that they are launching the HashiCorp Cloud Platform (HCP), a fully managed cloud offering of all of HashiCorp’s products.

Why this matters: Up until now, if you wanted to use HashiCorp products such as Consul, Vault, or Nomad, you had to run and maintain them yourself. These are all reasonably complicated distributed systems, so this could be a significant amount of work. Now, HashiCorp is beginning to launch managed offerings, where they will run these tools for you and allow you to use them as a SaaS offering.

What to do about it: Currently, they are only offering early access (private beta) to HCP Consul for AWS. HCP Vault is coming in the future. See the announcement for the full details.

What happened: HashiCorp has announced a new kubernetes-alpha provider for Terraform.

Why it matters: The current Kubernetes provider for Terraform has always lagged significantly behind Kubernetes features, limiting your ability to use Terraform to manage your Kubernetes clusters. The new kubernetes-alpha provider allows you to use native Kubernetes manifests directly—albeit in HCL, rather than in YAML—so you always have access to all features supported by Kubernetes. Moreover, by leveraging a new feature called Server-Side Apply, you will be able to leverage the plan functionality of Terraform, where it shows you the diff of what you’re about to change in your Kubernetes cluster before you run apply to actually deploy that change. Being able to use HCL—with full support for variables, functions, and integration with the rest of your infrastructure—instead of plain-old-YAML may make Terraform a very compelling way to manage your Kubernetes cluster.

What to do about it: Feel free to give the new provider a shot—though bear in mind that it is currently experimental and cannot yet be installed from the Terraform Registry (see the announcement blog post for details)—and let us know what you think! In the meantime, we will be keeping a close eye on this new provider to see how it compares with the current provider, as well as other Kubernetes tools, such as Helm, and will keep you posted on whether we will integrate it into the Gruntwork IaC Library.
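For example, the new provider lets you express a native Kubernetes manifest directly in HCL via a kubernetes_manifest resource. The sketch below is based on the provider’s announcement; the resource name and values are illustrative, and the syntax may change while the provider is experimental:

```hcl
# Sketch of the experimental kubernetes-alpha provider: a ConfigMap expressed
# as a regular Kubernetes object in HCL instead of YAML.
resource "kubernetes_manifest" "example_config_map" {
  provider = kubernetes-alpha

  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example-config"
      namespace = "default"
    }
    data = {
      foo = "bar"
    }
  }
}
```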

What happened: AWS has (finally!) released native support for updating EC2 instances in an Auto Scaling Group (ASG).

Why this matters: ASGs are a great way to run multiple EC2 instances and automatically replace unhealthy instances, but up until now, rolling out changes (e.g., new AMIs) has always been left up to the user (e.g., see the modules in module-asg for how we’ve implemented rolling deployments). Now AWS supports a native way to do rolling updates.

What to do about it: Terraform support is still pending. Once it’s available, we plan on incorporating it into module-asg. PRs are also very welcome!

Below is a list of critical security updates that may impact your services. We notify Gruntwork customers of these vulnerabilities as soon as we know of them via the Gruntwork Security Alerts mailing list. It is up to you to scan this list and decide which of these apply and what to do about them, but most of these are severe vulnerabilities, and we recommend patching them ASAP.

  • USN-4377-1: One of the root CA certificates installed in the ca-certificates package expired. This can cause connectivity issues with certain hosts when attempting to connect over HTTPS. It is recommended to update to the latest version of ca-certificates to ensure you have the most recent certificates.
