Creating Nested Conditional Dynamic Terraform Blocks

While working on an assignment to update the AWS Cognito user pool I had built for the team using Terraform, I ran into a problem: the software development team was revising its custom attributes, and they gave me a heads-up that additional custom attributes may be added at a later date.

When I initially created the user pool, I made the mistake of hard-coding the schema with the expectation that it was set in stone. Instead of continuing to hard-code the attributes, which would be cumbersome to maintain and make my Terraform code longer than necessary, I took on the challenge of refactoring the code.

My weapon of choice? Dynamic Blocks.

Dynamic blocks are great because they keep Terraform code DRY (Don’t Repeat Yourself). You provide a list of data, and dynamic blocks generate the type of blocks you define.

Foundations of Dynamic Blocks

Dynamic blocks have three distinct components:

  • The label, which specifies the type of block you wish to generate dynamically
  • The for_each meta-argument, which references a collection whose elements supply the data for each dynamically generated block
  • The content block, which specifies the body of each generated block

To demonstrate the use of a dynamic block, I will use the example of defining recovery mechanisms for an AWS Cognito user pool:

resource "aws_cognito_user_pool" "main" {
  account_recovery_setting {
    recovery_mechanism {
      name     = "verified_email"
      priority = 1
    }
  }
}

Here’s an example of the same recovery_mechanism block written as a dynamic block:

resource "aws_cognito_user_pool" "main" {
//... other user pool settings omitted for brevity

  account_recovery_setting {
    dynamic "recovery_mechanism" {
      for_each = var.user_pool_account_recovery_mechanisms
      content {
        name     = recovery_mechanism.value["name"]
        priority = recovery_mechanism.value["priority"]
      }
    }
  }
}

The keyword “dynamic” indicates that the block being defined is a dynamic block. “recovery_mechanism” is the type of block to generate dynamically, and it is a block type defined under the aws_cognito_user_pool resource. The for_each meta-argument lets me specify the variable that contains the list of recovery mechanisms:

// terraform.auto.tfvars
user_pool_account_recovery_mechanisms = [
  {
    name     = "verified_email"
    priority = 1
  }
]

The content block specifies where to obtain the values for each recovery mechanism block’s name and priority parameters. Notice that in the content block, when referring to each recovery mechanism, I used “recovery_mechanism.value” followed by the key name as a string in square brackets. This is how dynamic blocks refer to the item being iterated over: you must use the dynamic block’s type as the reference point to access those values.

Nested Dynamic Blocks

Now that we know how dynamic blocks work, how do we define nested dynamic blocks? Under what circumstances should nested dynamic blocks be used? And how can we make nested dynamic blocks conditional?

When Should Nested Dynamic Blocks Be Used?

Nested dynamic blocks should be used when the repeating block you define itself contains a child block.

Going back to my original goal, I have a list of custom user pool attributes that need to be passed to the aws_cognito_user_pool resource. This is a great use case for nested dynamic blocks. Each schema block defines a custom attribute, and within each custom attribute a string_attribute_constraints block may be defined.

resource "aws_cognito_user_pool" "main" {
//...
  schema {
    name                = "my-custom-attribute"
    attribute_data_type = "String"
    required            = false
    mutable             = true

    string_attribute_constraints {
      min_length = 4
      max_length = 256
    }
  }
}

Defining Nested Dynamic Blocks

It comes as no surprise that nested dynamic blocks have the same core components as regular dynamic blocks, so they work in much the same way. The trick, however, is figuring out how to structure the data to take advantage of nested dynamic blocks.

The solution is to use a map:

  {
    name                = "my-custom-attribute"
    attribute_data_type = "String"
    is_required         = false
    is_mutable          = true

    string_attribute_constraints = [
      {
        min_length = 4
        max_length = 256
      }
    ]
  }

In our Terraform codebase, this is what my code looks like:

resource "aws_cognito_user_pool" "main" {
  dynamic "schema" {
    for_each = var.user_pool_custom_attributes
    content {
      name                = schema.value["name"]
      attribute_data_type = schema.value["attribute_data_type"]
      mutable             = schema.value["is_mutable"]
      required            = schema.value["is_required"]

      dynamic "string_attribute_constraints" {
        for_each = lookup(schema.value, "string_attribute_constraints", [])
        content {
          min_length = string_attribute_constraints.value["min_length"]
          max_length = string_attribute_constraints.value["max_length"]
        }
      }
    }
  }
}

In the inner dynamic block, I define the “string_attribute_constraints” block as dynamic. Notice that the for_each argument uses the Terraform function lookup, which checks for a “string_attribute_constraints” key within the map. This is how nested dynamic blocks obtain their data.

for_each pulls in a list of maps to iterate over; you access each item by calling “string_attribute_constraints.value” and providing the key name in square brackets. As shown above, dynamic blocks let you reference the item being iterated over by using the dynamic block’s type.

Making Nested Dynamic Blocks Optional

We now have Cognito user pool custom attributes generated as schema blocks, with “string_attribute_constraints” blocks generated in a consistent manner. The only issue is that, for each custom attribute we define right now, the “string_attribute_constraints” block is always generated. This is problematic because a custom attribute’s data type can be a string or a number.

To make the string_attribute_constraints block conditional, we lean on the fact that the for_each meta-argument instructs Terraform to iterate through a list or map. All we have to do is create a list with a single map as the nested value. For custom attributes that do not require a string_attribute_constraints block, we simply omit the attribute: the lookup function falls back to the default we provide, which in this case is an empty list, so the dynamic block is not generated.
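
To illustrate (the values below are my own, not from the original codebase), the tfvars data might look like this, with the second attribute omitting the constraints entirely:

```hcl
// terraform.auto.tfvars (illustrative values)
user_pool_custom_attributes = [
  {
    name                = "my-custom-attribute"
    attribute_data_type = "String"
    is_required         = false
    is_mutable          = true

    string_attribute_constraints = [
      {
        min_length = 4
        max_length = 256
      }
    ]
  },
  {
    // No string_attribute_constraints key here: lookup() returns the
    // default empty list, so no constraints block is generated.
    name                = "my-number-attribute"
    attribute_data_type = "Number"
    is_required         = false
    is_mutable          = true
  }
]
```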


env command

The env command prints and sets environment variables. Executing env with no arguments prints all of the environment variables.

Alternatively, the printenv command also prints environment variables.
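
As a quick sketch of the “setting” side, env can also run a command with additional variables visible only to that command, leaving the current shell untouched:

```shell
# Print all environment variables
env

# Run a command with an extra variable set only for that command;
# the current shell's environment is left untouched
env GREETING=hello sh -c 'echo "$GREETING"'
# prints: hello
```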

Automatic IAM User Access Key Rotation

The company I work at uses a hybrid cloud setup. While modern applications can use HashiCorp Vault to provision temporary access, applications that continue to run on legacy on-premises servers cannot. In these scenarios, an IAM user needs to be provisioned to generate an AWS access key and secret access key for programmatic access.

Following best security practices, these keys need to be rotated from time to time. Having many of these access keys generated would increase the team’s operational workload.

Here’s how I automated this process.

AWS Services Used

  • SNS Topic
  • Secrets Manager
  • EventBridge (Formerly known as CloudWatch Events)
  • AWS Lambda
  • Identity Access Management

Workflow

It’s important to get a clear picture of what we’re trying to achieve and how all of these AWS services connect to one another. Here’s how it works:

  • An IAM User is created with Access Key generated
  • Three EventBridge rules trigger a Lambda function based on the number of days elapsed:
    • Every 90 days – Create new access keys
    • Every 104 days – Deactivate the old access key
    • Every 118 days – Delete the old access key
  • The Lambda function receives 1) the IAM username and 2) the action to perform, and performs one of the three actions above on the IAM user.
  • Secrets Manager stores the new access key and holds records of previous access keys

Steps

  1. Create the IAM user and generate AWS Access Key.
Add user with “Programmatic access” type selected
Access Key Generated
  • For all IAM users and roles created, skip attaching or creating IAM policies. This will be revisited once we have created all necessary resources.

2. Create IAM Role for our Lambda function.

  • When creating this role, set the trusted entity type to “AWS service” and the use case to “Lambda”.

3. Create a Secrets Manager secret with the secret name matching the IAM username from step 1.

Add new secret
  • Select “Other type of secrets” option as secret type
  • For secret key names, use “AccessKey” for IAM Access Key and “SecretKey” for IAM secret access key
  • Keep key rotation disabled

4. Create Lambda Function that will process IAM key rotation requests.

Use the following code for the Lambda function. Be sure to modify the parts of the code surrounded by angle brackets.

import json
import boto3
from botocore.exceptions import ClientError

AWS_REGION_NAME = '<your region here>'
SNS_TOPIC_ARN = '<your sns topic arn>'
ACCESS_KEY_SECRET_NAME = '<iam username>'

iam = boto3.client('iam')
secretmanager = boto3.client('secretsmanager')
sns = boto3.client('sns', region_name=AWS_REGION_NAME)

def create_key(iam_username):
    '''Generates a new access key on behalf of the user and stores the new
    access key in secrets manager. Then, send a notification email to users to
    notify them to rotate the key for their applications. It returns
    a JSON with status 200 if successful and 500 if error occurs.

    Arguments:
    iam_username - The iam user's username as a string.
    '''

    try:
        response = iam.create_access_key(UserName=iam_username)
        access_key = response['AccessKey']['AccessKeyId']
        secret_key = response['AccessKey']['SecretAccessKey']
        json_data = json.dumps(
            {'AccessKey': access_key, 'SecretKey': secret_key})
        secretmanager.put_secret_value(
            SecretId=iam_username, SecretString=json_data)

        emailmsg = 'Hello,\n\n' \
            'A new access key has been created for key rotation. \n\n' \
            f'Access Key Id: {access_key}\n' \
            f'Secrets Manager Secret Id: {iam_username}'

        emailmsg = f'{emailmsg}\n\n' \
            f'Please obtain the new access key information from ' \
            'secrets manager using the secret Id provided above in ' \
            f'{AWS_REGION_NAME} and update your application within 14 days ' \
            'to avoid interruption.\n'

        sns.publish(TopicArn=SNS_TOPIC_ARN, Message=emailmsg,
                    Subject=f'AWS Access Key Rotation: New key is available for '
                            f'{iam_username}')
        print(f'New access key has been created for {iam_username}')
        return {'status': 200}
    except ClientError as e:
        print(e)
        return {"status": 500}


def deactive_key(iam_username):
    '''Finds the secret that stores the user's previous access key
    and mark it as inactive. Then, send a notification email to users to remind
    them to rotate the key for their applications. It returns
    a JSON with status 200 if successful and 500 if error occurs.

    Arguments:
    iam_username - The iam user's username as a string.
    '''

    try:
        previous_secret_value = secretmanager.get_secret_value(
            SecretId=iam_username, VersionStage='AWSPREVIOUS')
        previous_secret_data = json.loads(
            previous_secret_value['SecretString'])
        previous_access_key = previous_secret_data['AccessKey']

        print(
            f'deactivating access key {previous_access_key} '
            f'for IAM user {iam_username}')

        iam.update_access_key(AccessKeyId=previous_access_key,
                              Status='Inactive', UserName=iam_username)

        emailmsg = f'Hello,\n\n' \
            f'The previous access key {previous_access_key}'

        emailmsg = f'{emailmsg} has been disabled for {iam_username}.\n\n' \
            f'This key will be deleted in the next 14 days. ' \
            f'If your application has lost access, be sure to update the ' \
            f'access key.\n You can find the new key by looking up the secret ' \
            f'"{iam_username}" under secrets manager via AWS Console ' \
            f'in {AWS_REGION_NAME}.\n\n'
 
        sns.publish(
            TopicArn=SNS_TOPIC_ARN, Message=emailmsg,
            Subject='AWS Access Key Rotation: Previous key deactivated for '
                    f'{iam_username}')
        print('Access key has been deactivated')
        return {'status': 200}
    except ClientError as e:
        print(e)
        return {'status': 500}


def delete_key(iam_username):
    '''Deletes the deactivated access key in the given iam user. Returns
    a JSON with status 200 if successful, 500 for error and 400 for
    if secrets don't match

    Arguments:
    iam_username - The iam user's username as a string.
    '''
    try:
        previous_secret_value = secretmanager.get_secret_value(
            SecretId=iam_username, VersionStage='AWSPREVIOUS')
        previous_secret_string = json.loads(
            previous_secret_value['SecretString'])
        previous_access_key_id = previous_secret_string['AccessKey']
        print(f'previous_access_key_id: {previous_access_key_id}')
        keylist = iam.list_access_keys(UserName=iam_username)[
            'AccessKeyMetadata']

        for key in keylist:
            key_status = key['Status']
            key_id = key['AccessKeyId']

            print(f'key id: {key_id}')
            print(f'key status: {key_status}')

            if key_status == "Inactive":
                if previous_access_key_id == key_id:
                    print('Deleting previous access key from IAM user')
                    iam.delete_access_key(
                        UserName=iam_username, AccessKeyId=key_id)
                    print(f'Previous access key: '
                          f'{key_id} has been deleted for user '
                          f' {iam_username}.')
                    return {'status': 200}
                else:
                    print(
                        'secret manager previous value doesn\'t match with '
                        'inactive IAM key value')
                    return {'status': 400}
            else:
                print('previous key is still active')
        return {'status': 200}
    except ClientError as e:
        print(e)
        return {'status': 500}


def lambda_handler(event, context):
    action = event["action"]
    iam_username = event["username"]
    status = {'status': 500}

    print(f'Detected Action: {action}')
    print(f'Detected IAM username: {iam_username}')

    if action == "create":
        status = create_key(iam_username)
    elif action == "deactivate":
        status = deactive_key(iam_username)
    elif action == "delete":
        status = delete_key(iam_username)

    return status

5. Create an EventBridge rule to trigger creating the access key.

  • Select “default” event bus
  • Define the pattern to use “Schedule”
    • Set Fixed Rate to every 90 days
  • Set the target as “Lambda function”
    • Set the Function to the lambda function name in step 4
    • Set “Configure input” setting to “Constant (JSON text)”
      • Set value to { “action”: “create”, “username”: “<the iam username in step 1>” }
  • Add tags as necessary

6. Create an EventBridge rule to trigger deactivating the access key.

  • Select “default” event bus
  • Define the pattern to use “Schedule”
    • Set Fixed Rate to every 104 days
  • Set the target as “Lambda function”
  • Set the Function to the lambda function name in step 4
    • Set “Configure input” setting to “Constant (JSON text)”
      • Set value to { “action”: “deactivate”, “username”: “<the iam username in step 1>” }

7. Create an EventBridge rule to trigger deleting the deactivated access key.

  • Select “default” event bus
  • Define the pattern to use “Schedule”
    • Set Fixed Rate to every 118 days
  • Set the target as “Lambda function”
  • Set the Function to the lambda function name in step 4
    • Set “Configure input” setting to “Constant (JSON text)”
      • Set value to { “action”: “delete”, “username”: “<the iam username in step 1>” }

8. Create an IAM policy to enable our Lambda function to 1) access the Secrets Manager secret, 2) manage the IAM user’s access keys, and 3) publish to the SNS topic.

{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "secretsmanager:GetSecretValue",
          "secretsmanager:PutSecretValue"
        ],
        "Resource": "<secrets manager secret arn>"
      },
      {
        "Effect": "Allow",
        "Action": [
          "iam:UpdateAccessKey",
          "iam:CreateAccessKey",
          "iam:DeleteAccessKey"
        ],
        "Resource": "<iam user arn>"
      },
      {
        "Effect": "Allow",
        "Action": "iam:ListAccessKeys",
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": "sns:Publish",
        "Resource": "<sns topic arn>"
      }
    ]
}

9. Create an IAM policy to grant the IAM user permission to access the Secrets Manager secret that stores the AWS access key and secret access key.

{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
           <list of actions the API access should grant>
        ],
        "Resource": [
           "<the resources access should be granted to>"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "secretsmanager:GetSecretValue"
        ],
        "Resource": "<secrets manager secret arn>"
      }
   ]
}

10. Revisit IAM user and attach the new policy created in step 9.

11. Revisit IAM Lambda role and attach the new policy created in step 8.

12. Attach AWS Managed IAM Policy “AWSLambdaBasicExecutionRole” to IAM lambda role.

13. Revisit the Secrets Manager secret from step 3 and add the following policy under “Resource Permissions”:

{
  "Version" : "2012-10-17",
  "Statement" : [ {
    "Sid" : "AllowLambdaFunctionReadWriteAccess",
    "Effect" : "Allow",
    "Principal" : {
      "AWS" : "<lambda iam role>"
    },
    "Action" : [ "secretsmanager:GetSecretValue", "secretsmanager:PutSecretValue" ],
    "Resource" : "<the arn of the secret>"
  }, {
    "Sid" : "AllowIAMUserReadAccess",
    "Effect" : "Allow",
    "Principal" : {
      "AWS" : "<the arn of the iam user>"
    },
    "Action" : "secretsmanager:GetSecretValue",
    "Resource" : "<the arn of the secret>"
  } ]
}

Conclusion

By using EventBridge rules, we can set schedules that trigger the Lambda function and pass the event data needed to process key rotation. In our design, we give developers 14 days to rotate their keys, plus an additional 14-day grace period before the old keys are deleted permanently. Notifications are delivered by subscribing the software developers to an SNS topic.

We use Secrets Manager to store the access key ID and secret access key. Since Secrets Manager maintains versions of each secret, the Lambda function can use the previous version to look up the last issued access key ID and deactivate it during rotation.

An IAM user and policy are set up to grant the user access to the access key secret, along with whatever permissions the software developers need. An IAM role and policy allow the Lambda function to execute via the AWSLambdaBasicExecutionRole managed policy, with a custom policy granting access to the access key secret and to the SNS topic for publishing notifications during key rotation and deactivation.

Coming Soon

In this post, you’ve seen how I build these resources manually using AWS Console. Benhur P.’s post provides you with an overview of how to create the same resources using CloudFormation.

In a follow-up post at a later date, I will demonstrate how this can be done using Terraform, a tool that I use on a daily basis.


lsof command

The lsof command is short for “list open files”. It lists all files that are open on the operating system. This is great for troubleshooting networking issues because every open socket on a Linux/Unix operating system is treated as a file.

Usage

lsof -i <protocol><@hostname or host address>:<service or port>

-i by default lists both IPv4 and IPv6 addresses, prioritizing IPv4. If you prefer the tool to list only one, specify a 4 or 6 after the “i” (example: lsof -i4 …).

Example of Searching TCP Connection By App

The following will list all sockets opened with TCP protocol for Brave Browser.

lsof -i tcp | grep Brave

Example Output

Brave\x20 10426 user   23u  IPv4 0x1bf754502b0433d3      0t0  TCP 192.168.1.3:63665->ec2-52-37-64-206.us-west-2.compute.amazonaws.com:https (ESTABLISHED)
Brave\x20 10426 user   35u  IPv4 0x1bf754502d4c1313      0t0  TCP 192.168.1.3:61523->192.0.78.23:https (ESTABLISHED)

Example of Searching TCP Connection By Port

The following will list all sockets opened over TCP on port number 10426:

lsof -i tcp:10426

Example Output

Brave\x20 10426 user   23u  IPv4 0x1bf754502b0433d3      0t0  TCP 192.168.1.3:63665->ec2-52-37-64-206.us-west-2.compute.amazonaws.com:https (ESTABLISHED)
Brave\x20 10426 user   35u  IPv4 0x1bf754502d4c1313      0t0  TCP 192.168.1.3:61523->192.0.78.23:https (ESTABLISHED)

By doing so you can see that TCP port 10426 is being used by Brave Browser.

top command

As someone who started using computers by clicking around with a mouse, I relied heavily on apps like “Activity Monitor” on the Mac or “Task Manager” on Windows to show me the status of applications and resource consumption.

As I dive deeper into the world of Linux, the command line interface (CLI) is the currency that software developers and system administrators rely on. This is especially important when you need to SSH into another machine to troubleshoot issues.

The top command is the CLI replacement for those friendly graphical counterparts, listing the same information clearly in a single window.

Usage

top
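
top runs interactively by default. On Linux (procps top) you can also take a single non-interactive snapshot, which is handy for scripts or for capturing output over SSH; note that macOS uses top -l 1 instead:

```shell
# One non-interactive snapshot on Linux (procps top);
# in interactive mode, press q to quit
top -b -n 1 | head -n 15
```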

IP Command

The IP command is based on iproute2, a collection of utilities for controlling and monitoring networking in the Linux Kernel. It is an open-source project that is closely tied to networking components in the Linux Kernel.

A good starting point is the “address” subcommand, which helps when troubleshooting network interfaces for general network issues.

This command replaces the outdated ifconfig command that was common across Linux distributions.

Installation

This tool does not come prepackaged with macOS. To get the iproute2 package on the Mac, use Homebrew to install it.

brew install iproute2mac

Usage

  • ip a show
  • ip a show <networking interface>

The subcommand “a” is short for “address”, which is used to display information about a networking interface.

Examples

  • ip a show en0

Example Output

en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 19:ef:ab:dc:2a:39
	inet6 fc10::120:a132:1ab:3f17/64 secured scopeid 0x5
	inet 192.168.1.3/24 brd 10.0.1.255 en0

Python JSON.Tool

Some CLIs spit out JSON as their response. Those responses tend to be jumbled into a single line with poor formatting, making them nearly impossible for a human to read.

Luckily, Python’s standard library includes a wonderful JSON tool that can format the output into a readable form.

Tools Needed

  • Python v2.7+

Usage

python -m json.tool <filename>

Example

python -m json.tool my-json-file.json

Example Input

{ "data":{"days":5,"world":{"name":"earth","neighbours": ["sun","mercury","venus"] }}}

Example Output

{
    "data": {
        "days": 5,
        "world": {
            "name": "earth",
            "neighbours": [
                "sun",
                "mercury",
                "venus"
            ]
        }
    }
}
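
Under the hood, json.tool is a thin wrapper around Python’s json module, so the same formatting can be done in a script (the sample string below reuses the example input above):

```python
import json

# The compact, single-line JSON from the example input above
raw = '{ "data":{"days":5,"world":{"name":"earth","neighbours": ["sun","mercury","venus"] }}}'

# json.tool is essentially json.loads followed by json.dumps with indentation
pretty = json.dumps(json.loads(raw), indent=4)
print(pretty)
```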

Curl Command

curl is a great tool for troubleshooting application connectivity issues. It can also download files from specified endpoints on the Internet; for our purposes, we will focus on troubleshooting.

Troubleshooting Approach

  1. Run CURL against Web Application Domain
  2. Run CURL against web services and databases the web application uses

Steps 1 and 2 tell us where the root problem is. If step 1 fails but step 2 succeeds, the issue most likely lies with either 1) the web application itself or 2) the domain name lookup process.

If the issue lies within the web application, then inspecting the domain controllers will help us determine the cause.

If we determine the issue to be in the domain name server look up process, we may need to revisit the DNS configurations and ensure that our domain has been registered properly with a name server that will be able to route traffic to our web application.

Common CURL Options

The two curl options we will use today are -I (as in uppercase i) and -s. The -I option displays the response header information, while -s silences curl’s progress output. Together they let us focus on the HTTP response status code to determine the root cause.

Command Usage Example

  • curl -I -s my-website.com
  • curl -I -s my-webservice.com:3001

Notice that in the second example, I’ve provided a port number. This is common with web services that use a specific designated port.

Sys Admin Commands Series

Having graduated from a software development program in 2020, I found myself in a full-time position as a Cloud Operations Analyst. The position has brought tremendous challenge and learning, since much of it wasn’t part of my training, despite some background in operating systems and Linux commands.

Here is a list of technologies that I encounter on a daily basis:

  • Kubernetes
    • Rancher Kubernetes
    • Kube2IAM
    • Helm
    • Kustomize
  • Amazon Web Services
    • AWS CLI
    • Boto3 SDK
    • AWS JavaScript SDK
    • AWS SAM CLI
  • HashiCorp Terraform
  • Gruntwork Terragrunt
  • HashiCorp Vault
  • GitLab
    • Repositories
    • CI-CD pipeline
  • Black Duck
  • Twistlock/Prisma
  • Linux Commands in general

At first blush, a sys admin’s job looked simple. Provisioning infrastructure is as easy as 1, 2, oh wait, I have to deal with the long list above?

That’s right. Not only that, software development teams within the company expect you to be an expert in all of these areas, including every AWS service; if they run into issues, they expect you to help troubleshoot their applications on top of that.

Even after a year of working with these tools, I feel I’m only scratching the surface.

With my new mission of becoming a better system administrator with a software development background, the next series of posts will cover lessons learned as I tackle daily tasks, as well as new skills I’m developing to rapidly fill my knowledge gaps.

I feel the right path forward is to focus on the basics. One way of doing this is by getting more familiar with Linux and Unix like operating system commands. Luckily I use a Mac at home and at work so there are plenty of opportunities for me to practice.

This series will be split into 10 parts. Each part will describe a single Linux/Unix command that I’m learning for the week. In the next post, you will find the very first command.

How To Add Cognito UserPool Client Access Token and ID Token Using Terraform

At the time of this write-up, if you look at Terraform’s documentation for aws_cognito_user_pool_client, you will notice that there is an attribute called “refresh_token_validity”. At the same time, you might also notice that there is no attribute for “access_token_validity” or “id_token_validity”.

The reason is that AWS did not implement this capability in its API until August 12, 2020.

Terraform has since added this ability; you must ensure your AWS Terraform provider is version 3.32.0 or above to use this feature.

Here’s an example of how to use it.
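
Here is a minimal sketch, assuming a user pool client named “main” (the resource names and numbers below are illustrative, not from my actual codebase):

```hcl
resource "aws_cognito_user_pool_client" "main" {
  name         = "my-app-client"
  user_pool_id = aws_cognito_user_pool.main.id

  // Numbers are interpreted in the units declared below
  access_token_validity  = 1    // 1 hour
  id_token_validity      = 1    // 1 hour
  refresh_token_validity = 30   // 30 days

  token_validity_units {
    access_token  = "hours"
    id_token      = "hours"
    refresh_token = "days"
  }
}
```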

Code Explanation

By adding the token_validity_units block, you specify the units for each token’s validity. Based on the updated Terraform documentation, these values and the block itself are optional; if you do not specify them, the defaults are used.

Token Validity Units Defaults

  • Access Token: Hours
  • Id Token: Hours
  • Refresh Token: Days

Once you’ve specified token validity units, the numbers you provide for access_token_validity, id_token_validity, and refresh_token_validity will be interpreted in those units.

Beware Of Gotchas

Thanks to this powerful new block, you may be tempted to provide fine-grained numbers that specify validity down to one second. What Terraform’s documentation does not tell you is that AWS imposes hard limits on the token validity range of each token type. Terraform cannot escape the laws of AWS API limits. Failure to obey them results in “InvalidParameterException: Invalid range for token validity”.

Here are the limits outlined by AWS:

  • Refresh Token: 60 mins – 3650 days (10 years)
  • Access Token: 5 mins – 1 day (Must be less than or equal to refresh token expiry)
  • ID Token: 5 mins – 1 day (Must be less than or equal to refresh token expiry)

Happy Terraforming 🙂