Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible after the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
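
    On a booted instance you can check which release and update channel it is running. A quick check, assuming the default file locations on Flatcar:

    # Release version of the running system
    cat /etc/os-release
    # Update group (channel); the default is shipped under /usr/share/flatcar
    cat /usr/share/flatcar/update.conf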

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3760.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-03c4d59dad9754ae0 Launch Stack
    HVM (arm64) ami-0d8179a2849f5b148 Launch Stack
    ap-east-1 HVM (amd64) ami-08420884fa3340826 Launch Stack
    HVM (arm64) ami-00ea4550738ab24dc Launch Stack
    ap-northeast-1 HVM (amd64) ami-053686c1db5c3fa70 Launch Stack
    HVM (arm64) ami-069dd6d086940a420 Launch Stack
    ap-northeast-2 HVM (amd64) ami-016d3b2081242a3ac Launch Stack
    HVM (arm64) ami-02d4227193df4197d Launch Stack
    ap-south-1 HVM (amd64) ami-07f602bd4e96c2751 Launch Stack
    HVM (arm64) ami-0d589c50b054178ac Launch Stack
    ap-southeast-1 HVM (amd64) ami-03ab243c81283cdf5 Launch Stack
    HVM (arm64) ami-05546e8a1dc7e31c4 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0b7384cc3053e1361 Launch Stack
    HVM (arm64) ami-0a5651d8df7b4faa8 Launch Stack
    ap-southeast-3 HVM (amd64) ami-096b3aaad1e90471b Launch Stack
    HVM (arm64) ami-0e4753c95c6985838 Launch Stack
    ca-central-1 HVM (amd64) ami-08db79ca7609b3e51 Launch Stack
    HVM (arm64) ami-056423a1683c1c979 Launch Stack
    eu-central-1 HVM (amd64) ami-094b7d060f9c5e2c5 Launch Stack
    HVM (arm64) ami-02cabc6ea0fcffba3 Launch Stack
    eu-north-1 HVM (amd64) ami-0bc8600c771157620 Launch Stack
    HVM (arm64) ami-02636c4341b42d5a7 Launch Stack
    eu-south-1 HVM (amd64) ami-0bed419d45e347384 Launch Stack
    HVM (arm64) ami-0c006aa63f9164cb1 Launch Stack
    eu-west-1 HVM (amd64) ami-0dec16d7fedd9e0ba Launch Stack
    HVM (arm64) ami-09e80d254658d0677 Launch Stack
    eu-west-2 HVM (amd64) ami-0fd6113e629596ebd Launch Stack
    HVM (arm64) ami-0d41eada807935fa6 Launch Stack
    eu-west-3 HVM (amd64) ami-0217b8220cc849523 Launch Stack
    HVM (arm64) ami-04303beb3fb155c58 Launch Stack
    me-south-1 HVM (amd64) ami-02110a0d567a85dde Launch Stack
    HVM (arm64) ami-0ad9064d3b60b6df7 Launch Stack
    sa-east-1 HVM (amd64) ami-03ff73e6c890341d9 Launch Stack
    HVM (arm64) ami-074502cca0fc5327d Launch Stack
    us-east-1 HVM (amd64) ami-06737f85f51b291a7 Launch Stack
    HVM (arm64) ami-0c2831248f0fa190d Launch Stack
    us-east-2 HVM (amd64) ami-04de20dcdb0f117a0 Launch Stack
    HVM (arm64) ami-0019a3fbb12fdd580 Launch Stack
    us-west-1 HVM (amd64) ami-015264ac13023c349 Launch Stack
    HVM (arm64) ami-0fbc8a0bf9b1c38be Launch Stack
    us-west-2 HVM (amd64) ami-089f1c3ad7b8d4738 Launch Stack
    HVM (arm64) ami-0934c5b544941698b Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3745.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0ed24f3412e0a68cc Launch Stack
    HVM (arm64) ami-0cbd30ad64123ac88 Launch Stack
    ap-east-1 HVM (amd64) ami-07d7dab8bf7acdc81 Launch Stack
    HVM (arm64) ami-060a56ddc05dbe54f Launch Stack
    ap-northeast-1 HVM (amd64) ami-0e69bf1f4ee97ac64 Launch Stack
    HVM (arm64) ami-022258889d5d8fa7f Launch Stack
    ap-northeast-2 HVM (amd64) ami-0e8f03b31f48d06b2 Launch Stack
    HVM (arm64) ami-0b963c4dd773caf94 Launch Stack
    ap-south-1 HVM (amd64) ami-0b11b31f6385e09a0 Launch Stack
    HVM (arm64) ami-0d4981f571000e3ff Launch Stack
    ap-southeast-1 HVM (amd64) ami-0413956144e899b9a Launch Stack
    HVM (arm64) ami-0478a65db4611e02f Launch Stack
    ap-southeast-2 HVM (amd64) ami-0efed3a3c21bc5378 Launch Stack
    HVM (arm64) ami-06de6e540dc0f4a69 Launch Stack
    ap-southeast-3 HVM (amd64) ami-017500726f2dbc251 Launch Stack
    HVM (arm64) ami-049408dbe89287c16 Launch Stack
    ca-central-1 HVM (amd64) ami-06415e2ae96882589 Launch Stack
    HVM (arm64) ami-04465164f55fce4b3 Launch Stack
    eu-central-1 HVM (amd64) ami-0451ace1b81ce0387 Launch Stack
    HVM (arm64) ami-02b2185181d353310 Launch Stack
    eu-north-1 HVM (amd64) ami-0afbd0bf15309fbd2 Launch Stack
    HVM (arm64) ami-0c3e0e6ff069f7449 Launch Stack
    eu-south-1 HVM (amd64) ami-02d637c8319248492 Launch Stack
    HVM (arm64) ami-0cddb277c203f60c1 Launch Stack
    eu-west-1 HVM (amd64) ami-0be275a8455c54fea Launch Stack
    HVM (arm64) ami-07bd05da39958b045 Launch Stack
    eu-west-2 HVM (amd64) ami-07be45219299dc547 Launch Stack
    HVM (arm64) ami-0c76cb97385bc5ab7 Launch Stack
    eu-west-3 HVM (amd64) ami-09d6028bb85fa6754 Launch Stack
    HVM (arm64) ami-043632b0c2f1d0ee0 Launch Stack
    me-south-1 HVM (amd64) ami-0a241f92e761cb9f6 Launch Stack
    HVM (arm64) ami-013e2de2cc0fdb09b Launch Stack
    sa-east-1 HVM (amd64) ami-0010e6a4e9c56f1a8 Launch Stack
    HVM (arm64) ami-0d179041acf659915 Launch Stack
    us-east-1 HVM (amd64) ami-0ef5c53c3cdac0ea1 Launch Stack
    HVM (arm64) ami-0420e71fd58b20c59 Launch Stack
    us-east-2 HVM (amd64) ami-07a844b409b74432c Launch Stack
    HVM (arm64) ami-0454607287e715abf Launch Stack
    us-west-1 HVM (amd64) ami-02a60f44342871f66 Launch Stack
    HVM (arm64) ami-08bd898eb9c9beee9 Launch Stack
    us-west-2 HVM (amd64) ami-093d4b3adac4cee77 Launch Stack
    HVM (arm64) ami-091becc77e8951b85 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3602.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-097488680d71eca73 Launch Stack
    HVM (arm64) ami-07902907a9277bec4 Launch Stack
    ap-east-1 HVM (amd64) ami-0f07205ae67bbc643 Launch Stack
    HVM (arm64) ami-04efca7fa4d881009 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0be127e34b9de2d91 Launch Stack
    HVM (arm64) ami-0133f5f094da51865 Launch Stack
    ap-northeast-2 HVM (amd64) ami-00a28d7b6c3a2941f Launch Stack
    HVM (arm64) ami-0957f16c337a645ee Launch Stack
    ap-south-1 HVM (amd64) ami-099f89600a3e7be42 Launch Stack
    HVM (arm64) ami-0393c7d1c9a619d5c Launch Stack
    ap-southeast-1 HVM (amd64) ami-0e0f99ffaff15079c Launch Stack
    HVM (arm64) ami-0bbeab2a3071724a5 Launch Stack
    ap-southeast-2 HVM (amd64) ami-014e556d2ec06e2f5 Launch Stack
    HVM (arm64) ami-0483328d88c637fb4 Launch Stack
    ap-southeast-3 HVM (amd64) ami-04e436d2335818662 Launch Stack
    HVM (arm64) ami-0c2d9fbccef95185e Launch Stack
    ca-central-1 HVM (amd64) ami-04a8af4c3cad13e04 Launch Stack
    HVM (arm64) ami-03b560530179c7e4f Launch Stack
    eu-central-1 HVM (amd64) ami-067d5917875d02d3a Launch Stack
    HVM (arm64) ami-07119ef446475c695 Launch Stack
    eu-north-1 HVM (amd64) ami-01d0b8725ca46b427 Launch Stack
    HVM (arm64) ami-03b03f4c8401e4866 Launch Stack
    eu-south-1 HVM (amd64) ami-09305d5acfe5301a1 Launch Stack
    HVM (arm64) ami-05ed57da8b6cb29c0 Launch Stack
    eu-west-1 HVM (amd64) ami-00608b016ff882bc5 Launch Stack
    HVM (arm64) ami-0602f82f93459aac2 Launch Stack
    eu-west-2 HVM (amd64) ami-083d828d4f91e462a Launch Stack
    HVM (arm64) ami-0e301976a025fdb4f Launch Stack
    eu-west-3 HVM (amd64) ami-02afddc708a6316e2 Launch Stack
    HVM (arm64) ami-09b57258b09fc8145 Launch Stack
    me-south-1 HVM (amd64) ami-0294e745bbb404605 Launch Stack
    HVM (arm64) ami-013e983579e407111 Launch Stack
    sa-east-1 HVM (amd64) ami-04c86686205b2968c Launch Stack
    HVM (arm64) ami-08b79cf58ac308618 Launch Stack
    us-east-1 HVM (amd64) ami-068c9f344cfa33323 Launch Stack
    HVM (arm64) ami-0c7741fb0d4396bb5 Launch Stack
    us-east-2 HVM (amd64) ami-0108e09121726fc88 Launch Stack
    HVM (arm64) ami-0ffb241e989053bd2 Launch Stack
    us-west-1 HVM (amd64) ami-0b2bc1a1493b4b95f Launch Stack
    HVM (arm64) ami-07f607c7b1ddd90c7 Launch Stack
    us-west-2 HVM (amd64) ami-0acf5f8c6f38cfe9e Launch Stack
    HVM (arm64) ami-07a43af77a2df7312 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
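
    The resulting ignition.json can then be passed as instance user data. For example, with the AWS CLI (a sketch; the key pair name and security group ID are placeholders, and the AMI ID is the current us-east-1 Alpha amd64 image from the table above):

    aws ec2 run-instances \
      --region us-east-1 \
      --image-id ami-06737f85f51b291a7 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json \
      --count 1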
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
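
    After boot you can confirm that the unit ran and the filesystem is mounted (nothing Flatcar-specific here):

    systemctl status media-ephemeral.mount
    df -h /media/ephemeral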
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
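
    An existing public key can also be registered with EC2 from the command line; a sketch, assuming your key is at ~/.ssh/id_rsa.pub and the key name is up to you:

    aws ec2 import-key-pair \
      --key-name flatcar-key \
      --public-key-material fileb://~/.ssh/id_rsa.pub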

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.
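
    A stack can also be created from the command line with a new name each time; a sketch only, since the template URL and its parameter names depend on the template you download from S3 (all values below are placeholders):

    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url https://<bucket>.s3.amazonaws.com/<flatcar-template>.json \
      --parameters ParameterKey=<ParameterName>,ParameterValue=<value>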

    Manual setup

    TL;DR: launch three instances of ami-06737f85f51b291a7 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” for each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
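
    The same security group can also be set up with the AWS CLI; a sketch, with the VPC and group IDs as placeholders (the group references itself as source for the etcd ports):

    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --vpc-id vpc-0123456789abcdef0
    # Open SSH to the world
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Allow etcd traffic between members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port "$port" \
        --source-group sg-0123456789abcdef0
    done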

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-06737f85f51b291a7 (amd64), Beta ami-0ef5c53c3cdac0ea1 (amd64), or Stable ami-068c9f344cfa33323 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys provided in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
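
    A sketch of the upload and import steps with the AWS CLI (the bucket name is a placeholder, and VM Import/Export requires its service role, usually called vmimport, to be configured beforehand):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar VMDK import" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"
    # Poll until the task completes and note the resulting snapshot ID
    aws ec2 describe-import-snapshot-tasks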

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
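
    The AMI registration can be done from the command line as well; a sketch, with the snapshot ID as a placeholder:

    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"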

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a cl/machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
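
    The addresses can be printed again at any time via the output defined above:

    terraform output ip-addresses
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@<ip address>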

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.