Migrate bare-metal module Terraform v0.11 to v0.12

* Replace v0.11 bracket type hints with Terraform v0.12 list expressions
* Use expression syntax instead of interpolated strings, where suggested
* Update bare-metal tutorial
* Define the `clc_snippets` type constraint as `map(list(string))` (see the sketch below)
* Define Terraform and plugin version requirements in versions.tf
  * Require matchbox ~> 0.3.0 to support Terraform v0.12
  * Require ct ~> 0.3.2 to support Terraform v0.12
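As a rough, illustrative sketch of the syntax changes listed above (simplified, not lifted verbatim from the diff below), a v0.12 variable with a first-class type constraint and an expression without interpolation wrappers look like this; the v0.11 forms are noted in comments:

```tf
variable "controller_domains" {
  type        = list(string)   # v0.11: type = "list"
  description = "Ordered list of controller FQDNs"
}

variable "clc_snippets" {
  type    = map(list(string))  # v0.11: type = "map" (no element type)
  default = {}
}

locals {
  # v0.12 expression syntax: pass the list directly.
  # v0.11 required the bracket/interpolation form: ["${var.controller_domains}"]
  etcd_servers = var.controller_domains
}
```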
Dalton Hubble 2019-05-28 19:19:23 -07:00
parent 28506df9c7
commit db36959178
11 changed files with 199 additions and 228 deletions

View File

@@ -2,32 +2,33 @@
Notable changes between versions.
#### AWS
## Latest
* Migrate from Terraform v0.11 to v0.12.x (**action required!**)
* Require `terraform-provider-aws` v2.7+ to support Terraform v0.12
* Require `terraform-provider-ct` v0.3.2+ to support Terraform v0.12
* Require `terraform-provider-ct` v0.3.2+ to support Terraform v0.12 (action required)
#### AWS
* Require `terraform-provider-aws` v2.7+ to support Terraform v0.12 (action required)
#### Azure
* Migrate from Terraform v0.11 to v0.12.x (**action required!**)
* Require `terraform-provider-azurerm` v1.27+ to support Terraform v0.12
* Require `terraform-provider-ct` v0.3.2+ to support Terraform v0.12
* Require `terraform-provider-azurerm` v1.27+ to support Terraform v0.12 (action required)
* Avoid unneeded rotations of Regular priority virtual machine scale sets
* Azure only allows `eviction_policy` to be set for Low priority VMs. Supporting Low priority VMs meant that when Regular VMs were used, each `terraform apply` rolled the workers just to set `eviction_policy` to null.
* Terraform v0.12 nullable variables fix the issue and plan does not produce a diff.
* Terraform v0.12 nullable variables fix the issue so plan does not produce a diff.
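To make the nullable-variable point concrete, here is a minimal, self-contained sketch (hypothetical names, not the Azure module's actual code): in v0.12 a `null` value behaves as if the argument were never set, so Regular-priority scale sets no longer get a perpetual `eviction_policy` diff.

```tf
variable "priority" {
  type    = string
  default = "Regular"
}

locals {
  # Only Low priority VMs may set an eviction policy; null drops the
  # argument entirely instead of forcing a value onto Regular VMs.
  eviction_policy = var.priority == "Low" ? "Delete" : null
}

output "eviction_policy" {
  # coalesce() just makes the null visible when running this sketch
  value = coalesce(local.eviction_policy, "(unset)")
}
```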
#### Bare-Metal
* Require `terraform-provider-matchbox` v0.3.0+ to support Terraform v0.12 (action required)
#### DigitalOcean
* Migrate from Terraform v0.11 to v0.12.x (**action required!**)
* Require `terraform-provider-digitalocean` v1.3+ to support Terraform v0.12
* Require `terraform-provider-ct` v0.3.2+ to support Terraform v0.12
* Require `terraform-provider-digitalocean` v1.3+ to support Terraform v0.12 (action required)
#### Google Cloud
* Migrate from Terraform v0.11 to v0.12.x (**action required!**)
* Require `terraform-provider-google` v2.5+ to support Terraform v0.12
* Require `terraform-provider-ct` v0.3.2+ to support Terraform v0.12
* Require `terraform-provider-google` v2.5+ to support Terraform v0.12 (action required)
## v1.14.3

View File

@@ -1,17 +1,18 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=082921d67905417755609eebda7d39a7e26f7fdb"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=0103bc06bb3f597455a765bf5d916f9b241cbea0"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["${var.controller_domains}"]
asset_dir = "${var.asset_dir}"
networking = "${var.networking}"
network_mtu = "${var.network_mtu}"
network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
enable_reporting = "${var.enable_reporting}"
enable_aggregation = "${var.enable_aggregation}"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
etcd_servers = var.controller_domains
asset_dir = var.asset_dir
networking = var.networking
network_mtu = var.network_mtu
network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
}

View File

@@ -160,7 +160,7 @@ storage:
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$$RKT_OPTS \
$${RKT_OPTS} \
quay.io/coreos/bootkube:v0.14.0 \
--net=host \
--dns=host \

View File

@@ -1,33 +1,35 @@
resource "matchbox_group" "install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
count = length(var.controller_names) + length(var.worker_names)
name = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
name = format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))
profile = "${local.flavor == "flatcar" ? var.cached_install == "true" ? element(matchbox_profile.cached-flatcar-linux-install.*.name, count.index) : element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
# pick one of 4 Matchbox profiles (Container Linux or Flatcar, cached or non-cached)
profile = local.flavor == "flatcar" ? var.cached_install == "true" ? element(matchbox_profile.cached-flatcar-linux-install.*.name, count.index) : element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)
selector = {
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
mac = element(concat(var.controller_macs, var.worker_macs), count.index)
}
}
resource "matchbox_group" "controller" {
count = "${length(var.controller_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.controller_names, count.index))}"
profile = "${element(matchbox_profile.controllers.*.name, count.index)}"
count = length(var.controller_names)
name = format("%s-%s", var.cluster_name, element(var.controller_names, count.index))
profile = element(matchbox_profile.controllers.*.name, count.index)
selector = {
mac = "${element(var.controller_macs, count.index)}"
mac = element(var.controller_macs, count.index)
os = "installed"
}
}
resource "matchbox_group" "worker" {
count = "${length(var.worker_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.worker_names, count.index))}"
profile = "${element(matchbox_profile.workers.*.name, count.index)}"
count = length(var.worker_names)
name = format("%s-%s", var.cluster_name, element(var.worker_names, count.index))
profile = element(matchbox_profile.workers.*.name, count.index)
selector = {
mac = "${element(var.worker_macs, count.index)}"
mac = element(var.worker_macs, count.index)
os = "installed"
}
}
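As an aside (a hypothetical restatement, not part of this commit), the four-way conditional in the `install` group above can also be read as a map lookup keyed by OS flavor and cache setting:

```tf
locals {
  # Same selection as the nested ternary: Container Linux vs Flatcar,
  # cached vs non-cached install profiles.
  install_profiles = {
    "coreos-false"  = matchbox_profile.container-linux-install.*.name
    "coreos-true"   = matchbox_profile.cached-container-linux-install.*.name
    "flatcar-false" = matchbox_profile.flatcar-install.*.name
    "flatcar-true"  = matchbox_profile.cached-flatcar-linux-install.*.name
  }
}

# profile = element(local.install_profiles["${local.flavor}-${var.cached_install}"], count.index)
```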

View File

@@ -1,3 +1,4 @@
output "kubeconfig-admin" {
value = "${module.bootkube.kubeconfig-admin}"
value = module.bootkube.kubeconfig-admin
}

View File

@@ -1,15 +1,15 @@
locals {
# coreos-stable -> coreos flavor, stable channel
# flatcar-stable -> flatcar flavor, stable channel
flavor = "${element(split("-", var.os_channel), 0)}"
flavor = element(split("-", var.os_channel), 0)
channel = "${element(split("-", var.os_channel), 1)}"
channel = element(split("-", var.os_channel), 1)
}
// Container Linux Install profile (from release.core-os.net)
resource "matchbox_profile" "container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
kernel = "${var.download_protocol}://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe.vmlinuz"
@@ -17,32 +17,31 @@ resource "matchbox_profile" "container-linux-install" {
"${var.download_protocol}://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
args = flatten([
"initrd=coreos_production_pxe_image.cpio.gz",
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]
var.kernel_args,
])
container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
container_linux_config = element(data.template_file.container-linux-install-configs.*.rendered, count.index)
}
data "template_file" "container-linux-install-configs" {
count = "${length(var.controller_names) + length(var.worker_names)}"
count = length(var.controller_names) + length(var.worker_names)
template = "${file("${path.module}/cl/install.yaml.tmpl")}"
template = file("${path.module}/cl/install.yaml.tmpl")
vars = {
os_flavor = "${local.flavor}"
os_channel = "${local.channel}"
os_version = "${var.os_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
os_flavor = local.flavor
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
install_disk = var.install_disk
container_linux_oem = var.container_linux_oem
ssh_authorized_key = var.ssh_authorized_key
# only cached-container-linux profile adds -b baseurl
baseurl_flag = ""
}
@@ -51,8 +50,8 @@ data "template_file" "container-linux-install-configs" {
// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/coreos.
resource "matchbox_profile" "cached-container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
kernel = "/assets/coreos/${var.os_version}/coreos_production_pxe.vmlinuz"
@@ -60,32 +59,31 @@ resource "matchbox_profile" "cached-container-linux-install" {
"/assets/coreos/${var.os_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
args = flatten([
"initrd=coreos_production_pxe_image.cpio.gz",
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]
var.kernel_args,
])
container_linux_config = "${element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)}"
container_linux_config = element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)
}
data "template_file" "cached-container-linux-install-configs" {
count = "${length(var.controller_names) + length(var.worker_names)}"
count = length(var.controller_names) + length(var.worker_names)
template = "${file("${path.module}/cl/install.yaml.tmpl")}"
template = file("${path.module}/cl/install.yaml.tmpl")
vars = {
os_flavor = "${local.flavor}"
os_channel = "${local.channel}"
os_version = "${var.os_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
os_flavor = local.flavor
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
install_disk = var.install_disk
container_linux_oem = var.container_linux_oem
ssh_authorized_key = var.ssh_authorized_key
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/${local.flavor}"
}
@@ -93,8 +91,8 @@ data "template_file" "cached-container-linux-install-configs" {
// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
@@ -102,23 +100,23 @@ resource "matchbox_profile" "flatcar-install" {
"${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = [
args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]
var.kernel_args,
])
container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
container_linux_config = element(data.template_file.container-linux-install-configs.*.rendered, count.index)
}
// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-cached-flatcar-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
@@ -126,90 +124,91 @@ resource "matchbox_profile" "cached-flatcar-linux-install" {
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = [
args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]
var.kernel_args,
])
container_linux_config = "${element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)}"
container_linux_config = element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)
}
// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
raw_ignition = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
count = length(var.controller_names)
name = format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))
raw_ignition = element(data.ct_config.controller-ignitions.*.rendered, count.index)
}
data "ct_config" "controller-ignitions" {
count = "${length(var.controller_names)}"
content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
count = length(var.controller_names)
content = element(data.template_file.controller-configs.*.rendered, count.index)
pretty_print = false
# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.controller_names, count.index)]}"]
snippets = local.clc_map[element(var.controller_names, count.index)]
}
data "template_file" "controller-configs" {
count = "${length(var.controller_names)}"
count = length(var.controller_names)
template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
template = file("${path.module}/cl/controller.yaml.tmpl")
vars = {
domain_name = "${element(var.controller_domains, count.index)}"
etcd_name = "${element(var.controller_names, count.index)}"
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))}"
cluster_dns_service_ip = "${module.bootkube.cluster_dns_service_ip}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
ssh_authorized_key = "${var.ssh_authorized_key}"
domain_name = element(var.controller_domains, count.index)
etcd_name = element(var.controller_names, count.index)
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
}
// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
raw_ignition = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
count = length(var.worker_names)
name = format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))
raw_ignition = element(data.ct_config.worker-ignitions.*.rendered, count.index)
}
data "ct_config" "worker-ignitions" {
count = "${length(var.worker_names)}"
content = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
count = length(var.worker_names)
content = element(data.template_file.worker-configs.*.rendered, count.index)
pretty_print = false
# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.worker_names, count.index)]}"]
snippets = local.clc_map[element(var.worker_names, count.index)]
}
data "template_file" "worker-configs" {
count = "${length(var.worker_names)}"
count = length(var.worker_names)
template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
template = file("${path.module}/cl/worker.yaml.tmpl")
vars = {
domain_name = "${element(var.worker_domains, count.index)}"
cluster_dns_service_ip = "${module.bootkube.cluster_dns_service_ip}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
ssh_authorized_key = "${var.ssh_authorized_key}"
domain_name = element(var.worker_domains, count.index)
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
}
locals {
# Hack to workaround https://github.com/hashicorp/terraform/issues/17251
# Still an issue in Terraform v0.12 https://github.com/hashicorp/terraform/issues/20572
# Default Container Linux config snippets map every node name to list("\n") so
# all lookups succeed
clc_defaults = "${zipmap(concat(var.controller_names, var.worker_names), chunklist(data.template_file.clc-default-snippets.*.rendered, 1))}"
clc_defaults = zipmap(
concat(var.controller_names, var.worker_names),
chunklist(data.template_file.clc-default-snippets.*.rendered, 1),
)
# Union of the default and user specific snippets, later overrides prior.
clc_map = "${merge(local.clc_defaults, var.clc_snippets)}"
clc_map = merge(local.clc_defaults, var.clc_snippets)
}
// Horrible hack to generate a Terraform list of node count length
data "template_file" "clc-default-snippets" {
count = "${length(var.controller_names) + length(var.worker_names)}"
count = length(var.controller_names) + length(var.worker_names)
template = "\n"
}
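For intuition, here is a standalone sketch (hypothetical node names, not module code) of what the zipmap/chunklist/merge dance above produces:

```tf
locals {
  names    = ["node1", "node2", "node3"]                 # hypothetical controllers + workers
  snippets = { "node2" = ["custom snippet contents"] }   # hypothetical user clc_snippets

  # Every node maps to ["\n"], so direct lookups always succeed:
  # { node1 = ["\n"], node2 = ["\n"], node3 = ["\n"] }
  defaults = zipmap(local.names, chunklist([for n in local.names : "\n"], 1))

  # User snippets override the placeholders:
  # { node1 = ["\n"], node2 = ["custom snippet contents"], node3 = ["\n"] }
  merged = merge(local.defaults, local.snippets)
}

output "clc_map_example" {
  value = local.merged
}
```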

View File

@@ -1,21 +0,0 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.11.0"
}
provider "local" {
version = "~> 1.0"
}
provider "null" {
version = "~> 1.0"
}
provider "template" {
version = "~> 1.0"
}
provider "tls" {
version = "~> 1.0"
}

View File

@@ -1,59 +1,59 @@
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = "${length(var.controller_names)}"
count = length(var.controller_names)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
]
connection {
type = "ssh"
host = "${element(var.controller_domains, count.index)}"
host = element(var.controller_domains, count.index)
user = "core"
timeout = "60m"
}
provisioner "file" {
content = "${module.bootkube.kubeconfig-kubelet}"
content = module.bootkube.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}
provisioner "file" {
content = "${module.bootkube.etcd_ca_cert}"
content = module.bootkube.etcd_ca_cert
destination = "$HOME/etcd-client-ca.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_client_cert}"
content = module.bootkube.etcd_client_cert
destination = "$HOME/etcd-client.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_client_key}"
content = module.bootkube.etcd_client_key
destination = "$HOME/etcd-client.key"
}
provisioner "file" {
content = "${module.bootkube.etcd_server_cert}"
content = module.bootkube.etcd_server_cert
destination = "$HOME/etcd-server.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_server_key}"
content = module.bootkube.etcd_server_key
destination = "$HOME/etcd-server.key"
}
provisioner "file" {
content = "${module.bootkube.etcd_peer_cert}"
content = module.bootkube.etcd_peer_cert
destination = "$HOME/etcd-peer.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_peer_key}"
content = module.bootkube.etcd_peer_key
destination = "$HOME/etcd-peer.key"
}
@@ -76,25 +76,25 @@ resource "null_resource" "copy-controller-secrets" {
# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = "${length(var.worker_names)}"
count = length(var.worker_names)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
]
connection {
type = "ssh"
host = "${element(var.worker_domains, count.index)}"
host = element(var.worker_domains, count.index)
user = "core"
timeout = "60m"
}
provisioner "file" {
content = "${module.bootkube.kubeconfig-kubelet}"
content = module.bootkube.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}
@@ -112,19 +112,19 @@ resource "null_resource" "bootkube-start" {
# Terraform only does one task at a time, so it would try to bootstrap
# while no Kubelets are running.
depends_on = [
"null_resource.copy-controller-secrets",
"null_resource.copy-worker-secrets",
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]
connection {
type = "ssh"
host = "${element(var.controller_domains, 0)}"
host = element(var.controller_domains, 0)
user = "core"
timeout = "15m"
}
provisioner "file" {
source = "${var.asset_dir}"
source = var.asset_dir
destination = "$HOME/assets"
}
@@ -135,3 +135,4 @@ resource "null_resource" "bootkube-start" {
]
}
}

View File

@@ -1,22 +1,22 @@
variable "cluster_name" {
type = "string"
type = string
description = "Unique cluster name"
}
# bare-metal
variable "matchbox_http_endpoint" {
type = "string"
type = string
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "os_channel" {
type = "string"
type = string
description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
variable "os_version" {
type = "string"
type = string
description = "Version for a Container Linux derivative to PXE and install (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
@@ -24,37 +24,37 @@ variable "os_version" {
# Terraform's crude "type system" does not properly support lists of maps so we do this.
variable "controller_names" {
type = "list"
type = list(string)
description = "Ordered list of controller names (e.g. [node1])"
}
variable "controller_macs" {
type = "list"
type = list(string)
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}
variable "controller_domains" {
type = "list"
type = list(string)
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}
variable "worker_names" {
type = "list"
type = list(string)
description = "Ordered list of worker names (e.g. [node2, node3])"
}
variable "worker_macs" {
type = "list"
type = list(string)
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}
variable "worker_domains" {
type = "list"
type = list(string)
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
}
variable "clc_snippets" {
type = "map"
type = map(list(string))
description = "Map from machine names to lists of Container Linux Config snippets"
default = {}
}
@@ -63,40 +63,40 @@ variable "clc_snippets" {
variable "k8s_domain_name" {
description = "Controller DNS name which resolves to a controller instance. Workers and kubeconfig's will communicate with this endpoint (e.g. cluster.example.com)"
type = "string"
type = string
}
variable "ssh_authorized_key" {
type = "string"
type = string
description = "SSH public key for user 'core'"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
type = string
}
variable "networking" {
description = "Choice of networking provider (flannel or calico)"
type = "string"
type = string
default = "calico"
}
variable "network_mtu" {
description = "CNI interface MTU (applies to calico only)"
type = "string"
type = string
default = "1480"
}
variable "network_ip_autodetection_method" {
description = "Method to autodetect the host IPv4 address (applies to calico only)"
type = "string"
type = string
default = "first-found"
}
variable "pod_cidr" {
description = "CIDR IPv4 range to assign Kubernetes pods"
type = "string"
type = string
default = "10.2.0.0/16"
}
@@ -106,7 +106,8 @@ CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
type = "string"
type = string
default = "10.3.0.0/16"
}
@@ -114,48 +115,49 @@ EOD
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = "string"
type = string
default = "cluster.local"
}
variable "download_protocol" {
type = "string"
type = string
default = "https"
description = "Protocol iPXE should use to download the kernel and initrd. Defaults to https, which requires iPXE compiled with crypto support. Unused if cached_install is true."
}
variable "cached_install" {
type = "string"
type = string
default = "false"
description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
}
variable "install_disk" {
type = "string"
type = string
default = "/dev/sda"
description = "Disk device to which the install profiles should install Container Linux (e.g. /dev/sda)"
}
variable "container_linux_oem" {
type = "string"
type = string
default = ""
description = "DEPRECATED: Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}
variable "kernel_args" {
description = "Additional kernel arguments to provide at PXE boot."
type = "list"
type = list(string)
default = []
}
variable "enable_reporting" {
type = "string"
type = string
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = "false"
}
variable "enable_aggregation" {
description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
type = "string"
type = string
default = "false"
}

View File

@@ -0,0 +1,12 @@
# Terraform version and plugin versions
terraform {
required_version = "~> 0.12.0"
required_providers {
matchbox = "~> 0.3.0"
ct = "~> 0.3.2"
template = "~> 2.1"
null = "~> 2.1"
}
}

View File

@@ -12,7 +12,7 @@ Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service
* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment (with HTTPS support)
* Matchbox v0.6+ deployment with API enabled
* Matchbox credentials `client.crt`, `client.key`, `ca.crt`
* Terraform v0.11.x, [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox), and [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) installed locally
* Terraform v0.12.x, [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox), and [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) installed locally
## Machines
@@ -107,11 +107,11 @@ Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your system.
```sh
$ terraform version
Terraform v0.11.14
Terraform v0.12.0
```
Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@@ -152,26 +152,6 @@ provider "matchbox" {
provider "ct" {
version = "0.3.2"
}
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
## Cluster
@@ -182,13 +162,6 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.14.3"
providers = {
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"