Compare commits


8 Commits

SHA1 Message Date
0595915a19 Cleanup CHANGES notes 2019-10-15 23:25:45 -07:00
e6bc5143aa Default to Calico as the CNI provider on Azure/DigitalOcean
* Change `networking` default from flannel to calico on
Azure and DigitalOcean
* AWS, bare-metal, and Google Cloud continue to default
to Calico (as they have since v1.7.5)
* Typhoon now defaults to using Calico and supporting
NetworkPolicy on all platforms
2019-10-15 23:15:40 -07:00
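
For users who relied on the old default, `networking` can still be set explicitly per cluster. A minimal sketch against the Azure module (the cluster name is illustrative and the other required arguments are elided):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.16.2"

  # opt out of the new Calico default and keep flannel
  networking = "flannel"

  # other required cluster arguments elided
}
```
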
e4ac1027c8 Update Grafana from v6.4.1 to v6.4.2
* https://github.com/grafana/grafana/releases/tag/v6.4.2
2019-10-15 22:58:43 -07:00
24fc440d83 Update Kubernetes from v1.16.1 to v1.16.2
* Update Calico from v3.9.1 to v3.9.2
2019-10-15 22:42:52 -07:00
a6702573a2 Update etcd from v3.4.1 to v3.4.2
* https://github.com/etcd-io/etcd/releases/tag/v3.4.2
2019-10-15 00:06:15 -07:00
69188af565 Rename CLUO label from "app" to "name"
* Match the labeling pattern in other addons
2019-10-15 00:05:02 -07:00
d874bdd17d Update bootstrap module control plane manifests and type constraints
* Remove unneeded control plane flags that correspond to defaults
* Adopt Terraform v0.12 type constraints in bootstrap module
2019-10-06 21:09:30 -07:00
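
For context, Terraform v0.12 replaced v0.11's quoted type hints with first-class type constraints. A before/after sketch showing the same variable in both styles (the name is illustrative, not taken from the bootstrap module; the two forms are alternatives, not one file):

```tf
# Terraform v0.11 style: the type is a quoted hint
variable "api_servers" {
  type = "list"
}

# Terraform v0.12 style: a real type constraint
variable "api_servers" {
  type = list(string)
}
```
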
5b9dab6659 Introduce list of detail objects for bare-metal machines
* Define bare-metal `controllers` and `workers` as a complex type
`list(object({ name=string, mac=string, domain=string }))` to allow
clusters with many machines to be defined more cleanly
* Remove `controller_names` list variable
* Remove `controller_macs` list variable
* Remove `controller_domains` list variable
* Remove `worker_names` list variable
* Remove `worker_macs` list variable
* Remove `worker_domains` list variable
2019-10-06 20:22:45 -07:00
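
Concretely, the new variables take machine detail objects; the values below mirror the bare-metal tutorial diff later on this page:

```tf
controllers = [{
  name   = "node1"
  mac    = "52:54:00:a1:9c:ae"
  domain = "node1.example.com"
}]

workers = [
  { name = "node2", mac = "52:54:00:b2:2f:86", domain = "node2.example.com" },
  { name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com" },
]
```
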
54 changed files with 309 additions and 295 deletions

View File

@ -2,10 +2,37 @@
Notable changes between versions.
## Latest
* Kubernetes [v1.16.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1162)
* Update etcd from v3.4.1 to v3.4.2 ([#570](https://github.com/poseidon/typhoon/pull/570))
* Update Calico from v3.9.1 to [v3.9.2](https://docs.projectcalico.org/v3.9/release-notes/)
* Default to using Calico and supporting NetworkPolicy on all platforms
#### Azure
* Change default networking provider from "flannel" to "calico" ([#573](https://github.com/poseidon/typhoon/pull/573))
#### Bare-Metal
* Add `controllers` and `workers` as typed lists of machine detail objects ([#566](https://github.com/poseidon/typhoon/pull/566))
* Define clusters' machines cleanly and with Terraform v0.12 type constraints (**action required**, see PR example)
* Remove `controller_names`, `controller_macs`, and `controller_domains` variables
* Remove `worker_names`, `worker_macs`, and `worker_domains` variables
#### DigitalOcean
* Change default networking provider from "flannel" to "calico" ([#573](https://github.com/poseidon/typhoon/pull/573))
#### Addons
* Update Grafana from v6.4.1 to [v6.4.2](https://github.com/grafana/grafana/releases/tag/v6.4.2)
* Change CLUO label from "app" to "name"
## v1.16.1
* Kubernetes [v1.16.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161)
* Update etcd from v3.3.15 to [v3.4.1](https://github.com/etcd-io/etcd/releases/tag/v3.4.1)
* Update etcd from v3.4.0 to [v3.4.1](https://github.com/etcd-io/etcd/releases/tag/v3.4.1)
* Update Calico from v3.8.2 to [v3.9.1](https://docs.projectcalico.org/v3.9/release-notes/)
* Add Terraform v0.12 variables types ([#553](https://github.com/poseidon/typhoon/pull/553), [#557](https://github.com/poseidon/typhoon/pull/557), [#560](https://github.com/poseidon/typhoon/pull/560), [#556](https://github.com/poseidon/typhoon/pull/556), [#562](https://github.com/poseidon/typhoon/pull/562))
* Deprecate `cluster_domain_suffix` variable

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@ -48,7 +48,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.2"
# Google Cloud
cluster_name = "yavin"
@ -82,9 +82,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.1
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.1
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.1
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.2
```
List the pods.

View File

@ -10,11 +10,11 @@ spec:
maxUnavailable: 1
selector:
matchLabels:
app: container-linux-update-agent
name: container-linux-update-agent
template:
metadata:
labels:
app: container-linux-update-agent
name: container-linux-update-agent
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:

View File

@ -7,11 +7,11 @@ spec:
replicas: 1
selector:
matchLabels:
app: container-linux-update-operator
name: container-linux-update-operator
template:
metadata:
labels:
app: container-linux-update-operator
name: container-linux-update-operator
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:

View File

@ -23,7 +23,7 @@ spec:
spec:
containers:
- name: grafana
image: docker.io/grafana/grafana:6.4.1
image: docker.io/grafana/grafana:6.4.2
env:
- name: GF_PATHS_CONFIG
value: "/etc/grafana/custom.ini"

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.4.1"
Environment="ETCD_IMAGE_TAG=v3.4.2"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -113,7 +113,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/apply
@ -134,7 +134,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544

View File

@ -98,7 +98,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -116,7 +116,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -28,7 +28,7 @@ systemd:
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.1
quay.io/coreos/etcd:v3.4.2
ExecStop=/usr/bin/podman stop etcd
[Install]
WantedBy=multi-user.target
@ -80,7 +80,7 @@ systemd:
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
k8s.gcr.io/hyperkube:v1.16.1 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.2 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -121,7 +121,7 @@ systemd:
--network host \
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
k8s.gcr.io/hyperkube:v1.16.1 \
k8s.gcr.io/hyperkube:v1.16.2 \
/apply
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap

View File

@ -50,7 +50,7 @@ systemd:
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
k8s.gcr.io/hyperkube:v1.16.1 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.2 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.4.1"
Environment="ETCD_IMAGE_TAG=v3.4.2"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -111,7 +111,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/apply
@ -132,7 +132,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544

View File

@ -91,7 +91,7 @@ variable "asset_dir" {
variable "networking" {
type = string
description = "Choice of networking provider (flannel or calico)"
default = "flannel"
default = "calico"
}
variable "host_cidr" {

View File

@ -96,7 +96,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -114,7 +114,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,10 +1,10 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
etcd_servers = var.controller_domains
etcd_servers = var.controllers.*.domain
asset_dir = var.asset_dir
networking = var.networking
network_mtu = var.network_mtu

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.4.1"
Environment="ETCD_IMAGE_TAG=v3.4.2"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@ -126,7 +126,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/apply
@ -141,7 +141,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -91,7 +91,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -1,34 +1,34 @@
resource "matchbox_group" "install" {
count = length(var.controller_names) + length(var.worker_names)
count = length(var.controllers) + length(var.workers)
name = format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))
name = format("install-%s", concat(var.controllers.*.name, var.workers.*.name)[count.index])
# pick one of 4 Matchbox profiles (Container Linux or Flatcar, cached or non-cached)
profile = local.flavor == "flatcar" ? var.cached_install ? element(matchbox_profile.cached-flatcar-linux-install.*.name, count.index) : element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)
profile = local.flavor == "flatcar" ? var.cached_install ? matchbox_profile.cached-flatcar-linux-install.*.name[count.index] : matchbox_profile.flatcar-install.*.name[count.index] : var.cached_install ? matchbox_profile.cached-container-linux-install.*.name[count.index] : matchbox_profile.container-linux-install.*.name[count.index]
selector = {
mac = element(concat(var.controller_macs, var.worker_macs), count.index)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
}
}
resource "matchbox_group" "controller" {
count = length(var.controller_names)
name = format("%s-%s", var.cluster_name, element(var.controller_names, count.index))
profile = element(matchbox_profile.controllers.*.name, count.index)
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
profile = matchbox_profile.controllers.*.name[count.index]
selector = {
mac = element(var.controller_macs, count.index)
mac = var.controllers[count.index].mac
os = "installed"
}
}
resource "matchbox_group" "worker" {
count = length(var.worker_names)
name = format("%s-%s", var.cluster_name, element(var.worker_names, count.index))
profile = element(matchbox_profile.workers.*.name, count.index)
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers[count.index].name)
profile = matchbox_profile.workers.*.name[count.index]
selector = {
mac = element(var.worker_macs, count.index)
mac = var.workers[count.index].mac
os = "installed"
}
}

View File

@ -1,15 +1,14 @@
locals {
# coreos-stable -> coreos flavor, stable channel
# flatcar-stable -> flatcar flavor, stable channel
flavor = element(split("-", var.os_channel), 0)
channel = element(split("-", var.os_channel), 1)
flavor = split("-", var.os_channel)[0]
channel = split("-", var.os_channel)[1]
}
// Container Linux Install profile (from release.core-os.net)
resource "matchbox_profile" "container-linux-install" {
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
count = length(var.controllers) + length(var.workers)
name = format("%s-container-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "${var.download_protocol}://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe.vmlinuz"
@ -26,11 +25,11 @@ resource "matchbox_profile" "container-linux-install" {
var.kernel_args,
])
container_linux_config = element(data.template_file.container-linux-install-configs.*.rendered, count.index)
container_linux_config = data.template_file.container-linux-install-configs.*.rendered[count.index]
}
data "template_file" "container-linux-install-configs" {
count = length(var.controller_names) + length(var.worker_names)
count = length(var.controllers) + length(var.workers)
template = file("${path.module}/cl/install.yaml.tmpl")
@ -49,8 +48,8 @@ data "template_file" "container-linux-install-configs" {
// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/coreos.
resource "matchbox_profile" "cached-container-linux-install" {
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
count = length(var.controllers) + length(var.workers)
name = format("%s-cached-container-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "/assets/coreos/${var.os_version}/coreos_production_pxe.vmlinuz"
@ -67,11 +66,11 @@ resource "matchbox_profile" "cached-container-linux-install" {
var.kernel_args,
])
container_linux_config = element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)
container_linux_config = data.template_file.cached-container-linux-install-configs.*.rendered[count.index]
}
data "template_file" "cached-container-linux-install-configs" {
count = length(var.controller_names) + length(var.worker_names)
count = length(var.controllers) + length(var.workers)
template = file("${path.module}/cl/install.yaml.tmpl")
@ -89,8 +88,8 @@ data "template_file" "cached-container-linux-install-configs" {
// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
count = length(var.controllers) + length(var.workers)
name = format("%s-flatcar-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
@ -107,14 +106,14 @@ resource "matchbox_profile" "flatcar-install" {
var.kernel_args,
])
container_linux_config = element(data.template_file.container-linux-install-configs.*.rendered, count.index)
container_linux_config = data.template_file.container-linux-install-configs.*.rendered[count.index]
}
// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-linux-install" {
count = length(var.controller_names) + length(var.worker_names)
name = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))
count = length(var.controllers) + length(var.workers)
name = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])
kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
@ -131,32 +130,32 @@ resource "matchbox_profile" "cached-flatcar-linux-install" {
var.kernel_args,
])
container_linux_config = element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)
container_linux_config = data.template_file.cached-container-linux-install-configs.*.rendered[count.index]
}
// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = length(var.controller_names)
name = format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))
raw_ignition = element(data.ct_config.controller-ignitions.*.rendered, count.index)
count = length(var.controllers)
name = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
}
data "ct_config" "controller-ignitions" {
count = length(var.controller_names)
content = element(data.template_file.controller-configs.*.rendered, count.index)
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = local.clc_map[element(var.controller_names, count.index)]
snippets = local.clc_map[var.controllers.*.name[count.index]]
}
data "template_file" "controller-configs" {
count = length(var.controller_names)
count = length(var.controllers)
template = file("${path.module}/cl/controller.yaml.tmpl")
vars = {
domain_name = element(var.controller_domains, count.index)
etcd_name = element(var.controller_names, count.index)
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cgroup_driver = var.os_channel == "flatcar-edge" ? "systemd" : "cgroupfs"
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
@ -166,25 +165,25 @@ data "template_file" "controller-configs" {
// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = length(var.worker_names)
name = format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))
raw_ignition = element(data.ct_config.worker-ignitions.*.rendered, count.index)
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
}
data "ct_config" "worker-ignitions" {
count = length(var.worker_names)
content = element(data.template_file.worker-configs.*.rendered, count.index)
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
pretty_print = false
snippets = local.clc_map[element(var.worker_names, count.index)]
snippets = local.clc_map[var.workers.*.name[count.index]]
}
data "template_file" "worker-configs" {
count = length(var.worker_names)
count = length(var.workers)
template = file("${path.module}/cl/worker.yaml.tmpl")
vars = {
domain_name = element(var.worker_domains, count.index)
domain_name = var.workers.*.domain[count.index]
cgroup_driver = var.os_channel == "flatcar-edge" ? "systemd" : "cgroupfs"
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
@ -198,7 +197,7 @@ locals {
# Default Container Linux config snippets map every node name to list("\n") so
# all lookups succeed
clc_defaults = zipmap(
concat(var.controller_names, var.worker_names),
concat(var.controllers.*.name, var.workers.*.name),
chunklist(data.template_file.clc-default-snippets.*.rendered, 1),
)
@ -208,7 +207,7 @@ locals {
// Horrible hack to generate a Terraform list of node count length
data "template_file" "clc-default-snippets" {
count = length(var.controller_names) + length(var.worker_names)
count = length(var.controllers) + length(var.workers)
template = "\n"
}

View File

@ -1,6 +1,6 @@
# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = length(var.controller_names)
count = length(var.controllers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
@ -13,7 +13,7 @@ resource "null_resource" "copy-controller-secrets" {
connection {
type = "ssh"
host = var.controller_domains[count.index]
host = var.controllers.*.domain[count.index]
user = "core"
timeout = "60m"
}
@ -88,7 +88,7 @@ resource "null_resource" "copy-controller-secrets" {
# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.worker_names)
count = length(var.workers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
@ -100,7 +100,7 @@ resource "null_resource" "copy-worker-secrets" {
connection {
type = "ssh"
host = var.worker_domains[count.index]
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}
@ -129,7 +129,7 @@ resource "null_resource" "bootstrap" {
connection {
type = "ssh"
host = var.controller_domains[0]
host = var.controllers[0].domain
user = "core"
timeout = "15m"
}

View File

@ -21,36 +21,32 @@ variable "os_version" {
}
# machines
# Terraform's crude "type system" does not properly support lists of maps so we do this.
variable "controller_names" {
type = list(string)
description = "Ordered list of controller names (e.g. [node1])"
variable "controllers" {
type = list(object({
name = string
mac = string
domain = string
}))
description = <<EOD
List of controller machine details (unique name, identifying MAC address, FQDN)
[{ name = "node1", mac = "52:54:00:a1:9c:ae", domain = "node1.example.com"}]
EOD
}
variable "controller_macs" {
type = list(string)
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}
variable "controller_domains" {
type = list(string)
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}
variable "worker_names" {
type = list(string)
description = "Ordered list of worker names (e.g. [node2, node3])"
}
variable "worker_macs" {
type = list(string)
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}
variable "worker_domains" {
type = list(string)
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
variable "workers" {
type = list(object({
name = string
mac = string
domain = string
}))
description = <<EOD
List of worker machine details (unique name, identifying MAC address, FQDN)
[
{ name = "node2", mac = "52:54:00:b2:2f:86", domain = "node2.example.com"},
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
}
variable "clc_snippets" {

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,10 +1,10 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
etcd_servers = var.controller_domains
etcd_servers = var.controllers.*.domain
asset_dir = var.asset_dir
networking = var.networking
network_mtu = var.network_mtu

View File

@ -28,7 +28,7 @@ systemd:
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.1
quay.io/coreos/etcd:v3.4.2
ExecStop=/usr/bin/podman stop etcd
[Install]
WantedBy=multi-user.target
@ -81,7 +81,7 @@ systemd:
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.16.1 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.2 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -132,7 +132,7 @@ systemd:
--network host \
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
k8s.gcr.io/hyperkube:v1.16.1 \
k8s.gcr.io/hyperkube:v1.16.2 \
/apply
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap

View File

@ -51,7 +51,7 @@ systemd:
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.16.1 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.2 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \

View File

@ -1,22 +1,22 @@
# Match each controller or worker to a profile
resource "matchbox_group" "controller" {
count = length(var.controller_names)
name = format("%s-%s", var.cluster_name, var.controller_names[count.index])
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]
selector = {
mac = var.controller_macs[count.index]
mac = var.controllers.*.mac[count.index]
}
}
resource "matchbox_group" "worker" {
count = length(var.worker_names)
name = format("%s-%s", var.cluster_name, var.worker_names[count.index])
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers.*.name[count.index])
profile = matchbox_profile.workers.*.name[count.index]
selector = {
mac = var.worker_macs[count.index]
mac = var.workers.*.mac[count.index]
}
}

View File

@ -29,8 +29,8 @@ locals {
// Fedora CoreOS controller profile
resource "matchbox_profile" "controllers" {
count = length(var.controller_names)
name = format("%s-controller-%s", var.cluster_name, var.controller_names[count.index])
count = length(var.controllers)
name = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
kernel = local.kernel
initrd = [
@ -42,20 +42,20 @@ resource "matchbox_profile" "controllers" {
}
data "ct_config" "controller-ignitions" {
count = length(var.controller_names)
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
}
data "template_file" "controller-configs" {
count = length(var.controller_names)
count = length(var.controllers)
template = file("${path.module}/fcc/controller.yaml")
vars = {
domain_name = var.controller_domains[count.index]
etcd_name = var.controller_names[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
@ -64,8 +64,8 @@ data "template_file" "controller-configs" {
// Fedora CoreOS worker profile
resource "matchbox_profile" "workers" {
count = length(var.worker_names)
name = format("%s-worker-%s", var.cluster_name, var.worker_names[count.index])
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
kernel = local.kernel
initrd = [
@ -77,18 +77,18 @@ resource "matchbox_profile" "workers" {
}
data "ct_config" "worker-ignitions" {
count = length(var.worker_names)
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
}
data "template_file" "worker-configs" {
count = length(var.worker_names)
count = length(var.workers)
template = file("${path.module}/fcc/worker.yaml")
vars = {
domain_name = var.worker_domains[count.index]
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key

View File

@ -1,6 +1,6 @@
# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = length(var.controller_names)
count = length(var.controllers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
@ -12,7 +12,7 @@ resource "null_resource" "copy-controller-secrets" {
connection {
type = "ssh"
host = var.controller_domains[count.index]
host = var.controllers.*.domain[count.index]
user = "core"
timeout = "60m"
}
@ -85,7 +85,7 @@ resource "null_resource" "copy-controller-secrets" {
# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.worker_names)
count = length(var.workers)
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
@ -96,7 +96,7 @@ resource "null_resource" "copy-worker-secrets" {
connection {
type = "ssh"
host = var.worker_domains[count.index]
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}
@ -125,7 +125,7 @@ resource "null_resource" "bootstrap" {
connection {
type = "ssh"
host = var.controller_domains[0]
host = var.controllers[0].domain
user = "core"
timeout = "15m"
}

View File

@ -22,36 +22,32 @@ variable "os_version" {
}
# machines
# Terraform's crude "type system" does not properly support lists of maps so we do this.
variable "controller_names" {
type = list(string)
description = "Ordered list of controller names (e.g. [node1])"
variable "controllers" {
type = list(object({
name = string
mac = string
domain = string
}))
description = <<EOD
List of controller machine details (unique name, identifying MAC address, FQDN)
[{ name = "node1", mac = "52:54:00:a1:9c:ae", domain = "node1.example.com"}]
EOD
}
variable "controller_macs" {
type = list(string)
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}
variable "controller_domains" {
type = list(string)
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}
variable "worker_names" {
type = list(string)
description = "Ordered list of worker names (e.g. [node2, node3])"
}
variable "worker_macs" {
type = list(string)
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}
variable "worker_domains" {
type = list(string)
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
variable "workers" {
type = list(object({
name = string
mac = string
domain = string
}))
description = <<EOD
List of worker machine details (unique name, identifying MAC address, FQDN)
[
{ name = "node2", mac = "52:54:00:b2:2f:86", domain = "node2.example.com"},
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
}
variable "snippets" {

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.4.1"
Environment="ETCD_IMAGE_TAG=v3.4.2"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -123,7 +123,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/apply
@ -138,7 +138,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544

View File

@ -99,7 +99,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -117,7 +117,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -74,7 +74,7 @@ variable "asset_dir" {
variable "networking" {
type = string
description = "Choice of networking provider (flannel or calico)"
default = "flannel"
default = "calico"
}
variable "pod_cidr" {

View File

@ -79,7 +79,7 @@ Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.16.2"
# Azure
region = module.azure-ramius.region
@ -145,7 +145,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.16.2"
# Google Cloud
region = "europe-west2"
@ -176,11 +176,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.16.1
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.16.1
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.16.1
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.16.1
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.16.1
yavin-controller-0.c.example-com.internal Ready 6m v1.16.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.16.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.16.2
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.16.2
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.16.2
```
### Variables

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.16.1 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.2 cluster on AWS with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -70,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.16.2"
# AWS
cluster_name = "tempest"
@ -135,9 +135,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.16.1
ip-10-0-26-65 Ready <none> 10m v1.16.1
ip-10-0-41-21 Ready <none> 10m v1.16.1
ip-10-0-3-155 Ready <none> 10m v1.16.2
ip-10-0-26-65 Ready <none> 10m v1.16.2
ip-10-0-41-21 Ready <none> 10m v1.16.2
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.
In this tutorial, we'll create a Kubernetes v1.16.1 cluster on Azure with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.2 cluster on Azure with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -66,7 +66,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.16.2"
# Azure
cluster_name = "ramius"
@ -132,9 +132,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.16.1
ramius-worker-000001 Ready <none> 25m v1.16.1
ramius-worker-000002 Ready <none> 24m v1.16.1
ramius-controller-0 Ready <none> 24m v1.16.2
ramius-worker-000001 Ready <none> 25m v1.16.2
ramius-worker-000002 Ready <none> 24m v1.16.2
```
List the pods.
@ -144,9 +144,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system flannel-bwf24 2/2 Running 0 26m
kube-system flannel-ks5qb 2/2 Running 0 26m
kube-system flannel-tq2wg 2/2 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m
@ -220,7 +220,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| worker_priority | Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Low |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| networking | Choice of networking provider | "flannel" | "flannel" or "calico" |
| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.16.1 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.16.2 cluster on bare-metal with Container Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -160,7 +160,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.2"
# bare-metal
cluster_name = "mercury"
@ -174,20 +174,22 @@ module "bare-metal-mercury" {
asset_dir = "/home/user/.secrets/clusters/mercury"
# machines
controller_names = ["node1"]
controller_macs = ["52:54:00:a1:9c:ae"]
controller_domains = ["node1.example.com"]
worker_names = [
"node2",
"node3",
]
worker_macs = [
"52:54:00:b2:2f:86",
"52:54:00:c3:61:77",
]
worker_domains = [
"node2.example.com",
"node3.example.com",
controllers = [{
name = "node1"
mac = "52:54:00:a1:9c:ae"
domain = "node1.example.com"
}]
workers = [
{
name = "node2",
mac = "52:54:00:b2:2f:86"
domain = "node2.example.com"
},
{
name = "node3",
mac = "52:54:00:c3:61:77"
domain = "node3.example.com"
}
]
# set to http only if you cannot chainload to iPXE firmware with https support
@ -263,9 +265,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
```
# before v1.16.1
# before v1.16.2
$ ssh debug@node1.example.com
# after v1.16.1
# after v1.16.2
$ ssh -p 2222 core@node1.example.com
```
@ -289,9 +291,9 @@ systemd[1]: Started Kubernetes control plane.
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.16.1
node2.example.com Ready <none> 10m v1.16.1
node3.example.com Ready <none> 10m v1.16.1
node1.example.com Ready <none> 10m v1.16.2
node2.example.com Ready <none> 10m v1.16.2
node3.example.com Ready <none> 10m v1.16.2
```
List the pods.
@ -334,12 +336,8 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
| controller_names | Ordered list of controller short names | ["node1"] |
| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] |
| controller_domains | Ordered list of controller FQDNs | ["node1.example.com"] |
| worker_names | Ordered list of worker short names | ["node2", "node3"] |
| worker_macs | Ordered list of worker identifying MAC addresses | ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] |
| worker_domains | Ordered list of worker FQDNs | ["node2.example.com", "node3.example.com"] |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
### Optional

View File

@ -1,6 +1,6 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.16.1 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.2 cluster on DigitalOcean with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -65,7 +65,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.16.2"
# Digital Ocean
cluster_name = "nemo"
@ -130,9 +130,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.16.1
10.132.115.81 Ready <none> 10m v1.16.1
10.132.124.107 Ready <none> 10m v1.16.1
10.132.110.130 Ready <none> 10m v1.16.2
10.132.115.81 Ready <none> 10m v1.16.2
10.132.124.107 Ready <none> 10m v1.16.2
```
List the pods.
@ -141,9 +141,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system flannel-1cq1v 2/2 Running 0 11m
kube-system flannel-hq9t0 2/2 Running 0 11m
kube-system flannel-v0g9w 2/2 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m
@ -219,7 +219,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| image | Container Linux image for instances | "coreos-stable" | coreos-stable, coreos-beta, coreos-alpha |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "flannel" | "flannel" or "calico" |
| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.16.1 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.2 cluster on Google Compute Engine with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -71,7 +71,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.2"
# Google Cloud
cluster_name = "yavin"
@ -137,9 +137,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.1
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.1
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.1
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.2
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.
In this tutorial, we'll create a Kubernetes v1.16.1 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.16.2 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.16.2"
# AWS
cluster_name = "tempest"
@ -138,9 +138,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.16.1
ip-10-0-26-65 Ready <none> 10m v1.16.1
ip-10-0-41-21 Ready <none> 10m v1.16.1
ip-10-0-3-155 Ready <none> 10m v1.16.2
ip-10-0-26-65 Ready <none> 10m v1.16.2
ip-10-0-41-21 Ready <none> 10m v1.16.2
```
List the pods.


@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.
In this tutorial, we'll network boot and provision a Kubernetes v1.16.1 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.16.2 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -163,7 +163,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.16.2"
# bare-metal
cluster_name = "mercury"
@ -178,20 +178,22 @@ module "bare-metal-mercury" {
asset_dir = "/home/user/.secrets/clusters/mercury"
# machines
controller_names = ["node1"]
controller_macs = ["52:54:00:a1:9c:ae"]
controller_domains = ["node1.example.com"]
worker_names = [
"node2",
"node3",
]
worker_macs = [
"52:54:00:b2:2f:86",
"52:54:00:c3:61:77",
]
worker_domains = [
"node2.example.com",
"node3.example.com",
controllers = [{
name = "node1"
mac = "52:54:00:a1:9c:ae"
domain = "node1.example.com"
}]
workers = [
{
name = "node2",
mac = "52:54:00:b2:2f:86"
domain = "node2.example.com"
},
{
name = "node3",
mac = "52:54:00:c3:61:77"
domain = "node3.example.com"
}
]
}
```
@ -283,9 +285,9 @@ systemd[1]: Started Kubernetes control plane.
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.16.1
node2.example.com Ready <none> 10m v1.16.1
node3.example.com Ready <none> 10m v1.16.1
node1.example.com Ready <none> 10m v1.16.2
node2.example.com Ready <none> 10m v1.16.2
node3.example.com Ready <none> 10m v1.16.2
```
List the pods.
@ -325,12 +327,8 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
| controller_names | Ordered list of controller short names | ["node1"] |
| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] |
| controller_domains | Ordered list of controller FQDNs | ["node1.example.com"] |
| worker_names | Ordered list of worker short names | ["node2", "node3"] |
| worker_macs | Ordered list of worker identifying MAC addresses | ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] |
| worker_domains | Ordered list of worker FQDNs | ["node2.example.com", "node3.example.com"] |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
| workers | List of worker machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node2", mac="52:54:00:b2:2f:86", domain="node2.example.com"}, {name="node3", mac="52:54:00:c3:61:77", domain="node3.example.com"}]` |
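
With Terraform v0.12, these machine lists carry explicit type constraints. A sketch of how such variables can be declared (field names mirror the table above; the actual declarations live in the module's variables.tf and may differ in detail):

```tf
variable "controllers" {
  type = list(object({
    name   = string
    mac    = string
    domain = string
  }))
  description = "List of controller machine details (unique name, identifying MAC address, FQDN)"
}

variable "workers" {
  type = list(object({
    name   = string
    mac    = string
    domain = string
  }))
  description = "List of worker machine details (unique name, identifying MAC address, FQDN)"
}
```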
### Optional


@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](cl/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
@ -47,7 +47,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.2"
# Google Cloud
cluster_name = "yavin"
@ -80,9 +80,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.1
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.1
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.1
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.16.2
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.16.2
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.16.2
```
List the pods.


@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.1"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.2"
...
}
```
@ -279,15 +279,15 @@ Typhoon modules have been adapted for Terraform v0.12. Provider plugins requirem
| Typhoon Release | Terraform version |
|-------------------|---------------------|
| v1.16.1 - ? | v0.12.x |
| v1.10.3 - v1.16.1 | v0.11.x |
| v1.16.2 - ? | v0.12.x |
| v1.10.3 - v1.16.2 | v0.11.x |
| v1.9.2 - v1.10.2 | v0.10.4+ or v0.11.x |
| v1.7.3 - v1.9.1 | v0.10.x |
| v1.6.4 - v1.7.2 | v0.9.x |
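
Because Typhoon v1.16.2+ modules require Terraform v0.12.x, a version constraint in the top-level config can fail fast on a mismatched CLI (standard Terraform syntax, shown as an optional sketch):

```tf
terraform {
  # reject Terraform v0.11.x and earlier before any plan is attempted
  required_version = "~> 0.12.0"
}
```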
### New users
New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.16.1+ without issue.
New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.16.2+ without issue.
### Existing users
@ -404,7 +404,7 @@ tree .
└── infraB <- new Terraform v0.12.x configs
```
Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.16.1+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.
Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.16.2+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.
```shell
terraform12 init


@ -12,7 +12,7 @@ Typhoon aims to be minimal and secure. We're running it ourselves after all.
* Workloads run on worker nodes only, unless they tolerate the master taint
* Kubernetes [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and Calico [NetworkPolicy](https://docs.projectcalico.org/latest/reference/calicoctl/resources/networkpolicy) support [^1]
[^1]: Requires `networking = "calico"`. Calico is the default on AWS, bare-metal, and Google Cloud. Azure and Digital Ocean are limited to `networking = "flannel"`.
[^1]: Requires `networking = "calico"`. Calico is the default on all platforms (AWS, Azure, bare-metal, DigitalOcean, and Google Cloud).
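
With Calico the default everywhere, NetworkPolicy works out of the box on every platform. As an illustration, a default-deny ingress policy could be expressed via the Terraform kubernetes provider (an assumption for this sketch; Typhoon itself applies manifests with kubectl):

```tf
resource "kubernetes_network_policy" "default_deny" {
  metadata {
    name      = "default-deny"
    namespace = "default"
  }

  spec {
    # an empty pod selector matches all pods in the namespace
    pod_selector {}
    # declare Ingress policing with no ingress rules: deny all inbound traffic
    policy_types = ["Ingress"]
  }
}
```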
**Hosts**


@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.16.1 (upstream)
* Kubernetes v1.16.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization


@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=586d6e36f67c56fb2283f317a7552638368c5779"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=0fcc067476fa1463d057fd43760df222b7262b27"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]


@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.4.1"
Environment="ETCD_IMAGE_TAG=v3.4.2"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -112,7 +112,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/apply
@ -133,7 +133,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544


@ -97,7 +97,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.16.1
KUBELET_IMAGE_TAG=v1.16.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -115,7 +115,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.1 \
docker://k8s.gcr.io/hyperkube:v1.16.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)