Compare commits


55 Commits

Author SHA1 Message Date
daa5fc4171 Merge remote-tracking branch 'upstream/main' 2024-12-02 11:05:29 +01:00
dghubble-renovate[bot]
17060445f7 Bump mkdocs-material from 9.5.45 to v9.5.46 2024-11-29 08:54:47 -08:00
dghubble-renovate[bot]
10dd385c38 Bump registry.k8s.io/coredns/coredns image from v1.11.4 to v1.12.0 2024-11-29 08:54:38 -08:00
Dalton Hubble
bc59d5153e Update Kubernetes from v1.31.2 to v1.31.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1313
* Update CoreDNS from v1.11.3 to v1.11.4
* Update Cilium from v1.16.3 to v1.16.4
* Plan to drop support for using Calico CNI, recommend everyone use the Cilium default
2024-11-24 08:43:54 -08:00
dghubble-renovate[bot]
cec2a097d4 Bump quay.io/cilium/cilium image from v1.16.3 to v1.16.4 2024-11-24 08:36:50 -08:00
dghubble-renovate[bot]
afbb55b79e Bump quay.io/cilium/operator-generic image from v1.16.3 to v1.16.4 2024-11-24 08:36:46 -08:00
dghubble-renovate[bot]
5cb48f01bd Bump mkdocs-material from 9.5.44 to v9.5.45 2024-11-24 08:36:42 -08:00
Dalton Hubble
dfb307b1a7 Use consistent resource naming between Azure Flatcar/FCOS
* Fix Azure Public IP name in the Flatcar Linux configuration
2024-11-23 21:20:00 -08:00
dghubble-renovate[bot]
a908d30821 Bump registry.k8s.io/coredns/coredns image from v1.11.3 to v1.11.4 2024-11-14 13:31:17 -08:00
Raimo Radczewski
2b99ccaa39 nginx/bare-metal: fix selector 2024-11-11 10:00:35 -08:00
Raimo Radczewski
93c6c2fed3 nginx: Add endpointslices.discovery.k8s.io to all rbac documents 2024-11-11 10:00:35 -08:00
dghubble-renovate[bot]
93c52df929 Bump mkdocs-material from 9.5.42 to v9.5.44 2024-11-11 09:53:16 -08:00
dghubble-renovate[bot]
ef740832c9 Bump docker.io/flannel/flannel image from v0.26.0 to v0.26.1 2024-11-11 09:41:02 -08:00
dghubble-renovate[bot]
9b28867ea8 Bump pymdown-extensions from 10.11.2 to v10.12 2024-10-30 20:02:18 -07:00
Dalton Hubble
61ffc0bc19
Update Kubernetes from v1.31.1 to v1.31.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1312
* Update Cilium from v1.16.1 to v1.16.3
* Update flannel from v0.25.6 to v0.26.0
2024-10-26 08:33:43 -07:00
dghubble-renovate[bot]
e143061bcf Bump mkdocs-material from 9.5.39 to v9.5.42 2024-10-26 08:21:10 -07:00
dghubble-renovate[bot]
c3cb5a3f1b Bump quay.io/cilium/cilium image from v1.16.2 to v1.16.3 2024-10-26 08:20:58 -07:00
dghubble-renovate[bot]
81265483c6 Bump quay.io/cilium/operator-generic image from v1.16.2 to v1.16.3 2024-10-26 08:19:17 -07:00
dghubble-renovate[bot]
a4e0ade8d9 Bump docker.io/flannel/flannel image from v0.25.7 to v0.26.0 2024-10-26 08:18:52 -07:00
dghubble-renovate[bot]
3d4905bb3a Bump pymdown-extensions from 10.9 to v10.11.2 2024-10-08 21:33:42 -07:00
jordanp
5932b651e3 doc: set file_permission 0600 for kubeconfig file
It's only documentation, but the kubeconfig file contains sensitive info, so it's better to secure it a little
2024-10-08 21:33:31 -07:00
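For reference, the documented pattern (shown in the README diff further down this page) writes the admin kubeconfig with owner-only permissions; the module and path names are just the docs' example values:

```tf
# Example from the docs: write the admin kubeconfig with mode 0600 so only
# the owner can read credentials that grant cluster-admin access.
resource "local_file" "kubeconfig-yavin" {
  content         = module.yavin.kubeconfig-admin
  filename        = "/home/user/.kube/configs/yavin-config"
  file_permission = "0600"
}
```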
Dalton Hubble
6a5b808b17
Add region to gcp instance template resource
* Configure the regional worker instance templates with the
region of the cluster. This defaults to the provider's region,
which isn't always what you want and, if left unset, causes an error
* Close #1512
2024-10-08 21:28:29 -07:00
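A minimal sketch of the idea, assuming a `google_compute_region_instance_template` resource and a `var.region` input (names and values here are illustrative, not Typhoon's actual module code): setting `region` explicitly keeps the template from falling back to the provider's region.

```tf
# Hypothetical excerpt: pin the regional instance template to the cluster's
# region instead of relying on the provider's default region.
resource "google_compute_region_instance_template" "worker" {
  name_prefix  = "worker-"
  region       = var.region # the cluster's region, e.g. "us-central1"
  machine_type = "n2-standard-2"

  disk {
    source_image = "projects/fedora-coreos-cloud/global/images/family/fedora-coreos-stable"
    boot         = true
  }

  network_interface {
    network = "default"
  }
}
```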
dghubble-renovate[bot]
e6989514a5 Bump mkdocs-material from 9.5.36 to v9.5.39 2024-10-08 21:07:25 -07:00
dghubble-renovate[bot]
edd9328554 Bump quay.io/cilium/cilium image from v1.16.1 to v1.16.2 2024-10-08 21:07:18 -07:00
dghubble-renovate[bot]
8656a2d75b Bump quay.io/cilium/operator-generic image from v1.16.1 to v1.16.2 2024-10-08 21:07:13 -07:00
dghubble-renovate[bot]
16c26f4384 Bump docker.io/flannel/flannel image from v0.25.6 to v0.25.7 2024-10-08 21:07:05 -07:00
dghubble-renovate[bot]
c87c21c7e2 Bump mkdocs-material from 9.5.35 to v9.5.36 2024-09-21 19:31:03 -07:00
Dalton Hubble
598f707cbd
Update Kubernetes from v1.31.0 to v1.31.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1311
2024-09-20 14:43:39 -07:00
Jordan Pittier
3f844e3c57
google: Add controller_disk_type and worker_disk_type variables (#1513)
* Add controller_disk_type and worker_disk_type variables
* Properly pass disk_type to worker nodes
2024-09-20 14:31:17 -07:00
dghubble-renovate[bot]
b2fad7771f Bump mkdocs from 1.6.0 to v1.6.1 2024-09-20 14:20:43 -07:00
dghubble-renovate[bot]
3ae8794c6c Bump mkdocs-material from 9.5.34 to v9.5.35 2024-09-20 13:06:40 -07:00
dghubble-renovate[bot]
6878fa9fe6 Bump mkdocs-material from 9.5.33 to v9.5.34 2024-09-09 19:55:42 -07:00
dghubble-renovate[bot]
c72e99834c Bump docker.io/flannel/flannel image from v0.25.5 to v0.25.6 2024-08-28 19:45:28 -07:00
Dalton Hubble
7d2d8e16e5
google: Use regional instance templates for workers
* Use regional instance templates for the worker node regional
managed instance groups. Regional instance templates are kept in
the associated region, whereas the older "global" instance templates
were kept in a particular region (regardless of the MIG's region),
so an outage in region X could affect clusters in region Y, which
is undesired
2024-08-27 21:35:02 -07:00
dghubble-renovate[bot]
be9ba51269 Bump mkdocs-material from 9.5.32 to v9.5.33 2024-08-23 21:51:36 -07:00
Dalton Hubble
9a2448f711 Remove upper bound on azurerm provider version
* Allow folks to start upgrading to azurerm provider v4.0.0,
don't set an upper bound on versions going forward
2024-08-23 21:51:29 -07:00
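The corresponding constraint change (shown in the Azure versions.tf diff below) simply drops the `< 4.0` ceiling:

```tf
terraform {
  required_providers {
    # No upper bound, so users can adopt azurerm v4.x providers when ready
    azurerm = ">= 2.8"
  }
}
```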
Dalton Hubble
3412060c3c
Use Cilium kube-proxy replacement when Cilium CNI is used
* When using the Cilium component, disable bootstrapping the
kube-proxy DaemonSet. Instead, configure Cilium to provide its
kube-proxy replacement with BPF
* Update the self-managed Cilium component to use kube-proxy
replacement as well
2024-08-23 12:33:32 -07:00
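A trimmed sketch of the ConfigMap change (the full diff appears further down this page); only the two toggled keys are shown and the rest of the cilium-config data is omitted:

```tf
resource "kubernetes_config_map" "cilium" {
  # ... metadata and other cilium-config keys omitted ...
  data = {
    # Let Cilium's eBPF datapath handle Service load balancing instead of kube-proxy
    kube-proxy-replacement                      = "true"
    kube-proxy-replacement-healthz-bind-address = ":10256"
  }
}
```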
Dalton Hubble
808b8a948f
aws: Switch EC2 instances to use resource-based hostnames
* Use EC2 resource-based hostnames instead of IP-based hostnames. The Amazon
DNS server can resolve A and AAAA queries to IPv4 and IPv6 node addresses
* For example, nodes used to be named like `ip-10-11-12-13.us-east-1.compute.internal`
but going forward use the instance id `i-0123456789abcdef.us-east-1.compute.internal`
* Tag controller node EBS volumes with a name based on the controller node name
2024-08-22 20:02:53 -07:00
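The subnet settings behind this change (excerpted from the AWS network.tf diffs further down) are, roughly:

```tf
resource "aws_subnet" "public" {
  # ... vpc_id, availability_zone, CIDR blocks, etc. omitted ...

  # Hostnames assigned to instances
  # resource-name: <ec2-instance-id>.region.compute.internal
  private_dns_hostname_type_on_launch            = "resource-name"
  enable_resource_name_dns_a_record_on_launch    = true
  enable_resource_name_dns_aaaa_record_on_launch = true
}
```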
Dalton Hubble
effa13c141
Fix flannel-cni container image
* Close #1496
2024-08-22 19:26:19 -07:00
dghubble-renovate[bot]
b8645f3ec2 Bump mkdocs-material from 9.5.31 to v9.5.32 2024-08-22 10:36:50 -07:00
Dalton Hubble
10be34daa2
Update Kubernetes from v1.30.4 to v1.31.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1310
2024-08-17 08:32:35 -07:00
dghubble-renovate[bot]
1cb49e1267 Bump quay.io/cilium/cilium image from v1.16.0 to v1.16.1 2024-08-16 08:31:11 -07:00
dghubble-renovate[bot]
d79f94f4f5 Bump quay.io/cilium/operator-generic image from v1.16.0 to v1.16.1 2024-08-16 08:31:01 -07:00
Dalton Hubble
320d76c934
Update Kubernetes from v1.30.3 to v1.30.4
* Update Cilium from v1.16.0 to v1.16.1
2024-08-16 08:27:07 -07:00
Dalton Hubble
2daa23be50
Update default Cilium and CoreDNS components
* Update the CoreDNS and Cilium versions used by default when
folks aren't managing the components themselves
2024-08-05 08:47:06 -07:00
Dalton Hubble
6e2daded02
Remove some seldom-used variables and set reasonable values
* Set reasonable values and remove some variable clutter
* enable_reporting is only used with Calico and we can just default
to false, I doubt anyone uses Calico and cares much about reporting
metrics to upstream Calico
2024-08-02 20:45:37 -07:00
Dalton Hubble
83f1bd2373
Update ARM64 cluster and hybrid cluster docs
* Typhoon now supports arbitrary combinations of controller, worker,
and worker pool architectures so we can drop the specific details of
full-cluster vs hybrid cluster. Just pick the architecture for each
group of nodes accordingly.
* However, if a custom node taint is set, continue to configure the
cluster's daemonsets accordingly with `daemonset_tolerations`
2024-08-02 20:34:23 -07:00
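A hypothetical module call mixing architectures: `controller_arch`, `worker_arch`, and `daemonset_tolerations` match the variables in the diffs below, while the cluster name, ref, and taint key are illustrative.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

  # ... dns_zone, ssh_authorized_key, and other required inputs omitted ...

  cluster_name    = "tempest"
  controller_arch = "amd64"
  worker_arch     = "arm64"

  # If workers carry a custom taint, let kube-system DaemonSets tolerate it
  daemonset_tolerations = ["arm64-only"]
}
```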
dghubble-renovate[bot]
67e5ecf6f2 Bump mkdocs-material from 9.5.30 to v9.5.31 2024-08-02 16:46:36 -07:00
Dalton Hubble
0120b9f38d
Remove the cluster_domain_suffix variable
* Drop support for `cluster_domain_suffix` customization and
always use `cluster.local`. Many components in the Kubernetes
ecosystem assume this default suffix and it's very rare to be
setting a special value here these days
* Cleanup a few variables that are seldom used
2024-08-02 15:05:25 -07:00
516517fafe Merge remote-tracking branch 'upstream/main' 2023-11-02 11:56:22 +01:00
21f7142464 Merge remote-tracking branch 'upstream/main' 2023-10-20 14:00:37 +02:00
73e7448f53 Merge remote-tracking branch 'upstream/main' 2023-10-11 13:31:16 +02:00
27cecd0f94 fix typo in variable name 2023-08-03 14:26:39 +02:00
634deaf92e Adding install_snippets support.
During the "real" first boot (install boot), we need tu run butane
config to manipulate disks, so we add install_snippets variable to do
so.

This snippets are added to the install.yaml butane configuration
2023-08-03 14:16:24 +02:00
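A hypothetical usage sketch of the new variable (module source and snippet path are placeholders): Butane snippets passed via `install_snippets` are merged into the install-boot configuration, for example to partition or format disks before the first real boot.

```tf
module "mercury" {
  source = "..." # this fork's bare-metal Flatcar Linux module

  # ... cluster inputs omitted ...

  # Butane snippets applied during the install boot (merged into install.yaml)
  install_snippets = [
    file("./snippets/format-data-disk.yaml"),
  ]
}
```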
cd699ee1aa Update docs on flatcar-linux bare-metal kubernetes worker module usage. 2023-08-02 12:07:53 +02:00
122 changed files with 844 additions and 948 deletions

View File

@@ -4,11 +4,57 @@ Notable changes between versions.
 ## Latest
-### Azure
-* Allow controller and worker nodes to use different CPU architectures
-  * Add `controller_arch` and `worker_arch` variables
-  * Remove the `arch` variable
+## v1.31.3
+* Kubernetes [v1.31.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1313)
+* Update CoreDNS from v1.11.3 to v1.11.4
+* Update Cilium from v1.16.3 to [v1.16.4](https://github.com/cilium/cilium/releases/tag/v1.16.4)
+### Deprecations
+* Plan to drop support for using Calico CNI, recommend everyone use the Cilium default
+## v1.31.2
+* Kubernetes [v1.31.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1312)
+* Update Cilium from v1.16.1 to [v1.16.3](https://github.com/cilium/cilium/releases/tag/v1.16.3)
+* Update flannel from v0.25.6 to [v0.26.0](https://github.com/flannel-io/flannel/releases/tag/v0.26.0)
+## v1.31.1
+* Kubernetes [v1.31.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1311)
+* Update flannel from v0.25.5 to [v0.25.6](https://github.com/flannel-io/flannel/releases/tag/v0.25.6)
+### Google
+* Add `controller_disk_type` and `worker_disk_type` variables ([#1513](https://github.com/poseidon/typhoon/pull/1513))
+* Add explicit `region` field to regional worker instance templates ([#1524](https://github.com/poseidon/typhoon/pull/1524))
+## v1.31.0
+* Kubernetes [v1.31.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1310)
+* Use Cilium kube-proxy replacement mode when `cilium` networking is chosen ([#1501](https://github.com/poseidon/typhoon/pull/1501))
+* Fix invalid flannel-cni container image for those using `flannel` networking ([#1497](https://github.com/poseidon/typhoon/pull/1497))
+### AWS
+* Use EC2 resource-based hostnames instead of IP-based hostnames ([#1499](https://github.com/poseidon/typhoon/pull/1499))
+  * The Amazon DNS server can resolve A and AAAA queries to IPv4 and IPv6 node addresses
+* Tag controller node EBS volumes with a name based on the controller node name
+### Google
+* Use `google_compute_region_instance_template` instead of `google_compute_instance_template`
+  * Google's regional instance template metadata is kept in the associated region for greater resiliency. The "global" instance templates were kept in a single region
+## v1.30.4
+* Kubernetes [v1.30.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1304)
+* Update Cilium from v1.15.7 to [v1.16.1](https://github.com/cilium/cilium/releases/tag/v1.16.1)
+* Update CoreDNS from v1.11.1 to v1.11.3
+* Remove `enable_aggregation` variable for Kubernetes Aggregation Layer, always set to true
+* Remove `cluster_domain_suffix` variable, always use "cluster.local"
+* Remove `enable_reporting` variable for analytics, always set to false
 ## v1.30.3
@@ -18,12 +64,12 @@ Notable changes between versions.
 ### AWS
-* Allow configuring controller and worker disks ([#1482](https://github.com/poseidon/typhoon/pull/1482))
+* Configure controller and worker disks ([#1482](https://github.com/poseidon/typhoon/pull/1482))
   * Add `controller_disk_type`, `controller_disk_size`, and `controller_disk_iops` variables
   * Add `worker_disk_type`, `worker_disk_size`, and `worker_disk_iops` variables
   * Remove `disk_type`, `disk_size`, and `disk_iops` variables
   * Fix propagating settings to worker disks, previously ignored
-* Allow configuring CPU pricing model for burstable instance types ([#1482](https://github.com/poseidon/typhoon/pull/1482))
+* Configure CPU pricing model for burstable instance types ([#1482](https://github.com/poseidon/typhoon/pull/1482))
   * Add `controller_cpu_credits` and `worker_cpu_credits` variables (`standard` or `unlimited`)
 * Configure controller or worker instance architecture ([#1485](https://github.com/poseidon/typhoon/pull/1485))
   * Add `controller_arch` and `worker_arch` variables (`amd64` or `arm64`)

View File

@@ -18,7 +18,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.30.3 (upstream)
+* Kubernetes v1.31.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -78,7 +78,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
   # Google Cloud
   cluster_name = "yavin"
@@ -98,6 +98,7 @@ module "yavin" {
 resource "local_file" "kubeconfig-yavin" {
   content  = module.yavin.kubeconfig-admin
   filename = "/home/user/.kube/configs/yavin-config"
+  file_permission = "0600"
 }
 ```
@@ -117,9 +118,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME ROLES STATUS AGE VERSION
-yavin-controller-0.c.example-com.internal <none> Ready 6m v1.30.3
-yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.30.3
-yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.30.3
+yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
+yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
+yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
 ```
 List the pods.
@@ -127,9 +128,10 @@ List the pods.
 ```
 $ kubectl get pods --all-namespaces
 NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system calico-node-1cs8z 2/2 Running 0 6m
-kube-system calico-node-d1l5b 2/2 Running 0 6m
-kube-system calico-node-sp9ps 2/2 Running 0 6m
+kube-system cilium-1cs8z 1/1 Running 0 6m
+kube-system cilium-d1l5b 1/1 Running 0 6m
+kube-system cilium-sp9ps 1/1 Running 0 6m
+kube-system cilium-operator-68d778b448-g744f 1/1 Running 0 6m
 kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
 kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
 kube-system kube-apiserver-controller-0 1/1 Running 0 6m

View File

@@ -128,8 +128,8 @@ resource "kubernetes_config_map" "cilium" {
 enable-bpf-masquerade = "true"
 # kube-proxy
-kube-proxy-replacement = "false"
-kube-proxy-replacement-healthz-bind-address = ""
+kube-proxy-replacement = "true"
+kube-proxy-replacement-healthz-bind-address = ":10256"
 enable-session-affinity = "true"
 # ClusterIPs from host namespace

View File

@@ -61,7 +61,7 @@ resource "kubernetes_daemonset" "cilium" {
 # https://github.com/cilium/cilium/pull/24075
 init_container {
 name = "install-cni"
-image = "quay.io/cilium/cilium:v1.16.0"
+image = "quay.io/cilium/cilium:v1.16.4"
 command = ["/install-plugin.sh"]
 security_context {
 allow_privilege_escalation = true
@@ -80,7 +80,7 @@ resource "kubernetes_daemonset" "cilium" {
 # We use nsenter command with host's cgroup and mount namespaces enabled.
 init_container {
 name = "mount-cgroup"
-image = "quay.io/cilium/cilium:v1.16.0"
+image = "quay.io/cilium/cilium:v1.16.4"
 command = [
 "sh",
 "-ec",
@@ -115,7 +115,7 @@ resource "kubernetes_daemonset" "cilium" {
 init_container {
 name = "clean-cilium-state"
-image = "quay.io/cilium/cilium:v1.16.0"
+image = "quay.io/cilium/cilium:v1.16.4"
 command = ["/init-container.sh"]
 security_context {
 allow_privilege_escalation = true
@@ -139,7 +139,7 @@ resource "kubernetes_daemonset" "cilium" {
 container {
 name = "cilium-agent"
-image = "quay.io/cilium/cilium:v1.16.0"
+image = "quay.io/cilium/cilium:v1.16.4"
 command = ["cilium-agent"]
 args = [
 "--config-dir=/tmp/cilium/config-map"

View File

@@ -58,7 +58,7 @@ resource "kubernetes_deployment" "operator" {
 enable_service_links = false
 container {
 name = "cilium-operator"
-image = "quay.io/cilium/operator-generic:v1.16.0"
+image = "quay.io/cilium/operator-generic:v1.16.4"
 command = ["cilium-operator-generic"]
 args = [
 "--config-dir=/tmp/cilium/config-map",

View File

@@ -77,7 +77,7 @@ resource "kubernetes_deployment" "coredns" {
 }
 container {
 name = "coredns"
-image = "registry.k8s.io/coredns/coredns:v1.11.3"
+image = "registry.k8s.io/coredns/coredns:v1.12.0"
 args = ["-conf", "/etc/coredns/Corefile"]
 port {
 name = "dns"

View File

@@ -73,7 +73,7 @@ resource "kubernetes_daemonset" "flannel" {
 container {
 name = "flannel"
-image = "docker.io/flannel/flannel:v0.25.5"
+image = "docker.io/flannel/flannel:v0.26.1"
 command = [
 "/opt/bin/flanneld",
 "--ip-masq",

View File

@@ -59,4 +59,11 @@ rules:
   - get
   - list
   - watch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - "endpointslices"
+  verbs:
+  - get
+  - list
+  - watch

View File

@@ -59,4 +59,11 @@ rules:
   - get
   - list
   - watch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - "endpointslices"
+  verbs:
+  - get
+  - list
+  - watch

View File

@@ -59,4 +59,11 @@ rules:
   - get
   - list
   - watch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - "endpointslices"
+  verbs:
+  - get
+  - list
+  - watch

View File

@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: ingress-controller-public
+  name: nginx-ingress-controller
   namespace: ingress
   annotations:
     prometheus.io/scrape: 'true'
@@ -10,7 +10,7 @@ spec:
   type: ClusterIP
   clusterIP: 10.3.0.12
   selector:
-    name: ingress-controller-public
+    name: nginx-ingress-controller
     phase: prod
   ports:
   - name: http

View File

@@ -59,4 +59,11 @@ rules:
   - get
   - list
   - watch
+- apiGroups:
+  - discovery.k8s.io
+  resources:
+  - "endpointslices"
+  verbs:
+  - get
+  - list
+  - watch

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.30.3 (upstream)
+* Kubernetes v1.31.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,9 +9,6 @@ module "bootstrap" {
 network_mtu = var.network_mtu
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
-cluster_domain_suffix = var.cluster_domain_suffix
-enable_reporting = var.enable_reporting
-enable_aggregation = var.enable_aggregation
 daemonset_tolerations = var.daemonset_tolerations
 components = var.components
 }

View File

@@ -57,7 +57,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -116,7 +116,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.30.3
+quay.io/poseidon/kubelet:v1.31.3
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -149,7 +149,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: ${cluster_domain_suffix}
+clusterDomain: cluster.local
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

View File

@@ -20,10 +20,8 @@ resource "aws_instance" "controllers" {
 tags = {
 Name = "${var.cluster_name}-controller-${count.index}"
 }
 instance_type = var.controller_type
 ami = var.controller_arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
-user_data = data.ct_config.controllers.*.rendered[count.index]
 # storage
 root_block_device {
@@ -31,7 +29,9 @@ resource "aws_instance" "controllers" {
 volume_size = var.controller_disk_size
 iops = var.controller_disk_iops
 encrypted = true
-tags = {}
+tags = {
+Name = "${var.cluster_name}-controller-${count.index}"
+}
 }
 # network
@@ -39,6 +39,10 @@ resource "aws_instance" "controllers" {
 subnet_id = element(aws_subnet.public.*.id, count.index)
 vpc_security_group_ids = [aws_security_group.controller.id]
+# boot
+user_data = data.ct_config.controllers.*.rendered[count.index]
+# cost
 credit_specification {
 cpu_credits = var.controller_cpu_credits
 }
@@ -65,7 +69,6 @@ data "ct_config" "controllers" {
 kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-cluster_domain_suffix = var.cluster_domain_suffix
 })
 strict = true
 snippets = var.controller_snippets

View File

@@ -47,17 +47,25 @@ resource "aws_route" "egress-ipv6" {
 resource "aws_subnet" "public" {
 count = length(data.aws_availability_zones.all.names)
-vpc_id = aws_vpc.network.id
-availability_zone = data.aws_availability_zones.all.names[count.index]
-cidr_block = cidrsubnet(var.host_cidr, 4, count.index)
-ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
-map_public_ip_on_launch = true
-assign_ipv6_address_on_creation = true
 tags = {
 "Name" = "${var.cluster_name}-public-${count.index}"
 }
+vpc_id = aws_vpc.network.id
+availability_zone = data.aws_availability_zones.all.names[count.index]
+# IPv4 and IPv6 CIDR blocks
+cidr_block = cidrsubnet(var.host_cidr, 4, count.index)
+ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
+# Assign IPv4 and IPv6 addresses to instances
+map_public_ip_on_launch = true
+assign_ipv6_address_on_creation = true
+# Hostnames assigned to instances
+# resource-name: <ec2-instance-id>.region.compute.internal
+private_dns_hostname_type_on_launch = "resource-name"
+enable_resource_name_dns_a_record_on_launch = true
+enable_resource_name_dns_aaaa_record_on_launch = true
 }
 resource "aws_route_table_association" "public" {

View File

@@ -164,32 +164,12 @@ EOD
 default = "10.3.0.0/16"
 }
-variable "enable_reporting" {
-type = bool
-description = "Enable usage or analytics reporting to upstreams (Calico)"
-default = false
-}
-variable "enable_aggregation" {
-type = bool
-description = "Enable the Kubernetes Aggregation Layer"
-default = true
-}
 variable "worker_node_labels" {
 type = list(string)
 description = "List of initial worker node labels"
 default = []
 }
-# unofficial, undocumented, unsupported
-variable "cluster_domain_suffix" {
-type = string
-description = "Queries for domains with the suffix will be answered by CoreDNS. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
-default = "cluster.local"
-}
 # advanced
 variable "controller_arch" {
View File

@@ -6,9 +6,11 @@ module "workers" {
 vpc_id = aws_vpc.network.id
 subnet_ids = aws_subnet.public.*.id
 security_groups = [aws_security_group.worker.id]
+# instances
+os_stream = var.os_stream
 worker_count = var.worker_count
 instance_type = var.worker_type
-os_stream = var.os_stream
 arch = var.worker_arch
 disk_type = var.worker_disk_type
 disk_size = var.worker_disk_size
@@ -21,7 +23,6 @@ module "workers" {
 kubeconfig = module.bootstrap.kubeconfig-kubelet
 ssh_authorized_key = var.ssh_authorized_key
 service_cidr = var.service_cidr
-cluster_domain_suffix = var.cluster_domain_suffix
 snippets = var.worker_snippets
 node_labels = var.worker_node_labels
 }

View File

@@ -29,7 +29,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -104,7 +104,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: ${cluster_domain_suffix}
+clusterDomain: cluster.local
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

View File

@@ -108,12 +108,6 @@ EOD
 default = "10.3.0.0/16"
 }
-variable "cluster_domain_suffix" {
-type = string
-description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
-default = "cluster.local"
-}
 variable "node_labels" {
 type = list(string)
 description = "List of initial node labels"
@@ -126,15 +120,14 @@ variable "node_taints" {
 default = []
 }
-# unofficial, undocumented, unsupported
+# advanced
 variable "arch" {
 type = string
 description = "Container architecture (amd64 or arm64)"
 default = "amd64"
 validation {
-condition = var.arch == "amd64" || var.arch == "arm64"
+condition = contains(["amd64", "arm64"], var.arch)
 error_message = "The arch must be amd64 or arm64."
 }
 }

View File

@@ -6,13 +6,11 @@ resource "aws_autoscaling_group" "workers" {
 desired_capacity = var.worker_count
 min_size = var.worker_count
 max_size = var.worker_count + 2
-default_cooldown = 30
-health_check_grace_period = 30
 # network
 vpc_zone_identifier = var.subnet_ids
-# template
+# instance template
 launch_template {
 id = aws_launch_template.worker.id
 version = aws_launch_template.worker.latest_version
@@ -32,6 +30,11 @@ resource "aws_autoscaling_group" "workers" {
 min_healthy_percentage = 90
 }
 }
+# Grace period before checking new instance's health
+health_check_grace_period = 30
+# Cooldown period between scaling activities
+default_cooldown = 30
 lifecycle {
 # override the default destroy and replace update behavior
@@ -56,11 +59,6 @@ resource "aws_launch_template" "worker" {
 name_prefix = "${var.name}-worker"
 image_id = local.ami_id
 instance_type = var.instance_type
-monitoring {
-enabled = false
-}
-user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 # storage
 ebs_optimized = true
@@ -76,14 +74,26 @@ resource "aws_launch_template" "worker" {
 }
 # network
-vpc_security_group_ids = var.security_groups
+network_interfaces {
+associate_public_ip_address = true
+security_groups = var.security_groups
+}
+# boot
+user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 # metadata
 metadata_options {
 http_tokens = "optional"
 }
+monitoring {
+enabled = false
+}
-# spot
+# cost
+credit_specification {
+cpu_credits = var.cpu_credits
+}
 dynamic "instance_market_options" {
 for_each = var.spot_price > 0 ? [1] : []
 content {
@@ -94,10 +104,6 @@ resource "aws_launch_template" "worker" {
 }
 }
-credit_specification {
-cpu_credits = var.cpu_credits
-}
 lifecycle {
 // Override the default destroy and replace update behavior
 create_before_destroy = true
@@ -111,7 +117,6 @@ data "ct_config" "worker" {
 kubeconfig = indent(10, var.kubeconfig)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.30.3 (upstream)
+* Kubernetes v1.31.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,9 +9,6 @@ module "bootstrap" {
 network_mtu = var.network_mtu
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
-cluster_domain_suffix = var.cluster_domain_suffix
-enable_reporting = var.enable_reporting
-enable_aggregation = var.enable_aggregation
 daemonset_tolerations = var.daemonset_tolerations
 components = var.components
 }

View File

@@ -58,7 +58,7 @@ systemd:
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -109,7 +109,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \
@@ -148,7 +148,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: ${cluster_domain_suffix}
+clusterDomain: cluster.local
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

View File

@@ -20,11 +20,8 @@ resource "aws_instance" "controllers" {
 tags = {
 Name = "${var.cluster_name}-controller-${count.index}"
 }
 instance_type = var.controller_type
 ami = local.ami_id
-user_data = data.ct_config.controllers.*.rendered[count.index]
 # storage
 root_block_device {
@@ -32,7 +29,9 @@ resource "aws_instance" "controllers" {
 volume_size = var.controller_disk_size
 iops = var.controller_disk_iops
 encrypted = true
-tags = {}
+tags = {
+Name = "${var.cluster_name}-controller-${count.index}"
+}
 }
 # network
@@ -40,6 +39,10 @@ resource "aws_instance" "controllers" {
 subnet_id = element(aws_subnet.public.*.id, count.index)
 vpc_security_group_ids = [aws_security_group.controller.id]
+# boot
+user_data = data.ct_config.controllers.*.rendered[count.index]
+# cost
 credit_specification {
 cpu_credits = var.controller_cpu_credits
 }
@@ -66,7 +69,6 @@ data "ct_config" "controllers" {
 kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-cluster_domain_suffix = var.cluster_domain_suffix
 })
 strict = true
 snippets = var.controller_snippets

View File

@@ -47,17 +47,25 @@ resource "aws_route" "egress-ipv6" {
 resource "aws_subnet" "public" {
 count = length(data.aws_availability_zones.all.names)
-vpc_id = aws_vpc.network.id
-availability_zone = data.aws_availability_zones.all.names[count.index]
-cidr_block = cidrsubnet(var.host_cidr, 4, count.index)
-ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
-map_public_ip_on_launch = true
-assign_ipv6_address_on_creation = true
 tags = {
 "Name" = "${var.cluster_name}-public-${count.index}"
 }
+vpc_id = aws_vpc.network.id
+availability_zone = data.aws_availability_zones.all.names[count.index]
+# IPv4 and IPv6 CIDR blocks
+cidr_block = cidrsubnet(var.host_cidr, 4, count.index)
+ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
+# Assign IPv4 and IPv6 addresses to instances
+map_public_ip_on_launch = true
+assign_ipv6_address_on_creation = true
+# Hostnames assigned to instances
+# resource-name: <ec2-instance-id>.region.compute.internal
+private_dns_hostname_type_on_launch = "resource-name"
+enable_resource_name_dns_a_record_on_launch = true
+enable_resource_name_dns_aaaa_record_on_launch = true
 }
 resource "aws_route_table_association" "public" {

View File

@@ -164,31 +164,13 @@ EOD
 default = "10.3.0.0/16"
 }
-variable "enable_reporting" {
-type = bool
-description = "Enable usage or analytics reporting to upstreams (Calico)"
-default = false
-}
-variable "enable_aggregation" {
-type = bool
-description = "Enable the Kubernetes Aggregation Layer"
-default = true
-}
 variable "worker_node_labels" {
 type = list(string)
 description = "List of initial worker node labels"
 default = []
 }
-# unofficial, undocumented, unsupported
-variable "cluster_domain_suffix" {
-type = string
-description = "Queries for domains with the suffix will be answered by CoreDNS. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
-default = "cluster.local"
-}
+# advanced
 variable "controller_arch" {
 type = string
@@ -210,7 +192,6 @@ variable "worker_arch" {
 }
 }
 variable "daemonset_tolerations" {
 type = list(string)
 description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"

View File

@@ -6,9 +6,11 @@ module "workers" {
 vpc_id = aws_vpc.network.id
 subnet_ids = aws_subnet.public.*.id
 security_groups = [aws_security_group.worker.id]
+# instances
+os_image = var.os_image
 worker_count = var.worker_count
 instance_type = var.worker_type
-os_image = var.os_image
 arch = var.worker_arch
 disk_type = var.worker_disk_type
 disk_size = var.worker_disk_size
@@ -20,7 +22,6 @@ module "workers" {
 kubeconfig = module.bootstrap.kubeconfig-kubelet
 ssh_authorized_key = var.ssh_authorized_key
 service_cidr = var.service_cidr
-cluster_domain_suffix = var.cluster_domain_suffix
 snippets = var.worker_snippets
 node_labels = var.worker_node_labels
 }

View File

@@ -30,7 +30,7 @@ systemd:
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -103,7 +103,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: ${cluster_domain_suffix}
+clusterDomain: cluster.local
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

View File

@@ -108,12 +108,6 @@ EOD
 default = "10.3.0.0/16"
 }
-variable "cluster_domain_suffix" {
-type = string
-description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
-default = "cluster.local"
-}
 variable "node_labels" {
 type = list(string)
 description = "List of initial node labels"
@@ -134,7 +128,7 @@ variable "arch" {
 default = "amd64"
 validation {
-condition = var.arch == "amd64" || var.arch == "arm64"
+condition = contains(["amd64", "arm64"], var.arch)
 error_message = "The arch must be amd64 or arm64."
 }
 }

View File

@@ -6,13 +6,11 @@ resource "aws_autoscaling_group" "workers" {
 desired_capacity = var.worker_count
 min_size = var.worker_count
 max_size = var.worker_count + 2
-default_cooldown = 30
-health_check_grace_period = 30
 # network
 vpc_zone_identifier = var.subnet_ids
-# template
+# instance template
 launch_template {
 id = aws_launch_template.worker.id
 version = aws_launch_template.worker.latest_version
@@ -32,6 +30,10 @@ resource "aws_autoscaling_group" "workers" {
 min_healthy_percentage = 90
 }
 }
+# Grace period before checking new instance's health
+health_check_grace_period = 30
+# Cooldown period between scaling activities
+default_cooldown = 30
 lifecycle {
 # override the default destroy and replace update behavior
@@ -56,11 +58,6 @@ resource "aws_launch_template" "worker" {
 name_prefix = "${var.name}-worker"
 image_id = local.ami_id
 instance_type = var.instance_type
-monitoring {
-enabled = false
-}
-user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 # storage
 ebs_optimized = true
@@ -76,14 +73,26 @@ resource "aws_launch_template" "worker" {
 }
 # network
-vpc_security_group_ids = var.security_groups
+network_interfaces {
+associate_public_ip_address = true
+security_groups = var.security_groups
+}
+# boot
+user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 # metadata
 metadata_options {
 http_tokens = "optional"
 }
+monitoring {
+enabled = false
+}
-# spot
+# cost
+credit_specification {
+cpu_credits = var.cpu_credits
+}
 dynamic "instance_market_options" {
 for_each = var.spot_price > 0 ? [1] : []
 content {
@@ -94,10 +103,6 @@ resource "aws_launch_template" "worker" {
 }
 }
-credit_specification {
-cpu_credits = var.cpu_credits
-}
 lifecycle {
 // Override the default destroy and replace update behavior
 create_before_destroy = true
@@ -111,7 +116,6 @@ data "ct_config" "worker" {
 kubeconfig = indent(10, var.kubeconfig)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.30.3 (upstream)
+* Kubernetes v1.31.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -14,9 +14,6 @@ module "bootstrap" {
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
-cluster_domain_suffix = var.cluster_domain_suffix
-enable_reporting = var.enable_reporting
-enable_aggregation = var.enable_aggregation
 daemonset_tolerations = var.daemonset_tolerations
 components = var.components
 }

View File

@@ -54,7 +54,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -111,7 +111,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.30.3
+quay.io/poseidon/kubelet:v1.31.3
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -144,7 +144,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: ${cluster_domain_suffix}
+clusterDomain: cluster.local
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

View File

@@ -163,7 +163,6 @@ data "ct_config" "controllers" {
 kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-cluster_domain_suffix = var.cluster_domain_suffix
 })
 strict = true
 snippets = var.controller_snippets

View File

@@ -27,7 +27,6 @@ variable "os_image" {
 description = "Fedora CoreOS image for instances"
 }
 variable "controller_count" {
 type = number
 description = "Number of controllers (i.e. masters)"
@@ -145,31 +144,13 @@ EOD
 default = "10.3.0.0/16"
 }
-variable "enable_reporting" {
-type = bool
-description = "Enable usage or analytics reporting to upstreams (Calico)"
-default = false
-}
-variable "enable_aggregation" {
-type = bool
-description = "Enable the Kubernetes Aggregation Layer"
-default = true
-}
 variable "worker_node_labels" {
 type = list(string)
 description = "List of initial worker node labels"
 default = []
 }
-# unofficial, undocumented, unsupported
-variable "cluster_domain_suffix" {
-type = string
-description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
-default = "cluster.local"
-}
+# advanced
 variable "daemonset_tolerations" {
 type = list(string)

View File

@ -3,7 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8"
null = ">= 2.1" null = ">= 2.1"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"

View File

@ -9,9 +9,10 @@ module "workers" {
security_group_id = azurerm_network_security_group.worker.id security_group_id = azurerm_network_security_group.worker.id
backend_address_pool_ids = local.backend_address_pool_ids backend_address_pool_ids = local.backend_address_pool_ids
# instances
os_image = var.os_image
worker_count = var.worker_count worker_count = var.worker_count
vm_type = var.worker_type vm_type = var.worker_type
os_image = var.os_image
disk_type = var.worker_disk_type disk_type = var.worker_disk_type
disk_size = var.worker_disk_size disk_size = var.worker_disk_size
ephemeral_disk = var.worker_ephemeral_disk ephemeral_disk = var.worker_ephemeral_disk
@ -22,7 +23,6 @@ module "workers" {
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
azure_authorized_key = var.azure_authorized_key azure_authorized_key = var.azure_authorized_key
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets snippets = var.worker_snippets
node_labels = var.worker_node_labels node_labels = var.worker_node_labels
} }

View File

@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -99,7 +99,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -120,12 +120,3 @@ variable "node_taints" {
description = "List of initial node taints" description = "List of initial node taints"
default = [] default = []
} }
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = string
default = "cluster.local"
}

View File

@ -3,7 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.13" version = "~> 0.13"

View File

@ -84,7 +84,6 @@ data "ct_config" "worker" {
kubeconfig = indent(10, var.kubeconfig) kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
}) })

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.30.3 (upstream) * Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -14,9 +14,6 @@ module "bootstrap" {
pod_cidr = var.pod_cidr pod_cidr = var.pod_cidr
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
daemonset_tolerations = var.daemonset_tolerations daemonset_tolerations = var.daemonset_tolerations
components = var.components components = var.components
} }

View File

@ -56,7 +56,7 @@ systemd:
After=docker.service After=docker.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -105,7 +105,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
WorkingDirectory=/opt/bootstrap WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStart=/usr/bin/docker run \ ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \ -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \
@ -144,7 +144,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -185,7 +185,6 @@ data "ct_config" "controllers" {
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet) kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}) })
strict = true strict = true
snippets = var.controller_snippets snippets = var.controller_snippets

View File

@ -34,7 +34,7 @@ resource "azurerm_public_ip" "frontend-ipv4" {
# Static IPv6 address for the load balancer # Static IPv6 address for the load balancer
resource "azurerm_public_ip" "frontend-ipv6" { resource "azurerm_public_ip" "frontend-ipv6" {
name = "${var.cluster_name}-ingress-ipv6" name = "${var.cluster_name}-frontend-ipv6"
resource_group_name = azurerm_resource_group.cluster.name resource_group_name = azurerm_resource_group.cluster.name
location = var.location location = var.location
ip_version = "IPv6" ip_version = "IPv6"

View File

@ -150,18 +150,6 @@ EOD
default = "10.3.0.0/16" default = "10.3.0.0/16"
} }
variable "enable_reporting" {
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer"
default = true
}
variable "worker_node_labels" { variable "worker_node_labels" {
type = list(string) type = list(string)
description = "List of initial worker node labels" description = "List of initial worker node labels"
@ -196,14 +184,6 @@ variable "daemonset_tolerations" {
default = [] default = []
} }
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
variable "components" { variable "components" {
description = "Configure pre-installed cluster components" description = "Configure pre-installed cluster components"
# Component configs are passed through to terraform-render-bootstrap, # Component configs are passed through to terraform-render-bootstrap,

View File

@ -3,7 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8"
null = ">= 2.1" null = ">= 2.1"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"

View File

@ -22,7 +22,6 @@ module "workers" {
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
azure_authorized_key = var.azure_authorized_key azure_authorized_key = var.azure_authorized_key
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets snippets = var.worker_snippets
node_labels = var.worker_node_labels node_labels = var.worker_node_labels
arch = var.worker_arch arch = var.worker_arch

View File

@ -28,7 +28,7 @@ systemd:
After=docker.service After=docker.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -99,7 +99,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -137,12 +137,3 @@ variable "arch" {
error_message = "The arch must be amd64 or arm64." error_message = "The arch must be amd64 or arm64."
} }
} }
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = string
default = "cluster.local"
}

View File

@ -3,7 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.13" version = "~> 0.13"

View File

@ -105,7 +105,6 @@ data "ct_config" "worker" {
kubeconfig = indent(10, var.kubeconfig) kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
}) })

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.30.3 (upstream) * Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name] api_servers = [var.k8s_domain_name]
@ -10,9 +10,6 @@ module "bootstrap" {
network_ip_autodetection_method = var.network_ip_autodetection_method network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr pod_cidr = var.pod_cidr
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
components = var.components components = var.components
} }

View File

@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -113,7 +113,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
WorkingDirectory=/opt/bootstrap WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=-/usr/bin/podman rm bootstrap ExecStartPre=-/usr/bin/podman rm bootstrap
ExecStart=/usr/bin/podman run --name bootstrap \ ExecStart=/usr/bin/podman run --name bootstrap \
--network host \ --network host \
@ -154,7 +154,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -59,7 +59,6 @@ data "ct_config" "controllers" {
etcd_name = var.controllers.*.name[count.index] etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain)) etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
}) })
strict = true strict = true

View File

@ -139,25 +139,7 @@ variable "kernel_args" {
default = [] default = []
} }
variable "enable_reporting" { # advanced
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer"
default = true
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = string
default = "cluster.local"
}
variable "components" { variable "components" {
description = "Configure pre-installed cluster components" description = "Configure pre-installed cluster components"

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -108,7 +108,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -53,7 +53,6 @@ data "ct_config" "worker" {
domain_name = var.domain domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
}) })

View File

@ -103,9 +103,3 @@ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for
EOD EOD
default = "10.3.0.0/16" default = "10.3.0.0/16"
} }
variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = string
default = "cluster.local"
}

View File

@ -18,7 +18,6 @@ module "workers" {
kubeconfig = module.bootstrap.kubeconfig-kubelet kubeconfig = module.bootstrap.kubeconfig-kubelet
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, []) node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, []) node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
snippets = lookup(var.snippets, var.workers[count.index].name, []) snippets = lookup(var.snippets, var.workers[count.index].name, [])

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.30.3 (upstream) * Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name] api_servers = [var.k8s_domain_name]
@ -10,9 +10,6 @@ module "bootstrap" {
network_ip_autodetection_method = var.network_ip_autodetection_method network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr pod_cidr = var.pod_cidr
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
components = var.components components = var.components
} }

View File

@ -64,7 +64,7 @@ systemd:
After=docker.service After=docker.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -114,7 +114,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
WorkingDirectory=/opt/bootstrap WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStart=/usr/bin/docker run \ ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \ -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \
@ -155,7 +155,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -89,7 +89,6 @@ data "ct_config" "controllers" {
etcd_name = var.controllers.*.name[count.index] etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain)) etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
}) })
strict = true strict = true

View File

@ -150,18 +150,6 @@ variable "kernel_args" {
default = [] default = []
} }
variable "enable_reporting" {
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer"
default = true
}
variable "oem_type" { variable "oem_type" {
type = string type = string
description = <<EOD description = <<EOD
@ -173,13 +161,7 @@ EOD
default = "" default = ""
} }
# unofficial, undocumented, unsupported # advanced
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
variable "components" { variable "components" {
description = "Configure pre-installed cluster components" description = "Configure pre-installed cluster components"

View File

@ -36,7 +36,7 @@ systemd:
After=docker.service After=docker.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -113,7 +113,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -80,7 +80,6 @@ data "ct_config" "worker" {
domain_name = var.domain domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
}) })

View File

@ -120,13 +120,3 @@ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for
EOD EOD
default = "10.3.0.0/16" default = "10.3.0.0/16"
} }
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}

View File

@ -18,7 +18,6 @@ module "workers" {
kubeconfig = module.bootstrap.kubeconfig-kubelet kubeconfig = module.bootstrap.kubeconfig-kubelet
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, []) node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, []) node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
snippets = lookup(var.snippets, var.workers[count.index].name, []) snippets = lookup(var.snippets, var.workers[count.index].name, [])

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.30.3 (upstream) * Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -13,9 +13,6 @@ module "bootstrap" {
pod_cidr = var.pod_cidr pod_cidr = var.pod_cidr
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
components = var.components components = var.components
} }

View File

@ -55,7 +55,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -123,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.30.3 quay.io/poseidon/kubelet:v1.31.3
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -151,7 +151,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -28,7 +28,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -104,7 +104,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -74,7 +74,6 @@ data "ct_config" "controllers" {
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380" for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
]) ])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}) })
strict = true strict = true
snippets = var.controller_snippets snippets = var.controller_snippets

View File

@ -86,25 +86,7 @@ EOD
default = "10.3.0.0/16" default = "10.3.0.0/16"
} }
variable "enable_reporting" { # advanced
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer"
default = true
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
variable "components" { variable "components" {
description = "Configure pre-installed cluster components" description = "Configure pre-installed cluster components"

View File

@ -62,7 +62,6 @@ resource "digitalocean_tag" "workers" {
data "ct_config" "worker" { data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", { content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}) })
strict = true strict = true
snippets = var.worker_snippets snippets = var.worker_snippets

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.30.3 (upstream) * Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -13,9 +13,6 @@ module "bootstrap" {
pod_cidr = var.pod_cidr pod_cidr = var.pod_cidr
service_cidr = var.service_cidr service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
components = var.components components = var.components
} }

View File

@ -66,7 +66,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -117,7 +117,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
WorkingDirectory=/opt/bootstrap WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
ExecStart=/usr/bin/docker run \ ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \ -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \
@ -153,7 +153,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -38,7 +38,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -103,7 +103,7 @@ storage:
cgroupDriver: systemd cgroupDriver: systemd
clusterDNS: clusterDNS:
- ${cluster_dns_service_ip} - ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix} clusterDomain: cluster.local
healthzPort: 0 healthzPort: 0
rotateCertificates: true rotateCertificates: true
shutdownGracePeriod: 45s shutdownGracePeriod: 45s

View File

@ -79,7 +79,6 @@ data "ct_config" "controllers" {
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380" for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
]) ])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}) })
strict = true strict = true
snippets = var.controller_snippets snippets = var.controller_snippets

View File

@ -86,25 +86,7 @@ EOD
default = "10.3.0.0/16" default = "10.3.0.0/16"
} }
variable "enable_reporting" { # advanced
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer"
default = true
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
variable "components" { variable "components" {
description = "Configure pre-installed cluster components" description = "Configure pre-installed cluster components"

View File

@ -60,7 +60,6 @@ resource "digitalocean_tag" "workers" {
data "ct_config" "worker" { data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", { content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}) })
strict = true strict = true
snippets = var.worker_snippets snippets = var.worker_snippets

View File

@ -1,13 +1,11 @@
# ARM64 # ARM64
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster. Typhoon supports Kubernetes clusters with ARM64 controller or worker nodes on several platforms:
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
* AWS with Fedora CoreOS or Flatcar Linux * AWS with Fedora CoreOS or Flatcar Linux
* Azure with Flatcar Linux * Azure with Flatcar Linux
## Cluster ## AWS
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images. Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
@ -15,24 +13,23 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
dns_zone = "aws.example.com" dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0" dns_zone_id = "Z3PAABBCFAKEC0"
# configuration # instances
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..." controller_type = "t4g.small"
controller_arch = "arm64"
# optional
arch = "arm64"
networking = "cilium"
worker_count = 2 worker_count = 2
worker_type = "t4g.small"
worker_arch = "arm64"
worker_price = "0.0168" worker_price = "0.0168"
controller_type = "t4g.small" # configuration
worker_type = "t4g.small" ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
} }
``` ```
@ -40,24 +37,23 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
dns_zone = "aws.example.com" dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0" dns_zone_id = "Z3PAABBCFAKEC0"
# configuration # instances
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..." controller_type = "t4g.small"
controller_arch = "arm64"
# optional
arch = "arm64"
networking = "cilium"
worker_count = 2 worker_count = 2
worker_type = "t4g.small"
worker_arch = "arm64"
worker_price = "0.0168" worker_price = "0.0168"
controller_type = "t4g.small" # configuration
worker_type = "t4g.small" ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
} }
``` ```
@ -66,118 +62,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
``` ```
$ kubectl get nodes -o wide $ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.30.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-21-119 Ready <none> 77s v1.31.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.30.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-32-166 Ready <none> 80s v1.31.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.30.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-5-79 Ready <none> 77s v1.31.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```
## Hybrid
Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [worker pool](worker-pools.md#aws) with ARM64 workers. Optional taints are added to aid in scheduling.
=== "FCOS Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
networking = "cilium"
worker_count = 2
worker_price = "0.021"
daemonset_tolerations = ["arch"] # important
}
```
=== "Flatcar Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
networking = "cilium"
worker_count = 2
worker_price = "0.021"
daemonset_tolerations = ["arch"] # important
}
```
=== "FCOS ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
node_taints = ["arch=arm64:NoSchedule"]
}
```
=== "Flatcar ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
node_taints = ["arch=arm64:NoSchedule"]
}
```
Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.30.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.30.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.30.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.30.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
``` ```
## Azure ## Azure
@ -186,7 +73,7 @@ Create a cluster on Azure with ARM64 controller and worker nodes. Container work
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
@ -194,13 +81,128 @@ module "ramius" {
dns_zone = "azure.example.com" dns_zone = "azure.example.com"
dns_zone_group = "example-group" dns_zone_group = "example-group"
# instances
controller_arch = "arm64"
controller_type = "Standard_B2pls_v5"
worker_count = 2
controller_arch = "arm64"
worker_type = "Standard_D2pls_v5"
# configuration # configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..." ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
arch = "arm64"
controller_type = "Standard_D2pls_v5"
worker_type = "Standard_D2pls_v5"
worker_count = 2
} }
``` ```
## Hybrid
Create a hybrid/mixed-arch cluster by defining a cluster in which [worker pool(s)](worker-pools.md#aws) use a different instance architecture than the controllers or other workers. Taints are added to aid in scheduling.
Here's an AWS example,
=== "FCOS Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
worker_count = 2
worker_arch = "arm64"
worker_type = "t4g.medium"
worker_price = "0.021"
# configuration
daemonset_tolerations = ["arch"] # important
networking = "cilium"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```
=== "Flatcar Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
worker_count = 2
worker_arch = "arm64"
worker_type = "t4g.medium"
worker_price = "0.021"
# configuration
daemonset_tolerations = ["arch"] # important
networking = "cilium"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```
=== "FCOS ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# instances
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
node_taints = ["arch=arm64:NoSchedule"]
ssh_authorized_key = var.ssh_authorized_key
}
```
=== "Flatcar ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# instances
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
node_taints = ["arch=arm64:NoSchedule"]
ssh_authorized_key = var.ssh_authorized_key
}
```
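To run a workload on the tainted arm64 pool, give its pods a matching toleration and an arch node selector. A minimal pod spec sketch, assuming the `arch=arm64:NoSchedule` taint used in the examples above:
```yaml
# Sketch only: pod spec fragment for scheduling onto the tainted arm64 workers.
# Assumes the arch=arm64:NoSchedule taint shown in the worker pool examples.
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
    - key: arch
      operator: Equal
      value: arm64
      effect: NoSchedule
```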
Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.31.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.31.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.31.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.31.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```
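Architectures can also be read from the standard `kubernetes.io/arch` node label, which is quicker than inspecting kernel strings (a small aside, assuming default upstream node labels):
```
$ kubectl get nodes -L kubernetes.io/arch
```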

View File

@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf ```tf
module "yavin-pool" { module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf ```tf
module "yavin-pool" { module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"

View File

@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf ```tf
module "tempest-worker-pool" { module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# AWS # AWS
vpc_id = module.tempest.vpc_id vpc_id = module.tempest.vpc_id
@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf ```tf
module "tempest-worker-pool" { module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# AWS # AWS
vpc_id = module.tempest.vpc_id vpc_id = module.tempest.vpc_id
@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf ```tf
module "ramius-worker-pool" { module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Azure # Azure
location = module.ramius.location location = module.ramius.location
@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf ```tf
module "ramius-worker-pool" { module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# Azure # Azure
location = module.ramius.location location = module.ramius.location
@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf ```tf
module "yavin-worker-pool" { module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud # Google Cloud
region = "europe-west2" region = "europe-west2"
@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf ```tf
module "yavin-worker-pool" { module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# Google Cloud # Google Cloud
region = "europe-west2" region = "europe-west2"
@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
``` ```
$ kubectl get nodes $ kubectl get nodes
NAME STATUS AGE VERSION NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.30.3 yavin-controller-0.c.example-com.internal Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.30.3 yavin-worker-jrbf.c.example-com.internal Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.30.3 yavin-worker-mzdm.c.example-com.internal Ready 5m v1.31.3
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.30.3 yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.31.3
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.30.3 yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.31.3
``` ```
### Variables ### Variables


@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on AWS with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -72,19 +72,19 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
dns_zone = "aws.example.com" dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0" dns_zone_id = "Z3PAABBCFAKEC0"
# configuration # instances
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
worker_count = 2 worker_count = 2
worker_type = "t3.small" worker_type = "t3.small"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
} }
``` ```
@ -136,6 +136,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-tempest" { resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config" filename = "/home/user/.kube/configs/tempest-config"
file_permission = "0600"
} }
``` ```
@ -145,9 +146,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.30.3 ip-10-0-3-155 Ready <none> 10m v1.31.3
ip-10-0-26-65 Ready <none> 10m v1.30.3 ip-10-0-26-65 Ready <none> 10m v1.31.3
ip-10-0-41-21 Ready <none> 10m v1.30.3 ip-10-0-41-21 Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -155,9 +156,9 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m kube-system cilium-1m5bf 1/1 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m kube-system cilium-7jmr1 1/1 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m kube-system cilium-bknc8 1/1 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
@ -206,16 +207,21 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:-----|:------------|:--------|:--------|
| os_stream | Fedora CoreOS stream for instances | "stable" | "testing", "next" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 | | controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below | | controller_type | EC2 instance type for controllers | "t3.small" | See below |
| controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_count | Number of workers | 1 | 3 |
| worker_type | EC2 instance type for workers | "t3.small" | See below | | worker_type | EC2 instance type for workers | "t3.small" | See below |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" | | worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| disk_size | Size of the EBS volume in GB | 30 | 100 | | worker_disk_type | Type of EBS volume | gp3 | io1 |
| disk_type | Type of the EBS volume | "gp3" | standard, gp2, gp3, io1 | | worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 | | worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 | | worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) | | controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) | | worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" | | networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
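The new per-role disk and CPU-credit variables replace the old shared `disk_*` settings. A sketch of how they can be combined (the values are illustrative; required settings from the tutorial above are omitted):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  # controllers get a larger, provisioned-IOPS volume
  controller_disk_size = 50
  controller_disk_type = "io1"
  controller_disk_iops = 4000

  # workers stay on gp3 with standard burst credits
  worker_disk_size   = 30
  worker_disk_type   = "gp3"
  worker_disk_iops   = 3000
  worker_cpu_credits = "standard"
}
```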
@ -228,7 +234,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/). Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! warning !!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller. Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU" !!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth. If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
@ -236,4 +242,3 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
#### Spot #### Spot
Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled. Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.
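For example, with an illustrative price cap (other settings as in the tutorial above):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  worker_count = 2
  worker_price = "0.10" # max spot price in USD; 0 keeps on-demand instances
}
```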


@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on Azure with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -86,23 +86,23 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
location = "centralus" location = "centralus"
dns_zone = "azure.example.com" dns_zone = "azure.example.com"
dns_zone_group = "example-group" dns_zone_group = "example-group"
# configuration
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
worker_count = 2
network_cidr = { network_cidr = {
ipv4 = ["10.0.0.0/20"] ipv4 = ["10.0.0.0/20"]
} }
# instances
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
worker_count = 2
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
} }
``` ```
@ -154,6 +154,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-ramius" { resource "local_file" "kubeconfig-ramius" {
content = module.ramius.kubeconfig-admin content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config" filename = "/home/user/.kube/configs/ramius-config"
file_permission = "0600"
} }
``` ```
@ -163,9 +164,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.30.3 ramius-controller-0 Ready <none> 24m v1.31.3
ramius-worker-000001 Ready <none> 25m v1.30.3 ramius-worker-000001 Ready <none> 25m v1.31.3
ramius-worker-000002 Ready <none> 24m v1.30.3 ramius-worker-000002 Ready <none> 24m v1.31.3
``` ```
List the pods. List the pods.
@ -175,9 +176,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m kube-system cilium-1m5bf 1/1 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m kube-system cilium-7jmr1 1/1 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m kube-system cilium-bknc8 1/1 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m kube-system kube-proxy-j4vpq 1/1 Running 0 26m
@ -240,10 +241,14 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 | | controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below | | controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below | | worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| disk_size | Size of the disk in GB | 30 | 100 | | worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot | | worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) | | controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Butane snippets | [] | [example](/advanced/customization/#usage) | | worker_snippets | Worker Butane snippets | [] | [example](/advanced/customization/#usage) |
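A sketch combining the new worker disk options with Spot priority (values are illustrative; required settings from the tutorial above are omitted):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  controller_disk_type = "Premium_LRS"
  controller_disk_size = 50

  worker_type           = "Standard_D2as_v5"
  worker_ephemeral_disk = true   # use the VM's local disk instead of a managed disk
  worker_priority       = "Spot" # workers may be deallocated at any time
}
```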
@ -255,9 +260,6 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier. Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.
!!! warning
Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `host_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc).
!!! warning !!! warning
Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller. Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.


@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.30.3 cluster on bare-metal with Fedora CoreOS. In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.31.3"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -191,7 +191,7 @@ Workers with similar features can be defined inline using the `workers` field as
```tf ```tf
module "mercury-node1" { module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -304,6 +304,7 @@ systemd[1]: Started Kubernetes control plane.
resource "local_file" "kubeconfig-mercury" { resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config" filename = "/home/user/.kube/configs/mercury-config"
file_permission = "0600"
} }
``` ```
@ -313,9 +314,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.30.3 node1.example.com Ready <none> 10m v1.31.3
node2.example.com Ready <none> 10m v1.30.3 node2.example.com Ready <none> 10m v1.31.3
node3.example.com Ready <none> 10m v1.30.3 node3.example.com Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -323,9 +324,10 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6qp7f 2/2 Running 1 11m kube-system cilium-6qp7f 1/1 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m kube-system cilium-gnjrm 1/1 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m kube-system cilium-llbgt 1/1 Running 0 11m
kube-system cilium-operator-68d778b448-g744f 1/1 Running 0 11m
kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m
@ -372,4 +374,3 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| kernel_args | Additional kernel args to provide at PXE boot | [] | ["kvm-intel.nested=1"] | | kernel_args | Additional kernel args to provide at PXE boot | [] | ["kvm-intel.nested=1"] |
| worker_node_labels | Map from worker name to list of initial node labels | {} | {"node2" = ["role=special"]} | | worker_node_labels | Map from worker name to list of initial node labels | {} | {"node2" = ["role=special"]} |
| worker_node_taints | Map from worker name to list of initial node taints | {} | {"node2" = ["role=special:NoSchedule"]} | | worker_node_taints | Map from worker name to list of initial node taints | {} | {"node2" = ["role=special:NoSchedule"]} |
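Both maps key off the worker's name, as in this sketch (values taken from the example column above; required settings from the tutorial are omitted):

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  kernel_args = ["kvm-intel.nested=1"]
  worker_node_labels = {
    "node2" = ["role=special"]
  }
  worker_node_taints = {
    "node2" = ["role=special:NoSchedule"]
  }
}
```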


@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on DigitalOcean with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -81,19 +81,19 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.31.3"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
region = "nyc3" region = "nyc3"
dns_zone = "digital-ocean.example.com" dns_zone = "digital-ocean.example.com"
# configuration # instances
os_image = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id os_image = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
worker_count = 2 worker_count = 2
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
} }
``` ```
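The `os_image` above references a custom image uploaded to DigitalOcean. As a sketch, it can be resolved with the provider's image data source; the image name below is illustrative, not an exact value from this document.

```tf
data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {
  # name of the custom image as uploaded to the DigitalOcean account
  name = "fedora-coreos-31.20200323.3.2-digitalocean.x86_64.qcow2.gz"
}
```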
@ -146,6 +146,7 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-nemo" { resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config" filename = "/home/user/.kube/configs/nemo-config"
file_permission = "0600"
} }
``` ```
@ -155,9 +156,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.30.3 10.132.110.130 Ready <none> 10m v1.31.3
10.132.115.81 Ready <none> 10m v1.30.3 10.132.115.81 Ready <none> 10m v1.31.3
10.132.124.107 Ready <none> 10m v1.30.3 10.132.124.107 Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -166,9 +167,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m kube-system cilium-1m5bf 1/1 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m kube-system cilium-7jmr1 1/1 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m kube-system cilium-bknc8 1/1 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m kube-system kube-proxy-6kxjf 1/1 Running 0 11m
@ -248,4 +249,3 @@ Check the list of valid [droplet types](https://developers.digitalocean.com/docu
!!! warning !!! warning
Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail. Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.


@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on Google Compute Engine with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -81,11 +81,11 @@ module "yavin" {
dns_zone = "example.com" dns_zone = "example.com"
dns_zone_name = "example-zone" dns_zone_name = "example-zone"
# instances
worker_count = 2
# configuration # configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..." ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
worker_count = 2
} }
``` ```
@ -138,6 +138,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-yavin" { resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config" filename = "/home/user/.kube/configs/yavin-config"
file_permission = "0600"
} }
``` ```
@ -147,9 +148,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.30.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.30.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.30.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
``` ```
List the pods. List the pods.
@ -157,9 +158,9 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m kube-system cilium-1cs8z 1/1 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m kube-system cilium-d1l5b 1/1 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m kube-system cilium-sp9ps 1/1 Running 0 6m
kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-controller-0 1/1 Running 0 6m kube-system kube-apiserver-controller-0 1/1 Running 0 6m
@ -210,13 +211,16 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
### Optional ### Optional
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:---------------------|:---------------------------------------------------------------------------|:----------------|:-------------------------------------|
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" | | os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" |
| disk_size | Size of the disk in GB | 30 | 100 | | controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| controller_disk_type | Controller disk type | "pd-standard" | "pd-ssd" |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_disk_type | Worker disk type | "pd-standard" | "pd-ssd" |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true | | worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) | | controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) | | worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
@ -230,4 +234,3 @@ Check the list of valid [machine types](https://cloud.google.com/compute/docs/ma
#### Preemption #### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems. Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
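Concretely, building on the `yavin` module above (values are illustrative):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  worker_count       = 3
  worker_preemptible = true # workers may be terminated within 24 hours; pods reschedule
}
```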


@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on AWS with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -72,19 +72,19 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
dns_zone = "aws.example.com" dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0" dns_zone_id = "Z3PAABBCFAKEC0"
# configuration # instances
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2 worker_count = 2
worker_type = "t3.small" worker_type = "t3.small"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
} }
``` ```
@ -136,6 +136,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-tempest" { resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config" filename = "/home/user/.kube/configs/tempest-config"
file_permission = "0600"
} }
``` ```
@ -145,9 +146,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.30.3 ip-10-0-3-155 Ready <none> 10m v1.31.3
ip-10-0-26-65 Ready <none> 10m v1.30.3 ip-10-0-26-65 Ready <none> 10m v1.31.3
ip-10-0-41-21 Ready <none> 10m v1.30.3 ip-10-0-41-21 Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -155,9 +156,9 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m kube-system cilium-1m5bf 1/1 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m kube-system cilium-7jmr1 1/1 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m kube-system cilium-bknc8 1/1 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
@ -206,16 +207,19 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha | | os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the EBS volume in GB | 30 | 100 | | controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| disk_type | Type of the EBS volume | "gp3" | standard, gp2, gp3, io1 | | controller_type | EC2 instance type for controllers | "t3.small" | See below |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 | | controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] | | controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_disk_type | Type of EBS volume | gp3 | io1 |
| worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0/null | 0.10 | | worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0/null | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) | | controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) | | worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" | | networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
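The `worker_target_groups` variable attaches workers to existing load balancer target groups. A sketch under stated assumptions: the target group, its NodePort, and the `var.vpc_id` variable are hypothetical and shown only to illustrate the wiring (the VPC id is supplied out of band to avoid a dependency cycle with the cluster module).

```tf
resource "aws_lb_target_group" "app" {
  name        = "tempest-app"
  vpc_id      = var.vpc_id # hypothetical: the cluster's VPC id, supplied separately
  port        = 30080      # illustrative NodePort
  protocol    = "TCP"
  target_type = "instance"
}

module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  worker_target_groups = [aws_lb_target_group.app.id]
}
```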
@ -228,7 +232,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/). Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! warning !!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller. Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU" !!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth. If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
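For example, assuming the chosen instance types support jumbo frames:

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
  # ... required settings as in the tutorial above

  network_mtu = 8981 # jumbo frames for better pod-to-pod bandwidth
}
```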
@ -236,4 +240,3 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
#### Spot #### Spot
Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled. Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.


@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on Azure with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -75,22 +75,22 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
location = "centralus" location = "centralus"
dns_zone = "azure.example.com" dns_zone = "azure.example.com"
dns_zone_group = "example-group" dns_zone_group = "example-group"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
network_cidr = { network_cidr = {
ipv4 = ["10.0.0.0/20"] ipv4 = ["10.0.0.0/20"]
} }
# instances
worker_count = 2
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
} }
``` ```
@ -142,6 +142,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-ramius" { resource "local_file" "kubeconfig-ramius" {
content = module.ramius.kubeconfig-admin content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config" filename = "/home/user/.kube/configs/ramius-config"
file_permission = "0600"
} }
``` ```
@ -151,9 +152,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.30.3 ramius-controller-0 Ready <none> 24m v1.31.3
ramius-worker-000001 Ready <none> 25m v1.30.3 ramius-worker-000001 Ready <none> 25m v1.31.3
ramius-worker-000002 Ready <none> 24m v1.30.3 ramius-worker-000002 Ready <none> 24m v1.31.3
``` ```
List the pods. List the pods.
@ -163,9 +164,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m kube-system cilium-1m5bf 1/1 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m kube-system cilium-7jmr1 1/1 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m kube-system cilium-bknc8 1/1 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m kube-system kube-proxy-j4vpq 1/1 Running 0 26m
@ -226,12 +227,16 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha | | os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 | | controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot | | worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) | | controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) | | worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
@ -243,9 +248,6 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier. Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.
!!! warning
Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `host_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc).
!!! warning !!! warning
Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller. Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.


@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.30.3 cluster on bare-metal with Flatcar Linux. In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.31.3"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -194,7 +194,7 @@ Workers with similar features can be defined inline using the `workers` field as
```tf ```tf
module "mercury-node1" { module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -206,13 +206,13 @@ module "mercury-node1" {
name = "node2" name = "node2"
mac = "52:54:00:b2:2f:86" mac = "52:54:00:b2:2f:86"
domain = "node2.example.com" domain = "node2.example.com"
kubeconfig = module.mercury.kubeconfig kubeconfig = module.mercury.kubeconfig-admin
ssh_authorized_key = "ssh-rsa AAAAB3Nz..." ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional # optional
snippets = [] snippets = []
node_labels = [] node_labels = []
node_tains = [] node_taints = []
install_disk = "/dev/vda" install_disk = "/dev/vda"
cached_install = false cached_install = false
} }
@ -314,6 +314,7 @@ systemd[1]: Started Kubernetes control plane.
resource "local_file" "kubeconfig-mercury" { resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config" filename = "/home/user/.kube/configs/mercury-config"
file_permission = "0600"
} }
``` ```
@ -323,9 +324,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.30.3 node1.example.com Ready <none> 10m v1.31.3
node2.example.com Ready <none> 10m v1.30.3 node2.example.com Ready <none> 10m v1.31.3
node3.example.com Ready <none> 10m v1.30.3 node3.example.com Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -333,9 +334,10 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6qp7f 2/2 Running 1 11m kube-system cilium-6qp7f 1/1 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m kube-system cilium-gnjrm 1/1 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m kube-system cilium-llbgt 1/1 Running 0 11m
kube-system cilium-operator-68d778b448-g744f 1/1 Running 0 11m
kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m


@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on DigitalOcean with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -81,19 +81,19 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.31.3"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
region = "nyc3" region = "nyc3"
dns_zone = "digital-ocean.example.com" dns_zone = "digital-ocean.example.com"
# configuration # instances
os_image = data.digitalocean_image.flatcar-stable-2303-4-0.id os_image = data.digitalocean_image.flatcar-stable-2303-4-0.id
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
worker_count = 2 worker_count = 2
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
} }
``` ```
@ -146,6 +146,7 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-nemo" { resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config" filename = "/home/user/.kube/configs/nemo-config"
file_permission = "0600"
} }
``` ```
@ -155,9 +156,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.30.3 10.132.110.130 Ready <none> 10m v1.31.3
10.132.115.81 Ready <none> 10m v1.30.3 10.132.115.81 Ready <none> 10m v1.31.3
10.132.124.107 Ready <none> 10m v1.30.3 10.132.124.107 Ready <none> 10m v1.31.3
``` ```
List the pods. List the pods.
@ -166,9 +167,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m kube-system cilium-1m5bf 1/1 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m kube-system cilium-7jmr1 1/1 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m kube-system cilium-bknc8 1/1 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m kube-system kube-proxy-6kxjf 1/1 Running 0 11m


@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.30.3 cluster on Google Compute Engine with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.30.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.31.3"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -81,11 +81,11 @@ module "yavin" {
dns_zone = "example.com" dns_zone = "example.com"
dns_zone_name = "example-zone" dns_zone_name = "example-zone"
# instances
worker_count = 2
# configuration # configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..." ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
} }
``` ```
@ -138,6 +138,7 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
resource "local_file" "kubeconfig-yavin" { resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config" filename = "/home/user/.kube/configs/yavin-config"
file_permission = "0600"
} }
``` ```
@ -147,9 +148,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.30.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.30.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.30.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
``` ```
List the pods. List the pods.
@ -157,9 +158,9 @@ List the pods.
``` ```
$ kubectl get pods --all-namespaces $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m kube-system cilium-1cs8z 1/1 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m kube-system cilium-d1l5b 1/1 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m kube-system cilium-sp9ps 1/1 Running 0 6m
kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-controller-0 1/1 Running 0 6m kube-system kube-apiserver-controller-0 1/1 Running 0 6m
@@ -210,13 +211,14 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
### Optional ### Optional
| Name | Description | Default | Example | | Name | Description | Default | Example |
|:-----|:------------|:--------|:--------| |:---------------------|:---------------------------------------------------------------------------|:-----------------|:--------------------------------------------|
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_image | Flatcar Linux image for compute instances | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha | | os_image | Flatcar Linux image for compute instances | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 | | controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true | | worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) | | controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) | | worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
@@ -230,4 +232,3 @@ Check the list of valid [machine types](https://cloud.google.com/compute/docs/ma
#### Preemption #### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems. Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
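As a minimal sketch of where that flag goes (required settings elided, as in the tutorial's module example; the quoted value mirrors the paragraph above):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.31.3"

  # ... required Google Cloud and configuration settings as shown earlier ...

  # let Compute Engine preempt workers (cheaper, but instances stop within 24 hours)
  worker_preemptible = "true"
}
```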

Some files were not shown because too many files have changed in this diff.