Merge remote-tracking branch 'upstream/main'

This commit is contained in:
2024-12-02 11:05:29 +01:00
173 changed files with 4505 additions and 1838 deletions


@ -37,7 +37,7 @@ resource "google_dns_record_set" "some-application" {
## Azure
On Azure, a load balancer distributes traffic across a backend address pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
On Azure, an Azure Load Balancer distributes IPv4/IPv6 traffic across backend address pools of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and namespace.
@ -53,10 +53,10 @@ app2.example.com -> 11.22.33.44
app3.example.com -> 11.22.33.44
```
Find the load balancer's IPv4 address with the Azure console or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:
Find the load balancer's addresses with the Azure console or use the Typhoon module's outputs `ingress_static_ipv4` or `ingress_static_ipv6`. For example, you might use Terraform to manage a Google Cloud DNS record:
```tf
resource "google_dns_record_set" "some-application" {
resource "google_dns_record_set" "app-record-a" {
# DNS zone name
managed_zone = "example-zone"
@ -66,6 +66,17 @@ resource "google_dns_record_set" "some-application" {
ttl = 300
rrdatas = [module.ramius.ingress_static_ipv4]
}
resource "google_dns_record_set" "app-record-aaaa" {
# DNS zone name
managed_zone = "example-zone"
# DNS record
name = "app.example.com."
type = "AAAA"
ttl = 300
rrdatas = [module.ramius.ingress_static_ipv6]
}
```
## Bare-Metal


@ -1,9 +1,131 @@
# Addons
# Components
Typhoon clusters are verified to work well with several post-install addons.
Typhoon's component model allows managing cluster components independently of the cluster's lifecycle, upgrading them in a rolling or automated fashion, or customizing components in advanced ways.
Typhoon clusters install core components like `CoreDNS`, `kube-proxy`, and a chosen CNI provider (`flannel`, `calico`, or `cilium`) by default. Since v1.30.1, pre-installed components are optional. Other "addon" components like Nginx Ingress, Prometheus, or Grafana may be optionally applied through the component model (after cluster creation).
## Components
Pre-installed by default:
* CoreDNS
* kube-proxy
* CNI provider (set via `var.networking`)
    * flannel
    * Calico
    * Cilium
Addons:
* Nginx [Ingress Controller](ingress.md)
* [Prometheus](prometheus.md)
* [Grafana](grafana.md)
* [fleetlock](fleetlock.md)
## Pre-installed Components
By default, Typhoon clusters install `CoreDNS`, `kube-proxy`, and a chosen CNI provider (`flannel`, `calico`, or `cilium`). Disable any or all of these components using the `components` system.
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"
# Google Cloud
cluster_name = "yavin"
region = "us-central1"
dns_zone = "example.com"
dns_zone_name = "example-zone"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# pre-installed components (defaults shown)
components = {
enable = true
coredns = {
enable = true
}
kube_proxy = {
enable = true
}
# Only the CNI set in var.networking will be installed
flannel = {
enable = true
}
calico = {
enable = true
}
cilium = {
enable = true
}
}
}
```
!!! warning
Disabling pre-installed components is for advanced users who intend to manage these components separately. Without a CNI provider, cluster nodes will be NotReady and wait for the CNI provider to be applied.
## Managing Components
If you choose to manage components yourself, a recommended pattern is to use a separate Terraform workspace per component, like you would any application.
```
mkdir -p infra/components/{coredns,cilium}
tree components/coredns
components/coredns/
├── backend.tf
├── manifests.tf
└── providers.tf
```
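Each component workspace also needs a `backend.tf` to configure where its Terraform state is stored. A minimal sketch, assuming a local backend (any Terraform backend works; the path shown is only illustrative):

```tf
# Store this workspace's state in a local file (swap for a remote backend as needed)
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}
```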
Let's consider managing CoreDNS resources. Configure the `kubernetes` provider to use the kubeconfig credentials of your Typhoon cluster(s) in a `providers.tf` file. Here we show provider blocks for interacting with Typhoon clusters on AWS, Azure, or Google Cloud, assuming each cluster's `kubeconfig-admin` output was written to a local file.
```tf
provider "kubernetes" {
alias = "aws"
config_path = "~/.kube/configs/aws-config"
}
provider "kubernetes" {
alias = "google"
config_path = "~/.kube/configs/google-config"
}
...
```
Typhoon maintains Terraform modules for most addon components. You can reference `main`, a tagged release, a SHA revision, or a custom module of your own. Define the CoreDNS manifests using the `addons/coredns` module in a `manifests.tf` file.
```tf
# CoreDNS manifests for the aws cluster
module "aws" {
source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
providers = {
kubernetes = kubernetes.aws
}
}
# CoreDNS manifests for the google cloud cluster
module "aws" {
source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
providers = {
kubernetes = kubernetes.google
}
}
...
```
Plan and apply the CoreDNS Kubernetes resources to cluster(s).
```
terraform plan
terraform apply
...
module.aws.kubernetes_service_account.coredns: Refreshing state... [id=kube-system/coredns]
module.aws.kubernetes_config_map.coredns: Refreshing state... [id=kube-system/coredns]
module.aws.kubernetes_cluster_role.coredns: Refreshing state... [id=system:coredns]
module.aws.kubernetes_cluster_role_binding.coredns: Refreshing state... [id=system:coredns]
module.aws.kubernetes_service.coredns: Refreshing state... [id=kube-system/coredns]
...
```


@ -1,13 +1,11 @@
# ARM64
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
Typhoon supports Kubernetes clusters with ARM64 controller or worker nodes on several platforms:
* AWS with Fedora CoreOS or Flatcar Linux
* Azure with Flatcar Linux
## Cluster
## AWS
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
@ -15,24 +13,23 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
controller_type = "t4g.small"
controller_arch = "arm64"
worker_count = 2
worker_type = "t4g.small"
worker_arch = "arm64"
worker_price = "0.0168"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
arch = "arm64"
networking = "cilium"
worker_count = 2
worker_price = "0.0168"
controller_type = "t4g.small"
worker_type = "t4g.small"
}
```
@ -40,24 +37,23 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
controller_type = "t4g.small"
controller_arch = "arm64"
worker_count = 2
worker_type = "t4g.small"
worker_arch = "arm64"
worker_price = "0.0168"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
arch = "arm64"
networking = "cilium"
worker_count = 2
worker_price = "0.0168"
controller_type = "t4g.small"
worker_type = "t4g.small"
}
```
@ -66,118 +62,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.28.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.28.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.28.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```
## Hybrid
Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [worker pool](worker-pools.md#aws) with ARM64 workers. Optional taints are added to aid in scheduling.
=== "FCOS Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.28.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
networking = "cilium"
worker_count = 2
worker_price = "0.021"
daemonset_tolerations = ["arch"] # important
}
```
=== "Flatcar Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.28.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
networking = "cilium"
worker_count = 2
worker_price = "0.021"
daemonset_tolerations = ["arch"] # important
}
```
=== "FCOS ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.28.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
node_taints = ["arch=arm64:NoSchedule"]
}
```
=== "Flatcar ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.28.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
node_taints = ["arch=arm64:NoSchedule"]
}
```
Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.28.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.28.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.28.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.28.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-21-119 Ready <none> 77s v1.31.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.31.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.31.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```
## Azure
@ -186,22 +73,136 @@ Create a cluster on Azure with ARM64 controller and worker nodes. Container work
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"
# Azure
cluster_name = "ramius"
region = "centralus"
location = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"
# instances
controller_arch = "arm64"
controller_type = "Standard_B2pls_v5"
worker_count = 2
controller_arch = "arm64"
worker_type = "Standard_D2pls_v5"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
arch = "arm64"
controller_type = "Standard_D2pls_v5"
worker_type = "Standard_D2pls_v5"
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```
## Hybrid
Create a hybrid/mixed arch cluster by defining a cluster where [worker pool(s)](worker-pools.md#aws) have a different instance type architecture than controllers or other workers. Taints are added to aid in scheduling.
Here's an AWS example,
=== "FCOS Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
worker_count = 2
worker_arch = "arm64"
worker_type = "t4g.medium"
worker_price = "0.021"
# configuration
daemonset_tolerations = ["arch"] # important
networking = "cilium"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```
=== "Flatcar Cluster"
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# instances
worker_count = 2
worker_arch = "arm64"
worker_type = "t4g.medium"
worker_price = "0.021"
# configuration
daemonset_tolerations = ["arch"] # important
networking = "cilium"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```
=== "FCOS ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# instances
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
node_taints = ["arch=arm64:NoSchedule"]
ssh_authorized_key = var.ssh_authorized_key
}
```
=== "Flatcar ARM64 Workers"
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups
# instances
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
node_taints = ["arch=arm64:NoSchedule"]
ssh_authorized_key = var.ssh_authorized_key
}
```
Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.31.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.31.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.31.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.31.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```


@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"


@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.tempest.vpc_id
@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# AWS
vpc_id = module.tempest.vpc_id
@ -111,14 +111,14 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Azure
region = module.ramius.region
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_id = module.ramius.backend_address_pool_id
location = module.ramius.location
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_ids = module.ramius.backend_address_pool_ids
# configuration
name = "ramius-spot"
@ -127,7 +127,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
# optional
worker_count = 2
vm_type = "Standard_F4"
vm_type = "Standard_D2as_v5"
priority = "Spot"
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
}
@ -137,14 +137,14 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# Azure
region = module.ramius.region
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_id = module.ramius.backend_address_pool_id
location = module.ramius.location
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_ids = module.ramius.backend_address_pool_ids
# configuration
name = "ramius-spot"
@ -153,7 +153,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
# optional
worker_count = 2
vm_type = "Standard_F4"
vm_type = "Standard_D2as_v5"
priority = "Spot"
os_image = "flatcar-beta"
}
@ -180,7 +180,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
| resource_group_name | Must be set to `resource_group_name` output by cluster | module.cluster.resource_group_name |
| subnet_id | Must be set to `subnet_id` output by cluster | module.cluster.subnet_id |
| security_group_id | Must be set to `security_group_id` output by cluster | module.cluster.security_group_id |
| backend_address_pool_id | Must be set to `backend_address_pool_id` output by cluster | module.cluster.backend_address_pool_id |
| backend_address_pool_ids | Must be set to `backend_address_pool_ids` output by cluster | module.cluster.backend_address_pool_ids |
| kubeconfig | Must be set to `kubeconfig` output by cluster | module.cluster.kubeconfig |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-ed25519 AAAAB3NZ..." |
@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"
# Google Cloud
region = "europe-west2"
@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.31.3"
# Google Cloud
region = "europe-west2"
@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.28.3
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.28.3
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.28.3
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.28.3
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.28.3
yavin-controller-0.c.example-com.internal Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.31.3
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.31.3
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.31.3
```
### Variables


@ -10,9 +10,9 @@ A load balancer distributes IPv4 TCP/6443 traffic across a backend address pool
### HTTP/HTTPS Ingress
A load balancer distributes IPv4 TCP/80 and TCP/443 traffic across a backend address pool of workers with a healthy Ingress controller.
An Azure Load Balancer distributes IPv4/IPv6 TCP/80 and TCP/443 traffic across backend address pools of workers with a healthy Ingress controller.
The Azure LB IPv4 address is output as `ingress_static_ipv4` for use in DNS A records. See [Ingress on Azure](/addons/ingress/#azure).
The load balancer addresses are output as `ingress_static_ipv4` and `ingress_static_ipv6` for use in DNS A and AAAA records. See [Ingress on Azure](/addons/ingress/#azure).
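For example, if the DNS zone is hosted in Azure DNS, a sketch using the standard `azurerm` provider record resources (the zone name and resource group shown are assumptions):

```tf
# A (IPv4) record pointing at the Ingress controller load balancer
resource "azurerm_dns_a_record" "app-ipv4" {
  resource_group_name = "example-group"
  zone_name           = "azure.example.com"
  name                = "app"
  ttl                 = 300
  records             = [module.ramius.ingress_static_ipv4]
}

# AAAA (IPv6) record pointing at the IPv6 frontend address
resource "azurerm_dns_aaaa_record" "app-ipv6" {
  resource_group_name = "example-group"
  zone_name           = "azure.example.com"
  name                = "app"
  ttl                 = 300
  records             = [module.ramius.ingress_static_ipv6]
}
```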
### TCP/UDP Services
@ -21,27 +21,25 @@ Load balance TCP/UDP applications by adding rules to the Azure LB (output). A ru
```tf
# Forward traffic to the worker backend address pool
resource "azurerm_lb_rule" "some-app-tcp" {
resource_group_name = module.ramius.resource_group_name
name = "some-app-tcp"
resource_group_name = module.ramius.resource_group_name
loadbalancer_id = module.ramius.loadbalancer_id
frontend_ip_configuration_name = "ingress"
frontend_ip_configuration_name = "ingress-ipv4"
protocol = "Tcp"
frontend_port = 3333
backend_port = 30333
backend_address_pool_id = module.ramius.backend_address_pool_id
probe_id = azurerm_lb_probe.some-app.id
protocol = "Tcp"
frontend_port = 3333
backend_port = 30333
backend_address_pool_ids = module.ramius.backend_address_pool_ids.ipv4
probe_id = azurerm_lb_probe.some-app.id
}
# Health check some-app
resource "azurerm_lb_probe" "some-app" {
name = "some-app"
resource_group_name = module.ramius.resource_group_name
name = "some-app"
loadbalancer_id = module.ramius.loadbalancer_id
protocol = "Tcp"
port = 30333
loadbalancer_id = module.ramius.loadbalancer_id
protocol = "Tcp"
port = 30333
}
```
@ -51,9 +49,8 @@ Add firewall rules to the worker security group.
```tf
resource "azurerm_network_security_rule" "some-app" {
resource_group_name = "${module.ramius.resource_group_name}"
name = "some-app"
resource_group_name = module.ramius.resource_group_name
network_security_group_name = module.ramius.worker_security_group_name
priority = "3001"
access = "Allow"
@ -62,7 +59,7 @@ resource "azurerm_network_security_rule" "some-app" {
source_port_range = "*"
destination_port_range = "30333"
source_address_prefix = "*"
destination_address_prefixes = module.ramius.worker_address_prefixes
destination_address_prefixes = module.ramius.worker_address_prefixes.ipv4
}
```
@ -72,6 +69,6 @@ Azure does not provide public IPv6 addresses at the standard SKU.
| IPv6 Feature | Supported |
|-------------------------|-----------|
| Node IPv6 address | No |
| Node Outbound IPv6 | No |
| Kubernetes Ingress IPv6 | No |
| Node IPv6 address | Yes |
| Node Outbound IPv6 | Yes |
| Kubernetes Ingress IPv6 | Yes |


@ -16,8 +16,8 @@ Together, they diversify Typhoon to support a range of container technologies.
| Property | Flatcar Linux | Fedora CoreOS |
|-------------------|---------------|---------------|
| Kernel | ~5.10.x | ~5.16.x |
| systemd | 249 | 249 |
| Kernel | ~5.15.x | ~6.5.x |
| systemd | 252 | 254 |
| Username | core | core |
| Ignition system | Ignition v3.x spec | Ignition v3.x spec |
| storage driver | overlay2 (extfs) | overlay2 (xfs) |


@ -1,10 +1,10 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -72,19 +72,19 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "tempest"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
# instances
worker_count = 2
worker_type = "t3.small"
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```
@ -134,8 +134,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
file_permission = "0600"
}
```
@ -145,9 +146,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.28.3
ip-10-0-26-65 Ready <none> 10m v1.28.3
ip-10-0-41-21 Ready <none> 10m v1.28.3
ip-10-0-3-155 Ready <none> 10m v1.31.3
ip-10-0-26-65 Ready <none> 10m v1.31.3
ip-10-0-41-21 Ready <none> 10m v1.31.3
```
List the pods.
@ -155,9 +156,9 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system cilium-1m5bf 1/1 Running 0 34m
kube-system cilium-7jmr1 1/1 Running 0 34m
kube-system cilium-bknc8 1/1 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
@ -206,16 +207,21 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| os_stream | Fedora CoreOS stream for instances | "stable" | "testing", "next" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_count | Number of workers | 1 | 3 |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
| disk_size | Size of the EBS volume in GB | 30 | 100 |
| disk_type | Type of the EBS volume | "gp3" | standard, gp2, gp3, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_disk_type | Type of EBS volume | gp3 | io1 |
| worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
@ -228,7 +234,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
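For example, a sketch based on the `tempest` module above with the MTU override added (all other values unchanged):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKEC0"

  # instances
  worker_count = 2
  worker_type  = "t3.small"

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # optional: jumbo frames for better pod-to-pod bandwidth
  network_mtu = 8981
}
```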
@ -236,4 +242,3 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
#### Spot
Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.


@ -1,10 +1,10 @@
# Azure
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on Azure with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -67,15 +67,15 @@ Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure al
[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image, decompress it, and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).
```
xz -d fedora-coreos-36.20220716.3.1-azure.x86_64.vhd.xz
xz -d fedora-coreos-40.20240616.3.0-azure.x86_64.vhd.xz
```
Create an Azure disk (note disk ID) and create an Azure image from it (note image ID).
```
az disk create --name fedora-coreos-36.20220716.3.1 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-36.20220716.3.1-azure.x86_64.vhd
az disk create --name fedora-coreos-40.20240616.3.0 -g GROUP --source https://BUCKET.blob.core.windows.net/images/fedora-coreos-40.20240616.3.0-azure.x86_64.vhd
az image create --name fedora-coreos-36.20220716.3.1 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-36.20220716.3.1
az image create --name fedora-coreos-40.20240616.3.0 -g GROUP --os-type linux --source /subscriptions/some/path/Microsoft.Compute/disks/fedora-coreos-40.20240616.3.0
```
Set the [os_image](#variables) in the next step.
@ -86,21 +86,23 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"
# Azure
cluster_name = "ramius"
region = "centralus"
location = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"
network_cidr = {
ipv4 = ["10.0.0.0/20"]
}
# instances
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
worker_count = 2
# configuration
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```
@ -150,8 +152,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-ramius" {
content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config"
content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config"
file_permission = "0600"
}
```
@ -161,9 +164,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.28.3
ramius-worker-000001 Ready <none> 25m v1.28.3
ramius-worker-000002 Ready <none> 24m v1.28.3
ramius-controller-0 Ready <none> 24m v1.31.3
ramius-worker-000001 Ready <none> 25m v1.31.3
ramius-worker-000002 Ready <none> 24m v1.31.3
```
List the pods.
@ -173,9 +176,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m
kube-system cilium-1m5bf 1/1 Running 0 26m
kube-system cilium-7jmr1 1/1 Running 0 26m
kube-system cilium-bknc8 1/1 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m
@ -197,14 +200,14 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/f
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "ramius" |
| region | Azure region | "centralus" |
| location | Azure location | "centralus" |
| dns_zone | Azure DNS zone | "azure.example.com" |
| dns_zone_group | Resource group where the Azure DNS zone resides | "global" |
| os_image | Fedora CoreOS image for instances | "/subscriptions/..../custom-image" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-ed25519 AAAAB3NZ..." |
!!! tip
Regions are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.
Locations are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.
#### DNS Zone
@ -238,24 +241,25 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Butane snippets | [] | [example](/advanced/customization/#usage) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| network_cidr | Virtual network CIDR ranges | { ipv4 = ["10.0.0.0/16"], ipv6 = [ULA, ...] } | { ipv4 = ["10.0.0.0/20"] } |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.
!!! warning
Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `host_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc).
!!! warning
Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.
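For illustration, a sketch of two clusters with non-overlapping virtual networks using the newer `network_cidr` variable (other required variables omitted for brevity):

```tf
module "cluster1" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... other variables as in the example above
  network_cidr = {
    ipv4 = ["10.0.0.0/20"]
  }
}

module "cluster2" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"
  # ... other variables as in the example above
  network_cidr = {
    ipv4 = ["10.0.16.0/20"]
  }
}
```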


@ -1,10 +1,10 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.28.3 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.31.3"
# bare-metal
cluster_name = "mercury"
@ -191,7 +191,7 @@ Workers with similar features can be defined inline using the `workers` field as
```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"
# bare-metal
cluster_name = "mercury"
@ -302,8 +302,9 @@ systemd[1]: Started Kubernetes control plane.
```
resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
file_permission = "0600"
}
```
@ -313,9 +314,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.28.3
node2.example.com Ready <none> 10m v1.28.3
node3.example.com Ready <none> 10m v1.28.3
node1.example.com Ready <none> 10m v1.31.3
node2.example.com Ready <none> 10m v1.31.3
node3.example.com Ready <none> 10m v1.31.3
```
List the pods.
@ -323,9 +324,10 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6qp7f 2/2 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m
kube-system cilium-6qp7f 1/1 Running 1 11m
kube-system cilium-gnjrm 1/1 Running 0 11m
kube-system cilium-llbgt 1/1 Running 0 11m
kube-system cilium-operator-68d778b448-g744f 1/1 Running 0 11m
kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m
@ -372,4 +374,3 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| kernel_args | Additional kernel args to provide at PXE boot | [] | ["kvm-intel.nested=1"] |
| worker_node_labels | Map from worker name to list of initial node labels | {} | {"node2" = ["role=special"]} |
| worker_node_taints | Map from worker name to list of initial node taints | {} | {"node2" = ["role=special:NoSchedule"]} |


@ -1,10 +1,10 @@
# DigitalOcean
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on DigitalOcean with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -81,19 +81,19 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.31.3"
# Digital Ocean
cluster_name = "nemo"
region = "nyc3"
dns_zone = "digital-ocean.example.com"
# configuration
os_image = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
# instances
os_image = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id
worker_count = 2
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
}
```
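The `os_image` above references a custom image data source defined earlier in the full tutorial; a minimal sketch (the uploaded image name is an assumption):

```tf
# Custom Fedora CoreOS image previously uploaded to DigitalOcean
data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {
  name = "fedora-coreos-31.20200323.3.2-digitalocean.x86_64.qcow2.gz"
}
```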
@ -144,8 +144,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
file_permission = "0600"
}
```
@ -155,9 +156,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.28.3
10.132.115.81 Ready <none> 10m v1.28.3
10.132.124.107 Ready <none> 10m v1.28.3
10.132.110.130 Ready <none> 10m v1.31.3
10.132.115.81 Ready <none> 10m v1.31.3
10.132.124.107 Ready <none> 10m v1.31.3
```
List the pods.
@ -166,9 +167,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system cilium-1m5bf 1/1 Running 0 11m
kube-system cilium-7jmr1 1/1 Running 0 11m
kube-system cilium-bknc8 1/1 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m
@ -248,4 +249,3 @@ Check the list of valid [droplet types](https://developers.digitalocean.com/docu
!!! warning
Do not choose a `controller_type` with less than 2GB of memory. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.


@ -1,10 +1,10 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on Google Compute Engine with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -81,11 +81,11 @@ module "yavin" {
dns_zone = "example.com"
dns_zone_name = "example-zone"
# instances
worker_count = 2
# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional
worker_count = 2
}
```
@ -136,8 +136,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
file_permission = "0600"
}
```
@ -147,9 +148,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.28.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.28.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.28.3
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
```
List the pods.
@ -157,9 +158,9 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m
kube-system cilium-1cs8z 1/1 Running 0 6m
kube-system cilium-d1l5b 1/1 Running 0 6m
kube-system cilium-sp9ps 1/1 Running 0 6m
kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-controller-0 1/1 Running 0 6m
@ -209,25 +210,27 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
| Name | Description | Default | Example |
|:---------------------|:---------------------------------------------------------------------------|:----------------|:-------------------------------------|
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| controller_disk_type | Controller disk type | "pd-standard" | "pd-ssd" |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_disk_type | Worker disk type | "pd-standard" | "pd-ssd" |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
#### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
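A sketch based on the `yavin` module above, with preemptible workers enabled (all other values unchanged):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"

  # Google Cloud
  cluster_name  = "yavin"
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # instances
  worker_count = 2

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # optional: run workers on cheaper preemptible instances
  worker_preemptible = true
}
```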


@ -1,10 +1,10 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on AWS with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -72,19 +72,19 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
# AWS
cluster_name = "tempest"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
# instances
worker_count = 2
worker_type = "t3.small"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```
@ -134,8 +134,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-tempest" {
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
content = module.tempest.kubeconfig-admin
filename = "/home/user/.kube/configs/tempest-config"
file_permission = "0600"
}
```
@ -145,9 +146,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.28.3
ip-10-0-26-65 Ready <none> 10m v1.28.3
ip-10-0-41-21 Ready <none> 10m v1.28.3
ip-10-0-3-155 Ready <none> 10m v1.31.3
ip-10-0-26-65 Ready <none> 10m v1.31.3
ip-10-0-41-21 Ready <none> 10m v1.31.3
```
List the pods.
@ -155,9 +156,9 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system cilium-1m5bf 1/1 Running 0 34m
kube-system cilium-7jmr1 1/1 Running 0 34m
kube-system cilium-bknc8 1/1 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
@ -206,16 +207,19 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the EBS volume in GB | 30 | 100 |
| disk_type | Type of the EBS volume | "gp3" | standard, gp2, gp3, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_disk_type | Type of EBS volume | gp3 | io1 |
| worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0/null | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
@ -228,7 +232,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
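A sketch of enabling jumbo frames on the `tempest` module from this tutorial (required variables omitted):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"

  # ...required AWS and configuration variables shown above...

  # raise the CNI MTU for instance types that support jumbo frames
  network_mtu = 8981
}
```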
@ -236,4 +240,3 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
#### Spot
Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.
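A sketch of the same `tempest` module using spot workers (required variables omitted):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"

  # ...required AWS and configuration variables shown above...

  # bid up to $0.10/hour for spot instance workers instead of on-demand
  worker_price = "0.10"
}
```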

View File

@ -1,10 +1,10 @@
# Azure
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on Azure with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -75,20 +75,22 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"
# Azure
cluster_name = "ramius"
region = "centralus"
location = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"
network_cidr = {
ipv4 = ["10.0.0.0/20"]
}
# instances
worker_count = 2
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```
@ -138,8 +140,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-ramius" {
content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config"
content = module.ramius.kubeconfig-admin
filename = "/home/user/.kube/configs/ramius-config"
file_permission = "0600"
}
```
@ -149,9 +152,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.28.3
ramius-worker-000001 Ready <none> 25m v1.28.3
ramius-worker-000002 Ready <none> 24m v1.28.3
ramius-controller-0 Ready <none> 24m v1.31.3
ramius-worker-000001 Ready <none> 25m v1.31.3
ramius-worker-000002 Ready <none> 24m v1.31.3
```
List the pods.
@ -161,9 +164,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m
kube-system cilium-1m5bf 1/1 Running 0 26m
kube-system cilium-7jmr1 1/1 Running 0 26m
kube-system cilium-bknc8 1/1 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m
@ -185,13 +188,13 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/f
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "ramius" |
| region | Azure region | "centralus" |
| location | Azure location | "centralus" |
| dns_zone | Azure DNS zone | "azure.example.com" |
| dns_zone_group | Resource group where the Azure DNS zone resides | "global" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
!!! tip
Regions are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.
Locations are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.
#### DNS Zone
@ -224,26 +227,27 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| network_cidr | Virtual network CIDR ranges | { ipv4 = ["10.0.0.0/16"], ipv6 = [ULA, ...] } | { ipv4 = ["10.0.0.0/20"] } |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.
!!! warning
Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `network_cidr` must be non-overlapping (e.g. `10.0.0.0/20` for the 1st cluster, `10.0.16.0/20` for the 2nd cluster, etc).
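For example, a sketch of two Azure clusters with non-overlapping virtual network CIDRs (the second module name `ramius2` is illustrative; other variables omitted):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"

  # ...other variables shown above...
  network_cidr = {
    ipv4 = ["10.0.0.0/20"]
  }
}

module "ramius2" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"

  # ...other variables...
  network_cidr = {
    ipv4 = ["10.0.16.0/20"]
  }
}
```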
!!! warning
Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.

View File

@ -1,10 +1,10 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.28.3 cluster on bare-metal with Flatcar Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns` while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns` while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.31.3"
# bare-metal
cluster_name = "mercury"
@ -194,7 +194,7 @@ Workers with similar features can be defined inline using the `workers` field as
```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"
# bare-metal
cluster_name = "mercury"
@ -312,8 +312,9 @@ systemd[1]: Started Kubernetes control plane.
```
resource "local_file" "kubeconfig-mercury" {
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
content = module.mercury.kubeconfig-admin
filename = "/home/user/.kube/configs/mercury-config"
file_permission = "0600"
}
```
@ -323,9 +324,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.28.3
node2.example.com Ready <none> 10m v1.28.3
node3.example.com Ready <none> 10m v1.28.3
node1.example.com Ready <none> 10m v1.31.3
node2.example.com Ready <none> 10m v1.31.3
node3.example.com Ready <none> 10m v1.31.3
```
List the pods.
@ -333,9 +334,10 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6qp7f 2/2 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m
kube-system cilium-6qp7f 1/1 Running 1 11m
kube-system cilium-gnjrm 1/1 Running 0 11m
kube-system cilium-llbgt 1/1 Running 0 11m
kube-system cilium-operator-68d778b448-g744f 1/1 Running 0 11m
kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-node1.example.com 1/1 Running 0 11m

View File

@ -1,10 +1,10 @@
# DigitalOcean
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on DigitalOcean with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -81,19 +81,19 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern
```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.31.3"
# Digital Ocean
cluster_name = "nemo"
region = "nyc3"
dns_zone = "digital-ocean.example.com"
# configuration
os_image = data.digitalocean_image.flatcar-stable-2303-4-0.id
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
# instances
os_image = data.digitalocean_image.flatcar-stable-2303-4-0.id
worker_count = 2
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
}
```
@ -144,8 +144,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
file_permission = "0600"
}
```
@ -155,9 +156,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.28.3
10.132.115.81 Ready <none> 10m v1.28.3
10.132.124.107 Ready <none> 10m v1.28.3
10.132.110.130 Ready <none> 10m v1.31.3
10.132.115.81 Ready <none> 10m v1.31.3
10.132.124.107 Ready <none> 10m v1.31.3
```
List the pods.
@ -166,9 +167,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system cilium-1m5bf 1/1 Running 0 11m
kube-system cilium-7jmr1 1/1 Running 0 11m
kube-system cilium-bknc8 1/1 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m

View File

@ -1,10 +1,10 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.28.3 cluster on Google Compute Engine with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -81,11 +81,11 @@ module "yavin" {
dns_zone = "example.com"
dns_zone_name = "example-zone"
# instances
worker_count = 2
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
}
```
@ -136,8 +136,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
file_permission = "0600"
}
```
@ -147,9 +148,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.28.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.28.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.28.3
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
```
List the pods.
@ -157,9 +158,9 @@ List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m
kube-system cilium-1cs8z 1/1 Running 0 6m
kube-system cilium-d1l5b 1/1 Running 0 6m
kube-system cilium-sp9ps 1/1 Running 0 6m
kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-controller-0 1/1 Running 0 6m
@ -209,25 +210,25 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_image | Flatcar Linux image for compute instances | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
| Name | Description | Default | Example |
|:---------------------|:---------------------------------------------------------------------------|:-----------------|:--------------------------------------------|
| os_image | Flatcar Linux image for compute instances | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
#### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.

Binary file not shown (image updated: 39 KiB → 82 KiB)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.28.3 (upstream)
* Kubernetes v1.31.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization
@ -19,7 +19,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Modules
Typhoon provides a Terraform Module for each supported operating system and platform.
Typhoon provides a Terraform Module for defining a Kubernetes cluster on each supported operating system and platform.
Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
@ -50,6 +50,14 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |
| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |
Typhoon also provides Terraform Modules for optionally managing individual components applied onto clusters.
| Name | Terraform Module | Status |
|---------|------------------|--------|
| CoreDNS | [addons/coredns](addons/coredns) | beta |
| Cilium | [addons/cilium](addons/cilium) | beta |
| flannel | [addons/flannel](addons/flannel) | beta |
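A component module might be applied with a plain `module` block, as in this sketch (required inputs differ per component, so consult each module's `variables.tf`):

```tf
module "coredns" {
  source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.31.3"

  # component-specific inputs vary by module; see the module's variables.tf
}
```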
## Documentation
* Architecture [concepts](architecture/concepts.md) and [operating-systems](architecture/operating-systems.md)
@ -62,7 +70,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
# Google Cloud
cluster_name = "yavin"
@ -79,8 +87,9 @@ module "yavin" {
# Obtain cluster kubeconfig
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
file_permission = "0600"
}
```
@ -100,9 +109,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.28.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.28.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.28.3
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.31.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.31.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.31.3
```
List the pods.
@ -149,4 +158,3 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i
<br>
If you'd like your company here, please contact dghubble at psdn.io.

View File

@ -13,12 +13,12 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar
```
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
...
}
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.28.3"
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.31.3"
...
}
```
@ -192,7 +192,7 @@ Applying edits to most worker fields will start an instance refresh:
However, changing `os_stream`/`os_channel` or new AMIs becoming available will NOT change the launch configuration or trigger an Instance Refresh. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new AMIs.
!!! note
Before Typhoon v1.28.3, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
Before Typhoon v1.31.3, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
### Google Cloud
@ -233,7 +233,7 @@ Applying edits to most worker fields will start an instance refresh:
However, changing `os_stream`/`os_channel` or new compute images becoming available will NOT change the launch template or update instances. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new images.
!!! note
Before Typhoon v1.28.3, worker nodes only used new launch templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
Before Typhoon v1.31.3, worker nodes only used new launch templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
## Upgrade poseidon/ct