Allow custom initial node taints on worker pool nodes

* Add `node_taints` variable to worker modules to set custom
initial node taints on cloud platforms that support auto-scaling
worker pools of heterogeneous nodes (i.e. AWS, Azure, GCP)
* Worker pools could already use custom `node_labels` to allow workloads
to select among differentiated nodes, while custom `node_taints`
allows a worker pool's nodes to be tainted as special to prevent
scheduling, except by workloads that explicitly tolerate the
taint
* Expose `daemonset_tolerations` in AWS, Azure, and GCP kubernetes
cluster modules, to determine whether `kube-system` components
should tolerate the custom taint (advanced use covered in docs)

Rel: #550, #663
Closes #429
Author: Dalton Hubble
Date: 2021-04-11 12:08:56 -07:00
Parent: d73621c838
Commit: 084e8bea49

31 changed files with 246 additions and 11 deletions


@@ -21,7 +21,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.19.4"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.21.0"
# AWS
cluster_name = "gravitas"
@@ -47,9 +47,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-ip-10-0-12-178 Ready <none> 101s v1.19.4 10.0.12.178 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
-ip-10-0-18-93 Ready <none> 102s v1.19.4 10.0.18.93 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
-ip-10-0-90-10 Ready <none> 104s v1.19.4 10.0.90.10 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
+ip-10-0-12-178 Ready <none> 101s v1.21.0 10.0.12.178 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
+ip-10-0-18-93 Ready <none> 102s v1.21.0 10.0.18.93 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
+ip-10-0-90-10 Ready <none> 104s v1.21.0 10.0.90.10 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
```
## Hybrid
@@ -60,7 +60,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.19.4"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.21.0"
# AWS
cluster_name = "gravitas"
@@ -83,7 +83,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.19.4"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.21.0"
# AWS
vpc_id = module.gravitas.vpc_id
@@ -108,9 +108,9 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-ip-10-0-14-73 Ready <none> 116s v1.19.4 10.0.14.73 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
-ip-10-0-17-167 Ready <none> 104s v1.19.4 10.0.17.167 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
-ip-10-0-47-166 Ready <none> 110s v1.19.4 10.0.47.166 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
-ip-10-0-7-237 Ready <none> 111s v1.19.4 10.0.7.237 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
+ip-10-0-14-73 Ready <none> 116s v1.21.0 10.0.14.73 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
+ip-10-0-17-167 Ready <none> 104s v1.21.0 10.0.17.167 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
+ip-10-0-47-166 Ready <none> 110s v1.21.0 10.0.47.166 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
+ip-10-0-7-237 Ready <none> 111s v1.21.0 10.0.7.237 <none> Fedora CoreOS 32.20201018.3.0 5.8.15-201.fc32.x86_64 docker://19.3.11
```

docs/advanced/nodes.md (new file, 134 lines)

@@ -0,0 +1,134 @@
# Nodes
Typhoon clusters consist of controller node(s) and a (default) set of worker nodes.
## Overview
Typhoon nodes use the standard set of Kubernetes node labels.
```yaml
Labels: kubernetes.io/arch=amd64
kubernetes.io/hostname=node-name
kubernetes.io/os=linux
```
Controller node(s) are labeled to allow node selection (for rare components that run on controllers) and tainted to prevent ordinary workloads from running on controllers.
```yaml
Labels: node.kubernetes.io/controller=true
Taints: node-role.kubernetes.io/controller:NoSchedule
```
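For the rare component that must run on controllers, pair a node selector on the controller label with a toleration for the controller taint. A minimal sketch (the DaemonSet name, namespace, and image are illustrative, not part of Typhoon):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: controller-agent
  namespace: monitoring
spec:
  selector:
    matchLabels:
      name: controller-agent
  template:
    metadata:
      labels:
        name: controller-agent
    spec:
      # run only on controller nodes
      nodeSelector:
        node.kubernetes.io/controller: "true"
      # tolerate the controller taint shown above
      tolerations:
        - key: node-role.kubernetes.io/controller
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: quay.io/prometheus/node-exporter:v1.1.2
```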
Worker nodes are labeled to allow node selection and are left untainted. Workloads schedule on worker nodes by default, barring any contraindications.
```yaml
Labels: node.kubernetes.io/node=
Taints: <none>
```
On auto-scaling cloud platforms, you may add [worker pools](/advanced/worker-pools) of nodes with their own labels and taints. On platforms with heterogeneous machines, like bare-metal, you may manage node labels and taints per node.
## Node Labels
Add custom initial worker node labels to default workers or worker pool nodes to allow workloads to select among nodes that differ.
=== "Cluster"
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.21.0"
# Google Cloud
cluster_name = "yavin"
region = "us-central1"
dns_zone = "example.com"
dns_zone_name = "example-zone"
# configuration
ssh_authorized_key = local.ssh_key
# optional
worker_count = 2
worker_node_labels = ["pool=default"]
}
```
=== "Worker Pool"
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.21.0"
# Google Cloud
cluster_name = "yavin"
region = "europe-west2"
network = module.yavin.network_name
# configuration
name = "yavin-16x"
kubeconfig = module.yavin.kubeconfig
ssh_authorized_key = local.ssh_key
# optional
worker_count = 1
machine_type = "n1-standard-16"
node_labels = ["pool=big"]
}
```
In the example above, the two default workers would be labeled `pool: default` and the additional worker would be labeled `pool: big`.
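For example, a workload can require placement on the labeled pool with a node selector. A minimal sketch (the Deployment name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crunch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: crunch
  template:
    metadata:
      labels:
        app: crunch
    spec:
      # schedule only onto nodes in the worker pool labeled pool=big
      nodeSelector:
        pool: big
      containers:
        - name: crunch
          image: docker.io/library/alpine:3.13
          command: ["sleep", "infinity"]
```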
## Node Taints
Add custom initial taints on worker pool nodes to indicate a node is unique and should only schedule workloads that explicitly tolerate a given taint key.
!!! warning
Since taints prevent workloads from scheduling onto a node, you must decide whether `kube-system` DaemonSets (e.g. flannel, Calico, Cilium) should tolerate your custom taint by setting `daemonset_tolerations`. If you don't list your custom taint(s), important components won't run on those nodes.
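For instance, with `daemonset_tolerations = ["role"]`, the intent is that `kube-system` DaemonSets tolerate any taint with that key, along these lines (a sketch of the effect, not the module's exact rendering):

```yaml
tolerations:
  # tolerate any value and effect of the custom taint key "role"
  - key: role
    operator: Exists
```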
=== "Cluster"
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.21.0"
# Google Cloud
cluster_name = "yavin"
region = "us-central1"
dns_zone = "example.com"
dns_zone_name = "example-zone"
# configuration
ssh_authorized_key = local.ssh_key
# optional
worker_count = 2
daemonset_tolerations = ["role"]
}
```
=== "Worker Pool"
```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.21.0"
# Google Cloud
cluster_name = "yavin"
region = "europe-west2"
network = module.yavin.network_name
# configuration
name = "yavin-16x"
kubeconfig = module.yavin.kubeconfig
ssh_authorized_key = local.ssh_key
# optional
worker_count = 1
accelerator_type = "nvidia-tesla-p100"
accelerator_count = 1
node_taints = ["role=gpu:NoSchedule"]
}
```
In the example above, the additional worker would be tainted with `role=gpu:NoSchedule` to prevent workloads from scheduling there, while `kube-system` components like flannel, Calico, or Cilium would tolerate that custom taint and still run.
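A workload that should run on the tainted pool must tolerate the custom taint explicitly; pairing the toleration with a node selector (via a matching `node_labels` entry) also keeps it off other pools. A minimal sketch (the Deployment name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-job
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-job
  template:
    metadata:
      labels:
        app: gpu-job
    spec:
      # explicitly tolerate the role=gpu:NoSchedule taint set via node_taints
      tolerations:
        - key: role
          operator: Equal
          value: gpu
          effect: NoSchedule
      containers:
        - name: job
          image: docker.io/nvidia/cuda:11.0-base
```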


@@ -99,6 +99,7 @@ The AWS internal `workers` module supports a number of [variables](https://githu
| snippets | Fedora CoreOS or Container Linux Config snippets | [] | [examples](/advanced/customization/) |
| service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
| node_labels | List of initial node labels | [] | ["worker-pool=foo"] |
+| node_taints | List of initial node taints | [] | ["role=gpu:NoSchedule"] |
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/) or per-region and per-type [spot prices](https://aws.amazon.com/ec2/spot/pricing/).
@@ -194,6 +195,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| node_labels | List of initial node labels | [] | ["worker-pool=foo"] |
+| node_taints | List of initial node taints | [] | ["role=gpu:NoSchedule"] |
Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.
@@ -297,6 +299,7 @@ Check the list of regions [docs](https://cloud.google.com/compute/docs/regions-z
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
| service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
| node_labels | List of initial node labels | [] | ["worker-pool=foo"] |
+| node_taints | List of initial node taints | [] | ["role=gpu:NoSchedule"] |
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).