Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 17:51:33 +02:00)

Compare commits (109 commits)
Commits compared (newest first): a205922d06, b5ba65d4c2, e696fd2b22, 3ff9b792ca, c4f1d2d1c8, a1d7b5cd1e, e7591030e0, f2bf5ac3fb, 9cd1c5b17a, d6f739dedb, 6bb7a36cf2, 0afe9d65ed, 11e540000f, d6cbcf9f96, ce52a2cd35, bd9a908125, 0dc8740c77, a9b12b6bca, d419c58ab1, da76d32aba, f0e5982b3c, a8990b3045, f597f7cda3, b4857c123e, 50bffaae8f, a193762eed, adf33df99b, 29a005b7b4, ccebc2313d, 1f86592d13, 6a521257d0, 26dbc7e91d, de668e696a, d3b2217444, 937acc4b5a, b0a6dc8115, 420ff6ff04, 9b733d79c7, 35a9e22b1f, 0f38a6d405, a535581ef2, 08d13e7215, 3ff2d38fa5, d6d8eb8d79, f04e1d25a8, b68f8bb2a9, 651151805d, 8d2c8b8db6, 675ac63159, b4c8b1729c, e82241169a, ffe4929ff6, 88b3925318, 29876dc85a, 7e29e35457, 3ee462a24c, f833b7205d, 558e293f78, 90782ea820, 8dc7cc614c, 74d4d56dbd, 5abe84b520, 951209d113, 09751cc0e8, c14300f0be, 37de9ca2ae, 1786e34f33, 5f612c82e2, e60a321185, 5ad74883fe, 4ad473cd3c, 393a38deff, 76d92e9c2d, 275fc0f9e8, 3fb59a3289, a31dbceac6, 1dcf56127b, bf06412dfd, 505818b7d5, 0d27811265, c13d060b38, e87d5aabc3, 760b4cd5ee, fcd8ff2b17, ef2d2af0c7, 8e2027ed2d, 52427a4271, 20b76d6e00, 6facfca4ed, ed8c6a5aeb, 003af72cc8, b321b90a4f, e5d0e2d48b, 679f8b878f, 87a8278c9d, 93b7f2554e, 62d47ad3f0, 6eb7861f96, ffbacbccf7, 16c2785878, 4a469513dd, 47d8431fe0, 256b87812e, ca6eef365f, c6794f1007, de6f27e119, 6a9c32d3a9, a7e9e423f5, 83236eab57
.gitignore (vendored, new file, 1 addition)

@@ -0,0 +1 @@
site/
CHANGES.md (192 changes)

@@ -4,6 +4,198 @@ Notable changes between versions.
## Latest

## v1.26.1

* Kubernetes [v1.26.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261)
* Update etcd from v3.5.6 to [v3.5.7](https://github.com/etcd-io/etcd/releases/tag/v3.5.7)
* Update Cilium from v1.12.4 to [v1.12.5](https://github.com/cilium/cilium/releases/tag/v1.12.5)
* Update Calico from v3.24.5 to [v3.25.0](https://github.com/projectcalico/calico/releases/tag/v3.25.0)
* Update CoreDNS from v1.9.3 to [v1.9.4](https://github.com/poseidon/terraform-render-bootstrap/pull/341)

## v1.26.0

* Kubernetes [v1.26.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260)
* Update etcd from v3.5.5 to [v3.5.6](https://github.com/etcd-io/etcd/releases/tag/v3.5.6)
* Update Cilium from v1.12.3 to [v1.12.4](https://github.com/cilium/cilium/releases/tag/v1.12.4)
* Update flannel from v0.15.1 to [v0.20.2](https://github.com/flannel-io/flannel/releases/tag/v0.20.2)
* Reminder: Modules are no longer published to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) ([#1282](https://github.com/poseidon/typhoon/pull/1282))
  * See [#1282](https://github.com/poseidon/typhoon/pull/1282) and [v1.25.4](https://github.com/poseidon/typhoon/releases/tag/v1.25.4) for details

### AWS

* Migrate AWS launch configurations to launch templates ([#1275](https://github.com/poseidon/typhoon/pull/1275)) (see the sketch after this list)
  * Starting Dec 31, 2022, AWS won't add new instance types/families to launch configurations
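
The migration is sketched below in plain Terraform (resource names, AMI, and instance type are illustrative, not Typhoon's exact code):

```tf
# Deprecated: launch configurations stop receiving new instance types/families
resource "aws_launch_configuration" "worker" {
  image_id      = "ami-0123456789abcdef0" # illustrative AMI
  instance_type = "t3.small"
}

# Preferred: launch templates are versioned and continue to support new types
resource "aws_launch_template" "worker" {
  name_prefix   = "example-worker-"
  image_id      = "ami-0123456789abcdef0" # illustrative AMI
  instance_type = "t3.small"
}
```

Autoscaling groups then reference a launch template by id and version rather than by launch configuration name, as the workers.tf diff later on this page shows.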

### Addons

* Update ingress-nginx from v1.3.1 to [v1.5.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.5.1)
* Update Prometheus from v2.40.1 to [v2.40.5](https://github.com/prometheus/prometheus/releases/tag/v2.40.5)
* Update node-exporter from v1.3.1 to [v1.5.0](https://github.com/prometheus/node_exporter/releases/tag/v1.5.0)
* Update kube-state-metrics from v2.6.0 to [v2.7.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.7.0)
* Update Grafana from v9.2.4 to [v9.3.1](https://github.com/grafana/grafana/releases/tag/v9.3.1)

## v1.25.4

* Kubernetes [v1.25.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254)
* Update Calico from v3.24.1 to [v3.24.5](https://github.com/projectcalico/calico/releases/tag/v3.24.5)
* Allow Kubelet kubeconfig to drain nodes, if desired ([#330](https://github.com/poseidon/terraform-render-bootstrap/pull/330))
* Re-enable Kubelet Graceful Node Shutdown ([#1261](https://github.com/poseidon/typhoon/pull/1261))
  * Introduce companion project [poseidon/scuttle](https://github.com/poseidon/scuttle)
* Link to new Mastodon account for release announcements
  * [@typhoon@fosstodon.org](https://fosstodon.org/@typhoon)
  * [@poseidon@fosstodon.org](https://fosstodon.org/@poseidon)
* Deprecate publishing to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) (see the sketch after this list)
  * Typhoon docs have always shown using Git-based module sources, not the Terraform Module Registry
  * Module usage should be `source = "git::https://github.com/poseidon/typhoon/..."`, not `source = "poseidon/kubernetes/..."`
  * Terraform's Module Registry requires subtree mirroring Typhoon to special terraform-platform-kubernetes repos; it only supports release versions (no commit SHAs or forks) and, for historical reasons, only ever contained Flatcar Linux modules (not Fedora CoreOS)
  * Note: this does not affect Terraform Providers like `poseidon/matchbox` or `poseidon/ct`; the registry works well for providers

### Fedora CoreOS

* Remove unused `Wants=network.target` from `etcd-member.service` ([#1254](https://github.com/poseidon/typhoon/pull/1254))

### Cloud

* Remove defunct `delete-node.service` from worker node configurations ([#1256](https://github.com/poseidon/typhoon/pull/1256))

### Addons

* Update Prometheus from v2.39.1 to [v2.40.1](https://github.com/prometheus/prometheus/releases/tag/v2.40.1)
* Update Grafana from v9.1.7 to [v9.2.4](https://github.com/grafana/grafana/releases/tag/v9.2.4)

## v1.25.3

* Kubernetes [v1.25.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` for addons ([#1246](https://github.com/poseidon/typhoon/pull/1246))
* Update Cilium from v1.12.2 to [v1.12.3](https://github.com/cilium/cilium/releases/tag/v1.12.3) ([#1253](https://github.com/poseidon/typhoon/pull/1253))

### Azure

* Change default Azure `worker_type` from [`Standard_DS1_v2`](https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series) to [`Standard_D2as_v5`](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series) ([#1248](https://github.com/poseidon/typhoon/pull/1248)) (see the sketch after this list)
  * Get 2 vCPU, 7 GiB, 12500 Mbps (vs 1 vCPU, 3.5 GiB, 750 Mbps)
  * Small increase in pay-as-you-go price ($53.29 -> $62.78)
  * Small increase in spot price ($5.64/mo -> $7.37/mo)
  * Change from Intel to AMD EPYC (`D2as_v5` is cheaper than `D2s_v5`)
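
Clusters that prefer the old size can still set `worker_type` explicitly; a minimal sketch (module name and ref are illustrative):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.25.3"

  # keep the previous default instead of the new Standard_D2as_v5
  worker_type = "Standard_DS1_v2"

  # ...other required cluster variables...
}
```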

### Flatcar Linux

* Add Flatcar Linux ARM64 support on Azure ([docs](https://typhoon.psdn.io/advanced/arm64/), [#1251](https://github.com/poseidon/typhoon/pull/1251))
* Switch from Azure Hypervisor gen1 to gen2 (**action required**) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
  * Run `az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2`

### Docs

* Remove old docs note about not supporting ARM64 with Calico
  * Typhoon supports ARM64 with `cilium`, `calico`, and `flannel`

### Addons

* Update Prometheus from v2.38.0 to [v2.39.1](https://github.com/prometheus/prometheus/releases/tag/v2.39.1)
* Update Grafana from v9.1.6 to [v9.1.7](https://github.com/grafana/grafana/releases/tag/v9.1.7)

## v1.25.2

Kubernetes v1.25.2 was skipped since there were minimal changes upstream.

## v1.25.1

* Kubernetes [v1.25.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1251)
* Update etcd from v3.5.4 to [v3.5.5](https://github.com/etcd-io/etcd/releases/tag/v3.5.5)
* Update Cilium from v1.12.1 to [v1.12.2](https://github.com/cilium/cilium/releases/tag/v1.12.2)
* Update Calico from v3.23.3 to [v3.24.1](https://github.com/projectcalico/calico/releases/tag/v3.24.1)
* Revert Kubelet Graceful Node Shutdown on worker nodes ([#1227](https://github.com/poseidon/typhoon/pull/1227))
  * Fix issue where non-critical pods are left in Error/Completed state on node shutdown
* Remove feature flag disable workaround for [kubernetes/kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081)
  * Kubernetes [reverted](https://github.com/kubernetes/kubernetes/pull/112078) `LocalStorageCapacityIsolationFSQuotaMonitoring` back to alpha
* Remove workaround for preventing `search .` propagation in [kubernetes/kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135)
  * Upstream Kubernetes [fix](https://github.com/kubernetes/kubernetes/pull/112157)

### Addons

* Update kube-state-metrics from v2.5.0 to [v2.6.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.6.0)
* Update ingress-nginx from v1.3.0 to [v1.3.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.3.1)
* Update Grafana from v9.1.0 to [v9.1.6](https://github.com/grafana/grafana/releases/tag/v9.1.6)

## v1.25.0

* Kubernetes [v1.25.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1250)
* Disable LocalStorageCapacityIsolationFSQuotaMonitoring feature gate ([#1220](https://github.com/poseidon/typhoon/pull/1220), fixes [kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081))
* Add workaround to revert adding "search ." to containers' `/etc/resolv.conf` ([#1224](https://github.com/poseidon/typhoon/pull/1224), fixes [kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135))
* Migrate most Kubelet flags to KubeletConfiguration file ([#1219](https://github.com/poseidon/typhoon/pull/1219))
* Configure Kubelet Graceful Node Shutdown ([#1222](https://github.com/poseidon/typhoon/pull/1222))
  * Allow up to 30s for critical pods to gracefully shut down on node shutdown
  * Allow up to 15s for regular pods to gracefully shut down on node shutdown
  * Mark node NotReady promptly on node shutdown
  * Lengthen systemd inhibitor lock max delay from 5s to 45s

### Fedora CoreOS

* Change Podman `log-driver` from `journald` to `k8s-file` ([#1221](https://github.com/poseidon/typhoon/pull/1221))
  * Fix `etcd-member` and Kubelet systemd service log lines appearing twice in journal logs

## v1.24.4

* Kubernetes [v1.24.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244)
* Update CoreDNS from v1.8.6 to [v1.9.3](https://github.com/poseidon/terraform-render-bootstrap/pull/318)
* Update Cilium from v1.11.7 to [v1.12.1](https://github.com/cilium/cilium/releases/tag/v1.12.1)
* Update Calico from v3.23.1 to [v3.23.3](https://github.com/projectcalico/calico/releases/tag/v3.23.3)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` ([#1206](https://github.com/poseidon/typhoon/pull/1206))
* Remove use of deprecated Terraform [template](https://registry.terraform.io/providers/hashicorp/template) provider ([#1194](https://github.com/poseidon/typhoon/pull/1194))

### Fedora CoreOS

* Remove ineffective `/etc/fedora-coreos/iptables-legacy.stamp` ([#1201](https://github.com/poseidon/typhoon/pull/1201))
  * Typhoon already uses iptables v1.8.7 (nf_tables) since FCOS 36
  * Staying on legacy iptables required a file in `/etc/coreos` instead

### Flatcar Linux

* Migrate Flatcar Linux from Ignition spec v2.3.0 to v3.3.0 ([#1196](https://github.com/poseidon/typhoon/pull/1196)) (**action required**)
  * Flatcar Linux 3185.0.0+ [supports](https://flatcar-linux.org/docs/latest/provisioning/ignition/specification/#ignition-v3) Ignition v3.x specs (which are rendered from Butane Configs, like Fedora CoreOS)
  * `poseidon/ct` v0.11.0 [supports](https://github.com/poseidon/terraform-provider-ct/pull/131) the `flatcar` Butane Config variant
  * Require poseidon/ct v0.11+ and Flatcar Linux 3185.0.0+
  * Please modify any Flatcar Linux snippets to use the [Butane Config](https://coreos.github.io/butane/config-flatcar-v1_0/) format (**action required**)

```yaml
variant: flatcar
version: 1.0.0
...
```

### AWS

* [Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) instances in autoscaling group when launch configuration changes ([#1208](https://github.com/poseidon/typhoon/pull/1208)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**; see the excerpt after this list)
  * Worker launch configuration changes start an autoscaling group instance refresh to replace instances
  * Instance refresh creates surge instances, waits for a warm-up period, then deletes old instances
  * Changing `worker_type`, `disk_*`, `worker_price`, `worker_target_groups`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New AMIs or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new launch configurations were made in the same way, but not applied to instances unless manually replaced
* Rename worker autoscaling group `${cluster_name}-worker` ([#1202](https://github.com/poseidon/typhoon/pull/1202))
  * Rename launch configuration `${cluster_name}-worker` instead of a random id
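
The refresh settings added to the worker autoscaling group are excerpted below (surrounding arguments omitted; the full context appears in the workers.tf diff later on this page):

```tf
resource "aws_autoscaling_group" "workers" {
  # ...count, subnets, launch template, target groups...

  instance_refresh {
    strategy = "Rolling"
    preferences {
      # let surge instances warm up for 120s before continuing the refresh
      instance_warmup = 120
      # keep at least 90% of desired capacity in service during the refresh
      min_healthy_percentage = 90
    }
  }
}
```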

### Google

* [Roll](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) instance template changes to worker managed instance groups ([#1207](https://github.com/poseidon/typhoon/pull/1207)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**)
  * Worker instance template changes roll out by gradually replacing instances
  * Automatic rollouts create surge instances, wait for health checks, then delete old instances (0 unavailable instances)
  * Changing `worker_type`, `disk_size`, `worker_preemptible`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New compute images or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new instance templates were made in the same way, but not applied to instances unless manually replaced
* Add health checks to worker managed instance groups (i.e. "autohealing") ([#1207](https://github.com/poseidon/typhoon/pull/1207)) (see the sketch after this list)
  * Use health checks to probe kube-proxy every 30s
  * Replace worker nodes that fail the health check 6 times (3 min)
* Name `kube-apiserver` and `worker` health checks consistently ([#1207](https://github.com/poseidon/typhoon/pull/1207))
  * Use name `${cluster_name}-apiserver-health` and `${cluster_name}-worker-health`
* Rename managed instance group from `${cluster_name}-worker-group` to `${cluster_name}-worker` ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Fix bug provisioning clusters with multiple controller nodes ([#1195](https://github.com/poseidon/typhoon/pull/1195))
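
A sketch of such an autohealing pair in plain Terraform (resource names are illustrative, the kube-proxy healthz port is an assumption, and Typhoon may use regional rather than zonal instance group managers; the interval and threshold match the notes above):

```tf
resource "google_compute_health_check" "worker" {
  name = "example-worker-health"

  check_interval_sec  = 30 # probe every 30s
  unhealthy_threshold = 6  # replace after 6 consecutive failures (3 min)

  http_health_check {
    port         = 10256 # assumed kube-proxy healthz port
    request_path = "/healthz"
  }
}

resource "google_compute_instance_group_manager" "workers" {
  name               = "example-worker"
  base_instance_name = "example-worker"
  zone               = "us-central1-a" # illustrative zone

  version {
    # assumes an instance template defined elsewhere
    instance_template = google_compute_instance_template.worker.id
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.worker.id
    initial_delay_sec = 300 # illustrative boot grace period
  }
}
```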

### Addons

* Update Prometheus from v2.37.0 to [v2.38.0](https://github.com/prometheus/prometheus/releases/tag/v2.38.0)
* Update Grafana from v9.0.3 to [v9.1.0](https://github.com/grafana/grafana/releases/tag/v9.1.0)

## v1.24.3

* Kubernetes [v1.24.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243)
* Update Cilium from v1.11.6 to [v1.11.7](https://github.com/cilium/cilium/releases/tag/v1.11.7)
README.md (22 changes)

@@ -1,4 +1,6 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
# Typhoon [](https://github.com/poseidon/typhoon/releases) [](https://github.com/poseidon/typhoon/stargazers) [](https://github.com/sponsors/poseidon) [](https://fosstodon.org/@typhoon)

<img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

Typhoon is a minimal and free Kubernetes distribution.

@@ -11,7 +13,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -48,6 +50,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](azure/flatcar-linux/kubernetes) | alpha |

## Documentation

@@ -62,7 +65,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -101,9 +104,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud

$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.24.3
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.26.1
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.26.1
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.26.1
```

List the pods.

@@ -156,5 +159,12 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i

<img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" width="201px">
</a>
<br>
<br>

<a href="https://deploy.equinix.com/">
<img src="https://storage.googleapis.com/poseidon/equinix.png" width="201px">
</a>
<br>
<br>

If you'd like your company here, please contact dghubble at psdn.io.
@@ -24,7 +24,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: grafana
        image: docker.io/grafana/grafana:9.0.3
        image: docker.io/grafana/grafana:9.3.1
        env:
        - name: GF_PATHS_CONFIG
          value: "/etc/grafana/custom.ini"
@@ -32,15 +32,22 @@ spec:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /metrics
          tcpSocket:
            port: 8080
          initialDelaySeconds: 10
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 1
          failureThreshold: 5
          successThreshold: 1
        readinessProbe:
          httpGet:
            path: /api/health
            scheme: HTTP
            path: /robots.txt
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 100m
@@ -23,7 +23,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
        args:
        - /nginx-ingress-controller
        - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
        args:
        - /nginx-ingress-controller
        - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
        args:
        - /nginx-ingress-controller
        - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
        args:
        - /nginx-ingress-controller
        - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
        type: RuntimeDefault
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1
        args:
        - /nginx-ingress-controller
        - --controller-class=k8s.io/public
@@ -21,7 +21,7 @@ spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:v2.37.0
        image: quay.io/prometheus/prometheus:v2.40.5
        args:
        - --web.listen-address=0.0.0.0:9090
        - --config.file=/etc/prometheus/prometheus.yaml

@@ -25,7 +25,7 @@ spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
        ports:
        - name: metrics
          containerPort: 8080
@@ -22,19 +22,19 @@ spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        runAsGroup: 65534
        fsGroup: 65534
        seccompProfile:
          type: RuntimeDefault
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-exporter
        image: quay.io/prometheus/node-exporter:v1.3.1
        image: quay.io/prometheus/node-exporter:v1.5.0
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+)($|/)
        - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        ports:
        - name: metrics
          containerPort: 9100
@@ -46,6 +46,9 @@ spec:
          limits:
            cpu: 200m
            memory: 100Mi
        securityContext:
          seLinuxOptions:
            type: spc_t
        volumeMounts:
        - name: proc
          mountPath: /host/proc
@@ -55,9 +58,12 @@ spec:
          readOnly: true
        - name: root
          mountPath: /host/root
          mountPropagation: HostToContainer
          readOnly: true
      tolerations:
      - key: node-role.kubernetes.io/master
      - key: node-role.kubernetes.io/controller
        operator: Exists
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
      - key: node.kubernetes.io/not-ready
        operator: Exists
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,15 +9,16 @@ systemd:
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/etcd-io/etcd
        Wants=network-online.target network.target
        Wants=network-online.target
        After=network-online.target
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
        ExecStart=/usr/bin/podman run --name etcd \
          --env-file /etc/etcd/etcd.env \
          --log-driver k8s-file \
          --network host \
          --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
          --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -56,7 +57,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -66,6 +67,7 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --log-driver k8s-file \
          --privileged \
          --pid host \
          --network host \
@@ -85,28 +87,13 @@ systemd:
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
@@ -129,7 +116,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.24.3
          quay.io/poseidon/kubelet:v1.26.1
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -144,6 +131,33 @@ storage:
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
@@ -180,6 +194,11 @@ storage:
          echo "Retry applying manifests"
          sleep 5
          done
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -224,7 +243,6 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
@@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {

  instance_type = var.controller_type
  ami           = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
  user_data     = data.ct_config.controller-ignitions.*.rendered[count.index]
  user_data     = data.ct_config.controllers.*.rendered[count.index]

  # storage
  root_block_device {
@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
    volume_size = var.disk_size
    iops        = var.disk_iops
    encrypted   = true
    tags        = {}
  }

  # network
@@ -46,41 +47,22 @@ resource "aws_instance" "controllers" {
  }
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
  count    = var.controller_count
  content  = data.template_file.controller-configs.*.rendered[count.index]
  strict   = true
  snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
  count = var.controller_count

  template = file("${path.module}/fcc/controller.yaml")

  vars = {
  content = templatefile("${path.module}/butane/controller.yaml", {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
    etcd_initial_cluster = join(",", [
      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
    ])
    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  }
  })
  strict   = true
  snippets = var.controller_snippets
}

data "template_file" "etcds" {
  count    = var.controller_count
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  vars = {
    index        = count.index
    cluster_name = var.cluster_name
    dns_zone     = var.dns_zone
  }
}
@@ -3,10 +3,8 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    aws      = ">= 2.23, <= 5.0"
    template = "~> 2.2"
    null     = ">= 2.1"

    aws  = ">= 2.23, <= 5.0"
    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"

@@ -1,3 +1,7 @@
locals {
  ami_id = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
}

data "aws_ami" "fedora-coreos" {
  most_recent = true
  owners      = ["125523088429"]
@@ -29,7 +29,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -39,6 +39,7 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --log-driver k8s-file \
          --privileged \
          --pid host \
          --network host \
@@ -58,19 +59,9 @@ systemd:
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
@@ -79,31 +70,13 @@ systemd:
          %{~ for taint in split(",", node_taints) ~}
          --register-with-taints=${taint} \
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID}
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
      enabled: true
      contents: |
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/kubernetes
@@ -113,6 +86,38 @@ storage:
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -136,7 +141,6 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
@@ -3,9 +3,7 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    aws      = ">= 2.23, <= 5.0"
    template = "~> 2.2"

    aws = ">= 2.23, <= 5.0"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
@@ -1,6 +1,6 @@
# Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" {
  name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
  name = "${var.name}-worker"

  # count
  desired_capacity = var.worker_count
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
  vpc_zone_identifier = var.subnet_ids

  # template
  launch_configuration = aws_launch_configuration.worker.name
  launch_template {
    id      = aws_launch_template.worker.id
    version = aws_launch_template.worker.latest_version
  }

  # target groups to which instances should be added
  target_group_arns = flatten([
@@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
    var.target_groups,
  ])

  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 120
      min_healthy_percentage = 90
    }
  }

  lifecycle {
    # override the default destroy and replace update behavior
    create_before_destroy = true
@@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}

# Worker template
resource "aws_launch_configuration" "worker" {
  image_id          = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
  instance_type     = var.instance_type
  spot_price        = var.spot_price > 0 ? var.spot_price : null
  enable_monitoring = false
resource "aws_launch_template" "worker" {
  name_prefix   = "${var.name}-worker"
  image_id      = local.ami_id
  instance_type = var.instance_type
  monitoring {
    enabled = false
  }

  user_data = data.ct_config.worker-ignition.rendered
  user_data = sensitive(base64encode(data.ct_config.worker.rendered))

  # storage
  root_block_device {
    volume_type = var.disk_type
    volume_size = var.disk_size
    iops        = var.disk_iops
    encrypted   = true
  ebs_optimized = true
  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_type           = var.disk_type
      volume_size           = var.disk_size
      iops                  = var.disk_iops
      encrypted             = true
      delete_on_termination = true
    }
  }

  # network
  security_groups        = var.security_groups
  vpc_security_group_ids = var.security_groups

  # spot
  dynamic "instance_market_options" {
    for_each = var.spot_price > 0 ? [1] : []
    content {
      market_type = "spot"
      spot_options {
        max_price = var.spot_price
      }
    }
  }

  lifecycle {
    // Override the default destroy and replace update behavior
@@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
  }
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
  content  = data.template_file.worker-config.rendered
  strict   = true
  snippets = var.snippets
}

# Worker Fedora CoreOS config
data "template_file" "worker-config" {
  template = file("${path.module}/fcc/worker.yaml")

  vars = {
# Fedora CoreOS worker
data "ct_config" "worker" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    kubeconfig             = indent(10, var.kubeconfig)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
    node_labels            = join(",", var.node_labels)
    node_taints            = join(",", var.node_taints)
  }
  })
  strict   = true
  snippets = var.snippets
}
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -57,7 +58,7 @@ systemd:
        After=coreos-metadata.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
        EnvironmentFile=/run/metadata/coreos
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -83,26 +84,13 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
        ExecStart=docker logs -f kubelet
        ExecStop=docker stop kubelet
        ExecStopPost=docker rm kubelet
@@ -121,7 +109,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -134,18 +122,42 @@ systemd:
storage:
  directories:
    - path: /var/lib/etcd
      filesystem: root
      mode: 0700
      overwrite: true
  files:
    - path: /etc/kubernetes/kubeconfig
      filesystem: root
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -168,7 +180,6 @@ storage:
          mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -182,14 +193,17 @@ storage:
          echo "Retry applying manifests"
          sleep 5
          done
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/etcd/etcd.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
@@ -24,7 +24,7 @@ resource "aws_instance" "controllers" {
  instance_type = var.controller_type

  ami       = local.ami_id
  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
  user_data = data.ct_config.controllers.*.rendered[count.index]

  # storage
  root_block_device {
@@ -32,6 +32,7 @@ resource "aws_instance" "controllers" {
    volume_size = var.disk_size
    iops        = var.disk_iops
    encrypted   = true
    tags        = {}
  }

  # network
@@ -47,41 +48,22 @@ resource "aws_instance" "controllers" {
  }
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
  count    = var.controller_count
  content  = data.template_file.controller-configs.*.rendered[count.index]
  strict   = true
  snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
  count = var.controller_count

  template = file("${path.module}/cl/controller.yaml")

  vars = {
  content = templatefile("${path.module}/butane/controller.yaml", {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
    etcd_initial_cluster = join(",", [
      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
    ])
    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  }
  })
  strict   = true
  snippets = var.controller_snippets
}

data "template_file" "etcds" {
  count    = var.controller_count
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  vars = {
    index        = count.index
    cluster_name = var.cluster_name
    dns_zone     = var.dns_zone
  }
}
@@ -3,13 +3,11 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    aws      = ">= 2.23, <= 5.0"
    template = "~> 2.2"
    null     = ">= 2.1"

    aws  = ">= 2.23, <= 5.0"
    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
      version = "~> 0.11"
    }
  }
}
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: docker.service
@@ -29,7 +30,7 @@ systemd:
        After=coreos-metadata.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
        EnvironmentFile=/run/metadata/coreos
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -58,17 +59,9 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
@@ -77,12 +70,7 @@ systemd:
          %{~ for taint in split(",", node_taints) ~}
          --register-with-taints=${taint} \
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID}
        ExecStart=docker logs -f kubelet
        ExecStop=docker stop kubelet
        ExecStopPost=docker rm kubelet
@@ -90,29 +78,46 @@ systemd:
        RestartSec=5
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
      enabled: true
      contents: |
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
        ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/kubernetes/kubeconfig
      filesystem: root
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
@@ -3,12 +3,10 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    aws      = ">= 2.23, <= 5.0"
    template = "~> 2.2"

    aws = ">= 2.23, <= 5.0"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
      version = "~> 0.11"
    }
  }
}
@ -1,6 +1,6 @@
# Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" {
name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
name = "${var.name}-worker"

# count
desired_capacity = var.worker_count
@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
vpc_zone_identifier = var.subnet_ids

# template
launch_configuration = aws_launch_configuration.worker.name
launch_template {
id = aws_launch_template.worker.id
version = aws_launch_template.worker.latest_version
}

# target groups to which instances should be added
target_group_arns = flatten([
@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
var.target_groups,
])

instance_refresh {
strategy = "Rolling"
preferences {
instance_warmup = 120
min_healthy_percentage = 90
}
}

lifecycle {
# override the default destroy and replace update behavior
create_before_destroy = true
@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}

# Worker template
resource "aws_launch_configuration" "worker" {
image_id = local.ami_id
instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false
resource "aws_launch_template" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
monitoring {
enabled = false
}

user_data = data.ct_config.worker-ignition.rendered
user_data = sensitive(base64encode(data.ct_config.worker.rendered))

# storage
root_block_device {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
ebs_optimized = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
delete_on_termination = true
}
}

# network
security_groups = var.security_groups
vpc_security_group_ids = var.security_groups

# spot
dynamic "instance_market_options" {
for_each = var.spot_price > 0 ? [1] : []
content {
market_type = "spot"
spot_options {
max_price = var.spot_price
}
}
}

lifecycle {
// Override the default destroy and replace update behavior
@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
}
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}
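The hunk above swaps the deprecated `aws_launch_configuration` (and the archived `template` provider) for `aws_launch_template` plus Terraform's built-in `templatefile()` function. A minimal standalone sketch of the combined pattern, assuming illustrative names, a placeholder AMI, and only a subset of the module's real template variables:

```hcl
variable "kubeconfig" {
  type = string
}

# Render the Butane template with built-in templatefile(), transpile it to
# Ignition with the poseidon/ct provider, and attach it as user data.
data "ct_config" "example" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    kubeconfig = indent(10, var.kubeconfig) # subset of the real template vars
  })
  strict = true
}

resource "aws_launch_template" "example" {
  name_prefix   = "example-worker"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.small"
  # Launch templates expect base64-encoded user data; sensitive() keeps the
  # rendered Ignition out of plan output.
  user_data = sensitive(base64encode(data.ct_config.example.rendered))
}
```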
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@ -53,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -62,6 +63,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@ -81,27 +83,12 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@ -124,7 +111,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.3
quay.io/poseidon/kubelet:v1.26.1
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -139,6 +126,33 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@ -175,6 +189,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -219,7 +238,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -35,7 +35,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id

size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])

# storage
source_image_id = var.os_image
@ -111,41 +111,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/fcc/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
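The hunk above also drops the auxiliary `template_file` "etcds" data source in favor of an inline `for` expression. A standalone sketch with hypothetical values (2 controllers, cluster "demo", zone "example.com"), showing what the expression evaluates to:

```hcl
locals {
  # Evaluates to:
  # "etcd0=https://demo-etcd0.example.com:2380,etcd1=https://demo-etcd1.example.com:2380"
  etcd_initial_cluster = join(",", [
    for i in range(2) : "etcd${i}=https://demo-etcd${i}.example.com:2380"
  ])
}
```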
@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@ -3,10 +3,8 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"
null = ">= 2.1"

azurerm = ">= 2.8, < 4.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -35,6 +35,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@ -54,51 +55,23 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@ -108,6 +81,38 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -131,7 +136,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@ -3,9 +3,7 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"

azurerm = ">= 2.8, < 4.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
@ -9,7 +9,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# instance name prefix for instances in the set
computer_name_prefix = "${var.name}-worker"
single_placement_group = false
custom_data = base64encode(data.ct_config.worker-ignition.rendered)
custom_data = base64encode(data.ct_config.worker.rendered)

# storage
source_image_id = var.os_image
@ -70,24 +70,17 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
}
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Fedora CoreOS configs
data "template_file" "worker-config" {
template = file("${path.module}/fcc/worker.yaml")

vars = {
# Fedora CoreOS worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -55,7 +56,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -80,25 +81,12 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -117,7 +105,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -130,18 +118,42 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@ -164,7 +176,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@ -178,14 +189,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@ -17,7 +17,9 @@ resource "azurerm_dns_a_record" "etcds" {
locals {
# Container Linux derivative
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}

# Controller availability set to spread controllers
@ -41,7 +43,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id

size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])

# storage
os_disk {
@ -53,16 +55,19 @@ resource "azurerm_linux_virtual_machine" "controllers" {

# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}

plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}

# network
@ -130,41 +135,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
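A standalone sketch of how the new locals select the Flatcar Marketplace offer and SKU, assuming illustrative inputs (`flatcar-stable`, default `amd64`); on arm64 the `corevm` offer is used and the `plan` block is omitted entirely via the `dynamic` block's empty `for_each`:

```hcl
variable "arch" {
  default = "amd64"
}

locals {
  channel      = split("-", "flatcar-stable")[1]         # "stable"
  offer_suffix = var.arch == "arm64" ? "corevm" : "free" # marketplace offer suffix
  urn          = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}

# amd64 -> offer "flatcar-container-linux-free", sku "stable-gen2", plan emitted
# arm64 -> offer "flatcar-container-linux-corevm", sku "stable", no plan block
```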
@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@ -133,12 +133,15 @@ variable "worker_node_labels" {
default = []
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
variable "arch" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
description = "Container architecture (amd64 or arm64)"
default = "amd64"

validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}

variable "daemonset_tolerations" {
@ -146,3 +149,11 @@ variable "daemonset_tolerations" {
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
default = []
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
@ -3,13 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"
null = ">= 2.1"

azurerm = ">= 2.8, < 4.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}

@ -21,4 +21,5 @@ module "workers" {
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets
node_labels = var.worker_node_labels
arch = var.arch
}
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@ -27,7 +28,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -55,30 +56,17 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -86,29 +74,46 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@ -100,6 +100,17 @@ variable "node_taints" {
default = []
}

variable "arch" {
type = string
description = "Container architecture (amd64 or arm64)"
default = "amd64"

validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
@ -3,12 +3,10 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"

azurerm = ">= 2.8, < 4.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}
@ -1,6 +1,8 @@
locals {
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}

# Workers scale set
@ -14,7 +16,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# instance name prefix for instances in the set
computer_name_prefix = "${var.name}-worker"
single_placement_group = false
custom_data = base64encode(data.ct_config.worker-ignition.rendered)
custom_data = base64encode(data.ct_config.worker.rendered)

# storage
os_disk {
@ -24,16 +26,19 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {

# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}

plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}

# Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
@ -88,24 +93,16 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
}
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux configs
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@ -52,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -61,6 +62,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@ -80,28 +82,13 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@ -126,7 +113,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=-/usr/bin/podman rm bootstrap
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
@ -149,6 +136,33 @@ storage:
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@ -185,6 +199,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -229,7 +248,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -25,7 +25,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -34,6 +34,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@ -53,33 +54,18 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in compact(split(",", node_labels)) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in compact(split(",", node_taints)) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@ -104,6 +90,38 @@ storage:
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -127,7 +145,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -1,34 +1,26 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = [
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
"--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]

remote_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = [
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
]

cached_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
@ -46,29 +38,22 @@ resource "matchbox_profile" "controllers" {
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = length(var.controllers)

content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
count = length(var.controllers)

template = file("${path.module}/fcc/controller.yaml")
vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
})
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Fedora CoreOS worker profile
@ -80,28 +65,20 @@ resource "matchbox_profile" "workers" {
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
# Fedora CoreOS workers
data "ct_config" "workers" {
count = length(var.workers)

content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
count = length(var.workers)

template = file("${path.module}/fcc/worker.yaml")
vars = {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
}
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}
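The per-machine maps these profiles read via `lookup(..., name, [])` default to empty lists for machines without an entry. A small standalone sketch with hypothetical data showing that behavior:

```hcl
locals {
  snippets = {
    "node2" = ["worker-tuning.yaml"]
  }
  node2_snippets = lookup(local.snippets, "node2", []) # ["worker-tuning.yaml"]
  node3_snippets = lookup(local.snippets, "node3", []) # [] (no entry, default)
}
```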
@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@@ -63,7 +64,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -88,26 +89,13 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -126,7 +114,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@@ -139,21 +127,44 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@@ -176,7 +187,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@@ -190,14 +200,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: installer.service
@@ -25,12 +26,11 @@ systemd:
storage:
files:
- path: /opt/installer
filesystem: root
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
flatcar-install \
-d ${install_disk} \
-C ${os_channel} \
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@@ -35,7 +36,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -63,17 +64,9 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
@@ -83,11 +76,7 @@ systemd:
%{~ for taint in compact(split(",", node_taints)) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -99,17 +88,46 @@ systemd:
storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@@ -18,12 +18,10 @@ resource "matchbox_profile" "flatcar-install" {
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
var.kernel_args,
])

container_linux_config = data.template_file.install-configs.*.rendered[count.index]
raw_ignition = data.ct_config.install.*.rendered[count.index]
}

// Flatcar Linux Install profile (from matchbox /assets cache)
@@ -42,101 +40,84 @@ resource "matchbox_profile" "cached-flatcar-install" {
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
var.kernel_args,
])

container_linux_config = data.template_file.cached-install-configs.*.rendered[count.index]
raw_ignition = data.ct_config.cached-install.*.rendered[count.index]
}

data "template_file" "install-configs" {
# Flatcar Linux install
data "ct_config" "install" {
count = length(var.controllers) + length(var.workers)

template = file("${path.module}/cl/install.yaml")

vars = {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# only cached profile adds -b baseurl
baseurl_flag = ""
}
})
strict = true
}

data "template_file" "cached-install-configs" {
# Flatcar Linux cached install
data "ct_config" "cached-install" {
count = length(var.controllers) + length(var.workers)

template = file("${path.module}/cl/install.yaml")

vars = {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
}
})
strict = true
}


// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = length(var.controllers)
name = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = length(var.controllers)

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
})
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
# Flatcar Linux workers
data "ct_config" "workers" {
count = length(var.workers)

template = file("${path.module}/cl/worker.yaml")

vars = {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
}
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}
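The repeated `lookup(var.snippets, name, [])` calls index a per-machine map, so any controller or worker without an entry falls back to an empty snippet list. A hypothetical caller-side value (node names and snippet paths are illustrative only):

```tf
# Illustrative snippets map: keys are controller/worker names, values are Butane snippets.
snippets = {
  "node2" = [file("./snippets/hello.yaml")]
  "node3" = [
    file("./snippets/hello.yaml"),
    file("./snippets/world.yaml"),
  ]
}
```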
@@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -54,7 +55,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -64,6 +65,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@@ -83,28 +85,13 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@@ -136,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.3
quay.io/poseidon/kubelet:v1.26.1
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -146,6 +133,33 @@ storage:
- path: /etc/kubernetes
- path: /opt/bootstrap
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@@ -182,6 +196,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -226,7 +245,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -28,7 +28,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -38,6 +38,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@@ -61,23 +62,11 @@ systemd:
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@@ -93,23 +82,42 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -133,7 +141,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -41,11 +41,11 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
user_data = data.ct_config.controllers.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints

tags = [
@@ -62,39 +62,20 @@ resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/fcc/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
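The new `for` expression computes the same comma-joined peer list that the removed `template_file.etcds` chain produced, without an extra data source. A sketch of the evaluation, assuming hypothetical values `controller_count = 2`, `cluster_name = "nemo"`, and `dns_zone = "do.example.com"`:

```tf
locals {
  # Evaluates to:
  # "etcd0=https://nemo-etcd0.do.example.com:2380,etcd1=https://nemo-etcd1.do.example.com:2380"
  etcd_initial_cluster = join(",", [
    for i in range(2) : "etcd${i}=https://nemo-etcd${i}.do.example.com:2380"
  ])
}
```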
@@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

digitalocean = {
source = "digitalocean/digitalocean"
version = ">= 2.12, < 3.0"
@@ -37,11 +37,11 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.worker-ignition.rendered
user_data = data.ct_config.worker.rendered
ssh_keys = var.ssh_fingerprints

tags = [
@@ -58,20 +58,12 @@ resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
# Fedora CoreOS worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
})
strict = true
snippets = var.worker_snippets
}

# Worker Fedora CoreOS config
data "template_file" "worker-config" {
template = file("${path.module}/fcc/worker.yaml")

vars = {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
}
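`cidrhost(var.service_cidr, 10)` picks the tenth host address in the service CIDR as the cluster DNS ClusterIP, the same value that lands in `clusterDNS` in kubelet.yaml. A sketch assuming a `service_cidr` of `10.3.0.0/16` (Typhoon's usual default, but check your module's variable defaults):

```tf
# cidrhost("10.3.0.0/16", 10) => "10.3.0.10"
output "cluster_dns_service_ip" {
  value = cidrhost("10.3.0.0/16", 10)
}
```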
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@@ -65,7 +66,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -91,26 +92,13 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -129,7 +117,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@@ -142,15 +130,39 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@@ -173,7 +185,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@@ -187,14 +198,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@@ -37,7 +38,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -66,25 +67,12 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -92,27 +80,44 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@@ -46,11 +46,11 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
user_data = data.ct_config.controllers.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints

tags = [
@@ -67,39 +67,20 @@ resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
@@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}

digitalocean = {
source = "digitalocean/digitalocean"
version = ">= 2.12, < 3.0"
@@ -35,11 +35,11 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image

user_data = data.ct_config.worker-ignition.rendered
user_data = data.ct_config.worker.rendered
ssh_keys = var.ssh_fingerprints

tags = [
@@ -56,20 +56,12 @@ resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
})
strict = true
snippets = var.worker_snippets
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
}
docs/CNAME (new file)
@@ -0,0 +1 @@
typhoon.psdn.io
@@ -1,19 +1,21 @@
# ARM64

Typhoon has experimental support for ARM64 on AWS, with Fedora CoreOS or Flatcar Linux. Clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.

!!! note
Currently, CNI networking must be set to `flannel` or `cilium`.
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:

* AWS with Fedora CoreOS or Flatcar Linux
* Azure with Flatcar Linux

## Cluster

Create a cluster with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` container images.
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

=== "Fedora CoreOS Cluster (arm64)"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@@ -38,7 +40,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@@ -64,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.24.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.24.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.24.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-21-119 Ready <none> 77s v1.26.1 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.26.1 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.26.1 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```

## Hybrid
@@ -77,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@@ -100,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@@ -123,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.1"

# AWS
vpc_id = module.gravitas.vpc_id
@@ -147,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.3"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.1"

# AWS
vpc_id = module.gravitas.vpc_id
@@ -172,9 +174,34 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.24.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.24.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.24.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.24.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-1-73 Ready <none> 111m v1.26.1 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.26.1 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.26.1 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.26.1 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```

## Azure

Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.1"

# Azure
cluster_name = "ramius"
region = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"

# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

# optional
arch = "arm64"
controller_type = "Standard_D2pls_v5"
worker_type = "Standard_D2pls_v5"
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```
@ -12,9 +12,11 @@ Clusters are kept to a minimal Kubernetes control plane by offering components l
|
||||
|
||||
## Hosts
|
||||
|
||||
Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Fedora CoreOS uses a [Butane Config](https://coreos.github.io/butane/specs/) and Flatcar Linux uses a [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md) (CLC). These define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, and users.
|
||||
### Background
|
||||
|
||||
Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept Butane or Container Linux Configs to validate and additively merge into instance declarations. This allows advanced host customization and experimentation.
|
||||
Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Human-friendly [Butane Configs](https://coreos.github.io/butane/specs/) define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, users, and more before being converted to Ignition.
|
||||
|
||||
Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept custom Butane Configs that are merged with instance declarations. This allows advanced host customization and experimentation.
|
||||
|
||||
!!! note
|
||||
Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only.
|
||||
@ -25,127 +27,104 @@ Controller and worker instances form a minimal and secure Kubernetes cluster on
|
||||
!!! danger
|
||||
Edits to snippets for controller instances can (correctly) cause Terraform to observe a diff (if not otherwise suppressed) and propose destroying and recreating controller(s). Recognize that this is destructive since controllers run etcd and are stateful. See [blue/green](/topics/maintenance/#upgrades) clusters.
|
||||
|
||||
### Fedora CoreOS
|
||||
|
||||
!!! note
|
||||
Fedora CoreOS snippets require `terraform-provider-ct` v0.5+
|
||||
### Usage
|
||||
|
||||
Define a Butane Config ([docs](https://coreos.github.io/butane/specs/), [config](https://github.com/coreos/butane/blob/main/docs/config-fcos-v1_4.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.
|
||||
|
||||
For example, ensure an `/opt/hello` file is created with permissions 0644.
|
||||
For example, ensure an `/opt/hello` file is created with permissions 0644 before boot.
|
||||
|
||||
```yaml
|
||||
# custom-files
|
||||
variant: fcos
|
||||
version: 1.4.0
|
||||
storage:
|
||||
files:
|
||||
- path: /opt/hello
|
||||
contents:
|
||||
inline: |
|
||||
Hello World
|
||||
mode: 0644
|
||||
```
|
||||
=== "Fedora CoreOS"
|
||||
|
||||
Reference the FCC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster) or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
|
||||
```yaml
|
||||
# custom-files.yaml
|
||||
variant: fcos
|
||||
version: 1.4.0
|
||||
storage:
|
||||
files:
|
||||
- path: /opt/hello
|
||||
contents:
|
||||
inline: |
|
||||
Hello World
|
||||
mode: 0644
|
||||
```
|
||||
|
||||
```tf
|
||||
module "nemo" {
|
||||
...
|
||||
=== "Flatcar Linux"
|
||||
|
||||
controller_count = 1
|
||||
worker_count = 2
|
||||
controller_snippets = [
|
||||
file("./custom-files"),
|
||||
file("./custom-units"),
|
||||
]
|
||||
worker_snippets = [
|
||||
file("./custom-files"),
|
||||
file("./custom-units")",
|
||||
]
|
||||
...
|
||||
}
|
||||
```
|
||||
```yaml
|
||||
# custom-files.yaml
|
||||
variant: flatcar
|
||||
version: 1.0.0
|
||||
storage:
|
||||
files:
|
||||
- path: /opt/hello
|
||||
contents:
|
||||
inline: |
|
||||
Hello World
|
||||
mode: 0644
|
||||
```
|
||||
|
||||
On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different FCCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.
|
||||
Or ensure a systemd unit `hello.service` is created.
|
||||
|
||||
```tf
|
||||
module "mercury" {
|
||||
...
|
||||
snippets = {
|
||||
"node2" = [file("./units/hello.yaml")]
|
||||
"node3" = [
|
||||
file("./units/world.yaml"),
|
||||
file("./units/hello.yaml"),
|
||||
]
|
||||
}
|
||||
...
|
||||
}
|
||||
```
|
||||
=== "Fedora CoreOS"
|
||||
|
||||
### Flatcar Linux
|
||||
|
||||
Define a Container Linux Config (CLC) ([config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/configuration.md), [examples](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.
|
||||
|
||||
For example, ensure an `/opt/hello` file is created with permissions 0644.
|
||||
|
||||
```yaml
|
||||
# custom-files
|
||||
storage:
|
||||
files:
|
||||
- path: /opt/hello
|
||||
filesystem: root
|
||||
contents:
|
||||
inline: |
|
||||
Hello World
|
||||
mode: 0644
|
||||
```
|
||||
|
||||
Or ensure a systemd unit `hello.service` is created and a dropin `50-etcd-cluster.conf` is added for `etcd-member.service`.
|
||||
|
||||
```yaml
|
||||
# custom-units
|
||||
systemd:
|
||||
units:
|
||||
- name: hello.service
|
||||
enable: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Hello World
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/bin/echo Hello World!
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: etcd-member.service
|
||||
enable: true
|
||||
dropins:
|
||||
- name: 50-etcd-cluster.conf
|
||||
```yaml
|
||||
# custom-units.yaml
|
||||
variant: fcos
|
||||
version: 1.4.0
|
||||
systemd:
|
||||
units:
|
||||
- name: hello.service
|
||||
enabled: true
|
||||
contents: |
|
||||
Environment="ETCD_LOG_PACKAGE_LEVELS=etcdserver=WARNING,security=DEBUG"
|
||||
```
|
||||
[Unit]
|
||||
Description=Hello World
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/bin/echo Hello World!
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
=== "Flatcar Linux"
|
||||
|
||||
```yaml
|
||||
# custom-units.yaml
|
||||
variant: flatcar
|
||||
version: 1.0.0
|
||||
systemd:
|
||||
units:
|
||||
- name: hello.service
|
||||
enabled: true
|
||||
contents: |
|
||||
[Unit]
|
||||
Description=Hello World
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/bin/echo Hello World!
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
Reference the Butane contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster), [Azure](/fedora-coreos/azure/#cluster), [DigitalOcean](/fedora-coreos/digital-ocean/#cluster), or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.

Reference the CLC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/flatcar-linux/aws/#cluster), [Azure](/flatcar-linux/azure/#cluster), [DigitalOcean](/flatcar-linux/digital-ocean/#cluster), or [Google Cloud](/flatcar-linux/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
```tf
module "nemo" {
  ...

  controller_count = 1
  worker_count     = 2
  controller_snippets = [
    file("./custom-files"),
    file("./custom-units"),
    file("./custom-files.yaml"),
    file("./custom-units.yaml"),
  ]
  worker_snippets = [
    file("./custom-files"),
    file("./custom-units"),
    file("./custom-files.yaml"),
    file("./custom-units.yaml"),
  ]
  ...
}
```
On [Bare-Metal](/flatcar-linux/bare-metal/#cluster), different CLCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.

On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different Butane configs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.

```tf
module "mercury" {
@@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"
@@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # AWS
  vpc_id = module.tempest.vpc_id

@@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # AWS
  vpc_id = module.tempest.vpc_id

@@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Azure
  region = module.ramius.region

@@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # Azure
  region = module.ramius.region
@@ -189,7 +189,7 @@ The Azure internal `workers` module supports a number of [variables](https://git

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| worker_count | Number of instances | 1 | 3 |
| vm_type | Machine type for instances | "Standard_DS1_v2" | See below |
| vm_type | Machine type for instances | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | "Regular" | "Spot" |
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
@@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  region = "europe-west2"

@@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  region = "europe-west2"

@@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute

```
$ kubectl get nodes
NAME                                          STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal     Ready   6m   v1.24.3
yavin-worker-jrbf.c.example-com.internal      Ready   5m   v1.24.3
yavin-worker-mzdm.c.example-com.internal      Ready   5m   v1.24.3
yavin-16x-worker-jrbf.c.example-com.internal  Ready   3m   v1.24.3
yavin-16x-worker-mzdm.c.example-com.internal  Ready   3m   v1.24.3
yavin-controller-0.c.example-com.internal     Ready   6m   v1.26.1
yavin-worker-jrbf.c.example-com.internal      Ready   5m   v1.26.1
yavin-worker-mzdm.c.example-com.internal      Ready   5m   v1.26.1
yavin-16x-worker-jrbf.c.example-com.internal  Ready   3m   v1.26.1
yavin-16x-worker-mzdm.c.example-com.internal  Ready   3m   v1.26.1
```

### Variables
@@ -9,8 +9,8 @@ Typhoon supports [Fedora CoreOS](https://getfedora.org/coreos/) and [Flatcar Lin

Together, they diversify Typhoon to support a range of container technologies.

* Fedora CoreOS: rpm-ostree, podman, moby
* Flatcar Linux: Gentoo core, rkt-fly, docker
* Fedora CoreOS: rpm-ostree, podman, containerd
* Flatcar Linux: Gentoo core, docker, containerd

## Host Properties

@@ -19,7 +19,7 @@ Together, they diversify Typhoon to support a range of container technologies.

| Kernel | ~5.10.x | ~5.16.x |
| systemd | 249 | 249 |
| Username | core | core |
| Ignition system | Ignition v2.x spec | Ignition v3.x spec |
| Ignition system | Ignition v3.x spec | Ignition v3.x spec |
| storage driver | overlay2 (extfs) | overlay2 (xfs) |
| logging driver | json-file | journald |
| cgroup driver | systemd | systemd |
@@ -1,6 +1,6 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -51,11 +51,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "4.22.0"
      version = "4.31.0"
    }
  }
}
@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

  # AWS
  cluster_name = "tempest"

@@ -111,7 +111,7 @@ Plan the resources to be created.

```sh
$ terraform plan
Plan: 81 to add, 0 to change, 0 to destroy.
Plan: 109 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.
@@ -123,7 +123,7 @@ module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)

Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
Apply complete! Resources: 109 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.

@@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME           STATUS  ROLES   AGE  VERSION
ip-10-0-3-155  Ready   <none>  10m  v1.24.3
ip-10-0-26-65  Ready   <none>  10m  v1.24.3
ip-10-0-41-21  Ready   <none>  10m  v1.24.3
ip-10-0-3-155  Ready   <none>  10m  v1.26.1
ip-10-0-26-65  Ready   <none>  10m  v1.26.1
ip-10-0-41-21  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# Azure

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Azure with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -48,11 +48,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.14.0"
      version = "3.23.0"
    }
  }
}
@@ -64,18 +64,18 @@ Additional configuration options are described in the `azurerm` provider [docs](

Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure allows custom images to be uploaded to a storage account bucket and imported.

[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).
[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image, decompress it, and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).

```
xz -d fedora-coreos-31.20200323.3.2-azure.x86_64.vhd.xz
xz -d fedora-coreos-36.20220716.3.1-azure.x86_64.vhd.xz
```
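The upload can also be scripted with the Azure CLI instead of the portal (a sketch; the storage account and container names are placeholders, not part of this tutorial):

```
az storage blob upload --account-name ACCOUNT --container-name fedora-coreos \
  --name fedora-coreos-36.20220716.3.1-azure.x86_64.vhd \
  --file fedora-coreos-36.20220716.3.1-azure.x86_64.vhd
```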
Create an Azure disk (note disk ID) and create an Azure image from it (note image ID).

```
az disk create --name fedora-coreos-31.20200323.3.2 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-31.20200323.3.2-azure.x86_64.vhd
az disk create --name fedora-coreos-36.20220716.3.1 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-36.20220716.3.1-azure.x86_64.vhd

az image create --name fedora-coreos-31.20200323.3.2 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-31.20200323.3.2
az image create --name fedora-coreos-36.20220716.3.1 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-36.20220716.3.1
```

Set the [os_image](#variables) in the next step.
@@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.26.1"

  # Azure
  cluster_name = "ramius"

@@ -95,7 +95,7 @@ module "ramius" {
  dns_zone_group = "example-group"

  # configuration
  os_image           = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
  os_image           = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # optional
@@ -161,9 +161,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
ramius-controller-0   Ready   <none>  24m  v1.24.3
ramius-worker-000001  Ready   <none>  25m  v1.24.3
ramius-worker-000002  Ready   <none>  24m  v1.24.3
ramius-controller-0   Ready   <none>  24m  v1.26.1
ramius-worker-000001  Ready   <none>  25m  v1.26.1
ramius-worker-000002  Ready   <none>  24m  v1.26.1
```

List the pods.
@@ -240,7 +240,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr

| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |
@@ -1,6 +1,6 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.24.3 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.26.1 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@@ -138,11 +138,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    matchbox = {
      source  = "poseidon/matchbox"
      version = "0.5.0"
      version = "0.5.2"
    }
  }
}
@@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.26.1"

  # bare-metal
  cluster_name = "mercury"

@@ -283,9 +283,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME               STATUS  ROLES   AGE  VERSION
node1.example.com  Ready   <none>  10m  v1.24.3
node2.example.com  Ready   <none>  10m  v1.24.3
node3.example.com  Ready   <none>  10m  v1.24.3
node1.example.com  Ready   <none>  10m  v1.26.1
node2.example.com  Ready   <none>  10m  v1.26.1
node3.example.com  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on DigitalOcean with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on DigitalOcean with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@@ -51,11 +51,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.21.0"
      version = "2.22.3"
    }
  }
}
@@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.26.1"

  # Digital Ocean
  cluster_name = "nemo"

@@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME            STATUS  ROLES   AGE  VERSION
10.132.110.130  Ready   <none>  10m  v1.24.3
10.132.115.81   Ready   <none>  10m  v1.24.3
10.132.124.107  Ready   <none>  10m  v1.24.3
10.132.110.130  Ready   <none>  10m  v1.26.1
10.132.115.81   Ready   <none>  10m  v1.26.1
10.132.124.107  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Google Compute Engine with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on Google Compute Engine with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@@ -52,11 +52,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "4.29.0"
      version = "4.37.0"
    }
  }
}
@@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=development-sha"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -112,7 +112,7 @@ Plan the resources to be created.

```sh
$ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy.
Plan: 78 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.

@@ -125,7 +125,7 @@ module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)

Apply complete! Resources: 62 added, 0 changed, 0 destroyed.
Apply complete! Resources: 78 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.

@@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.24.3
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.26.1
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.26.1
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on AWS with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on AWS with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -51,11 +51,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "4.22.0"
      version = "4.31.0"
    }
  }
}
@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.1"

  # AWS
  cluster_name = "tempest"

@@ -111,7 +111,7 @@ Plan the resources to be created.

```sh
$ terraform plan
Plan: 80 to add, 0 to change, 0 to destroy.
Plan: 109 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.

@@ -123,7 +123,7 @@ module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143)

Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
Apply complete! Resources: 109 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.

@@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME           STATUS  ROLES   AGE  VERSION
ip-10-0-3-155  Ready   <none>  10m  v1.24.3
ip-10-0-26-65  Ready   <none>  10m  v1.24.3
ip-10-0-41-21  Ready   <none>  10m  v1.24.3
ip-10-0-3-155  Ready   <none>  10m  v1.26.1
ip-10-0-26-65  Ready   <none>  10m  v1.26.1
ip-10-0-41-21  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# Azure

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Azure with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on Azure with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -48,11 +48,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.14.0"
      version = "3.23.0"
    }
  }
}
@@ -65,8 +65,8 @@ Additional configuration options are described in the `azurerm` provider [docs](

Flatcar Linux publishes images to the Azure Marketplace and requires accepting terms.

```
az vm image terms show --publish kinvolk --offer flatcar-container-linux-free --plan stable
az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable
az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2
```
## Cluster

@@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.1"

  # Azure
  cluster_name = "ramius"

@@ -149,9 +149,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
ramius-controller-0   Ready   <none>  24m  v1.24.3
ramius-worker-000001  Ready   <none>  25m  v1.24.3
ramius-worker-000002  Ready   <none>  24m  v1.24.3
ramius-controller-0   Ready   <none>  24m  v1.26.1
ramius-worker-000001  Ready   <none>  25m  v1.26.1
ramius-worker-000002  Ready   <none>  24m  v1.26.1
```

List the pods.

@@ -227,7 +227,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr

| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
@@ -1,6 +1,6 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.24.3 cluster on bare-metal with Flatcar Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.26.1 cluster on bare-metal with Flatcar Linux.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@@ -138,11 +138,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    matchbox = {
      source  = "poseidon/matchbox"
      version = "0.5.0"
      version = "0.5.2"
    }
  }
}
@@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.1"

  # bare-metal
  cluster_name = "mercury"

@@ -269,10 +269,10 @@ To watch the bootstrap process in detail, SSH to the first controller and journa

```
$ ssh core@node1.example.com
$ journalctl -f -u bootstrap
rkt[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
rkt[1750]: Waiting for static pod control plane
The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
Waiting for static pod control plane
...
rkt[1750]: serviceaccount/calico-node unchanged
serviceaccount/calico-node unchanged
systemd[1]: Started Kubernetes control plane.
```

@@ -293,9 +293,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME               STATUS  ROLES   AGE  VERSION
node1.example.com  Ready   <none>  10m  v1.24.3
node2.example.com  Ready   <none>  10m  v1.24.3
node3.example.com  Ready   <none>  10m  v1.24.3
node1.example.com  Ready   <none>  10m  v1.26.1
node2.example.com  Ready   <none>  10m  v1.26.1
node3.example.com  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on DigitalOcean with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on DigitalOcean with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@@ -51,11 +51,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.21.0"
      version = "2.22.3"
    }
  }
}
@@ -63,13 +63,13 @@ terraform {

### Flatcar Linux Images

Flatcar Linux publishes DigitalOcean images, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via URLor file.
Flatcar Linux publishes DigitalOcean images, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via a URL or file.

[Download](https://www.flatcar-linux.org/releases/) the Flatcar Linux DigitalOcean bin image. Rename the image with the channel and version (to refer to these images over time) and [upload](https://cloud.digitalocean.com/images/custom_images) it as a custom image.
Choose a Flatcar Linux [release](https://www.flatcar-linux.org/releases/) from Flatcar's file [server](https://stable.release.flatcar-linux.net/amd64-usr/). Copy the URL to the `flatcar_production_digitalocean_image.bin.bz2`, import it into DigitalOcean, and name it as a custom image. Add a data reference to the image in Terraform:

```tf
data "digitalocean_image" "flatcar-stable-2303-4-0" {
  name = "flatcar-stable-2303.4.0.bin.bz2"
data "digitalocean_image" "flatcar-stable-3227-2-0" {
  name = "flatcar-stable-3227.2.0.bin.bz2"
}
```
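The imported image can then be referenced by ID when defining the cluster (a sketch; wiring it to the module's `os_image` variable is an assumption based on the module definition shown next):

```tf
module "nemo" {
  # ...
  os_image = data.digitalocean_image.flatcar-stable-3227-2-0.id
}
```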
@@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.26.1"

  # Digital Ocean
  cluster_name = "nemo"

@@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME            STATUS  ROLES   AGE  VERSION
10.132.110.130  Ready   <none>  10m  v1.24.3
10.132.115.81   Ready   <none>  10m  v1.24.3
10.132.124.107  Ready   <none>  10m  v1.24.3
10.132.110.130  Ready   <none>  10m  v1.26.1
10.132.115.81   Ready   <none>  10m  v1.26.1
10.132.124.107  Ready   <none>  10m  v1.26.1
```

List the pods.
@@ -1,6 +1,6 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Google Compute Engine with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.26.1 cluster on Google Compute Engine with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@@ -52,11 +52,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.10.0"
      version = "0.11.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "4.29.0"
      version = "4.37.0"
    }
  }
}
@@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -112,7 +112,7 @@ Plan the resources to be created.

```sh
$ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy.
Plan: 78 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.

@@ -125,7 +125,7 @@ module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)

Apply complete! Resources: 62 added, 0 changed, 0 destroyed.
Apply complete! Resources: 78 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.

@@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.24.3
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.26.1
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.26.1
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.26.1
```

List the pods.
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization

@@ -48,6 +48,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |
| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](advanced/arm64.md) | alpha |

## Documentation
@@ -61,7 +62,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"

@@ -99,9 +100,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.24.3
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.24.3
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.26.1
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.26.1
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.26.1
```

List the pods.
@@ -13,19 +13,37 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar

```
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"
  ...
}

module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.3"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.26.1"
  ...
}
```

Master is updated regularly, so it is recommended to [pin](https://www.terraform.io/docs/modules/sources.html) modules to a [release tag](https://github.com/poseidon/typhoon/releases) or [commit](https://github.com/poseidon/typhoon/commits/master) hash. Pinning ensures `terraform get --update` only fetches the desired version.
Main is updated regularly, so it is recommended to [pin](https://www.terraform.io/docs/modules/sources.html) modules to a [release tag](https://github.com/poseidon/typhoon/releases) or [commit](https://github.com/poseidon/typhoon/commits/main) hash. Pinning ensures `terraform get --update` only fetches the desired version.
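To move a cluster to a new pin, update the `ref` and re-fetch modules (a sketch of the standard Terraform workflow; nothing Typhoon-specific is assumed):

```
terraform get --update
terraform plan
```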
## Upgrades
## Terraform Versions

Typhoon modules support Terraform v0.13.x and higher. Poseidon publishes [providers](/topics/security/#terraform-providers) to the Terraform Provider Registry for automatic install via `terraform init`.
| Typhoon Release   | Terraform version                  |
|-------------------|------------------------------------|
| v1.21.2 - ?       | v0.13.x, v0.14.4+, v0.15.x, v1.0.x |
| v1.21.1 - v1.21.1 | v0.13.x, v0.14.4+, v0.15.x         |
| v1.20.2 - v1.21.0 | v0.13.x, v0.14.4+                  |
| v1.20.0 - v1.20.2 | v0.13.x                            |
| v1.18.8 - v1.19.4 | v0.12.26+, v0.13.x                 |
| v1.15.0 - v1.18.8 | v0.12.x                            |
| v1.10.3 - v1.15.0 | v0.11.x                            |
| v1.9.2 - v1.10.2  | v0.10.4+ or v0.11.x                |
| v1.7.3 - v1.9.1   | v0.10.x                            |
| v1.6.4 - v1.7.2   | v0.9.x                             |
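With a supported Terraform version installed, provider installation is automatic (a sketch; the provider versions come from each working directory's `required_providers` block):

```
terraform init   # installs hashicorp/* and poseidon/* providers from the registry
terraform plan
```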
## Cluster Upgrades

Typhoon recommends upgrading clusters using a blue-green replacement strategy and migrating workloads.

@@ -127,9 +145,99 @@ Typhoon supports multi-controller clusters, so it is possible to upgrade a clust

!!! warning
    Typhoon does not support or document node replacement as an upgrade strategy. It limits Typhoon's ability to make infrastructure and architectural changes between tagged releases.

### Upgrade terraform-provider-ct
## Node Configuration Updates

The [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin parses, validates, and converts Fedora CoreOS or Flatcar Linux Configs into Ignition user-data for provisioning instances. Since Typhoon v1.12.2+, the plugin can be updated in-place so that on apply, only workers will be replaced.
Typhoon worker instance groups (default workers and [worker pools](../advanced/worker-pools.md)) on AWS and Google Cloud gradually roll out replacement worker instances when configuration changes are applied.

### AWS

On AWS, worker instances belong to an auto-scaling group. When an auto-scaling group's launch configuration changes, an AWS [Instance Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) gradually replaces worker instances.

Instance refresh creates surge instances, waits for a warm-up period, then deletes old instances.
```diff
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/VARIANT/kubernetes?ref=VERSION"

  # AWS
  cluster_name = "tempest"
  ...

  # optional
  worker_count = 2
- worker_type  = "t3.small"
+ worker_type  = "t3a.small"

  # change from on-demand to spot
+ worker_price = "0.0309"

  # default is 30GB
+ disk_size = 50

  # change worker snippets
+ worker_snippets = [
+   file("butane/feature.yaml"),
+ ]
}
```
Applying edits to most worker fields will start an instance refresh:

* `worker_type`
* `disk_*`
* `worker_price` (i.e. spot)
* `worker_target_groups`
* `worker_snippets`

However, changing `os_stream`/`os_channel` or new AMIs becoming available will NOT change the launch configuration or trigger an Instance Refresh. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new AMIs.
!!! note
    Before Typhoon v1.26.1, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
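If a refresh must be forced outside Terraform (for example, to roll workers onto a newly released AMI), the AWS CLI can start one directly. A sketch, with the auto-scaling group name as a placeholder:

```
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name tempest-worker \
  --preferences MinHealthyPercentage=90
```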
### Google Cloud

On Google Cloud, worker instances belong to a [managed instance group](https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups). When a group's launch template changes, a [rolling update](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) gradually replaces worker instances.

The rolling update creates surge instances, waits for instances to be healthy, then deletes old instances.
```diff
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/VARIANT/kubernetes?ref=VERSION"

  # Google Cloud
  cluster_name = "yavin"
  ...

  # optional
  worker_count = 2
+ worker_type  = "n2-standard-2"
+ worker_preemptible = true

  # default is 30GB
+ disk_size = 50

  # change worker snippets
+ worker_snippets = [
+   file("butane/feature.yaml"),
+ ]
}
```
Applying edits to most worker fields will start a rolling update:

* `worker_type`
* `disk_*`
* `worker_preemptible` (i.e. spot)
* `worker_snippets`

However, changing `os_stream`/`os_channel` or new compute images becoming available will NOT change the launch template or update instances. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new images.
!!! note
    Before Typhoon v1.26.1, worker nodes only used new launch templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
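Similarly, a replacement can be forced outside Terraform with the gcloud CLI (a sketch; the managed instance group name and region are placeholders):

```
gcloud compute instance-groups managed rolling-action replace yavin-worker-group \
  --region us-central1 --max-surge 1 --max-unavailable 0
```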
## Upgrade poseidon/ct

The [poseidon/ct](https://github.com/poseidon/terraform-provider-ct) Terraform provider plugin parses, validates, and converts Butane Configs to Ignition user-data for provisioning instances. Since Typhoon v1.12.2+, the plugin can be updated in-place so that on apply, only workers will be replaced.

Update the version of the `ct` plugin in each Terraform working directory. Typhoon clusters managed in the working directory **must** be v1.12.2 or higher.
@@ -140,8 +248,8 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
-     version = "0.8.0"
+     version = "0.9.0"
-     version = "0.10.0"
+     version = "0.11.0"
    }
    ...
}

@@ -155,11 +263,11 @@ terraform init
terraform plan
```
Apply the change. Worker nodes' user-data will be changed and workers will be replaced. Rollout happens slightly differently on each platform:
Apply the change. If worker nodes' user-data is changed, workers will be replaced. Rollout happens slightly differently on each platform:

#### AWS

AWS creates a new worker ASG, then removes the old ASG. New workers join the cluster and old workers disappear. `terraform apply` will hang during this process.
See AWS node [config updates](#aws).

#### Azure
@@ -187,24 +295,4 @@ Expect downtime.

#### Google Cloud

Google Cloud creates a new worker template and edits the worker instance group instantly. Manually terminate workers and replacement workers will use the user-data.

## Terraform Versions

Terraform [v0.13](https://www.hashicorp.com/blog/announcing-hashicorp-terraform-0-13) introduced major changes to the provider plugin system. Terraform `init` can automatically install both `hashicorp` and `poseidon` provider plugins, eliminating the need to manually install plugin binaries.

Typhoon modules have been updated for v0.13.x. Poseidon publishes [providers](/topics/security/#terraform-providers) to the Terraform Provider Registry for usage with v0.13+.

| Typhoon Release   | Terraform version                  |
|-------------------|------------------------------------|
| v1.21.2 - ?       | v0.13.x, v0.14.4+, v0.15.x, v1.0.x |
| v1.21.1 - v1.21.1 | v0.13.x, v0.14.4+, v0.15.x         |
| v1.20.2 - v1.21.0 | v0.13.x, v0.14.4+                  |
| v1.20.0 - v1.20.2 | v0.13.x                            |
| v1.18.8 - v1.19.4 | v0.12.26+, v0.13.x                 |
| v1.15.0 - v1.18.8 | v0.12.x                            |
| v1.10.3 - v1.15.0 | v0.11.x                            |
| v1.9.2 - v1.10.2  | v0.10.4+ or v0.11.x                |
| v1.7.3 - v1.9.1   | v0.10.x                            |
| v1.6.4 - v1.7.2   | v0.9.x                             |

See Google Cloud node [config updates](#google-cloud).
@@ -81,6 +81,31 @@ Typhoon publishes Terraform providers to the Terraform Registry, GPG signed by 0

| ct | [github](https://github.com/poseidon/terraform-provider-ct) | [poseidon/ct](https://registry.terraform.io/providers/poseidon/ct/latest) |
| matchbox | [github](https://github.com/poseidon/terraform-provider-matchbox) | [poseidon/matchbox](https://registry.terraform.io/providers/poseidon/matchbox/latest) |
## kube-system

| Name | user | hostNet | privileged |
|----------------|--------|---------|------------|
| kube-apiserver | nobody | true | false |
| kube-controller-manager | nobody | true | false |
| kube-scheduler | nobody | true | false |
| coredns | NA | false | false |
| kube-proxy | root | true | true |
| cilium | root | true | true |
| calico | root | true | true |
| flannel | root | true | true |

| Name | priorityClassName |
|-------------------------|-------------------|
| kube-apiserver | system-cluster-critical |
| kube-controller-manager | system-cluster-critical |
| kube-scheduler | system-cluster-critical |
| coredns | system-cluster-critical |
| kube-proxy | system-node-critical |
| cilium | system-node-critical |
| calico | system-node-critical |
| flannel | system-node-critical |
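These properties can be spot-checked on a live cluster with plain kubectl (a sketch; no Typhoon-specific tooling is assumed):

```
kubectl get pods -n kube-system \
  -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName
```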
## Disclosures

If you find security issues, please email `security@psdn.io`. If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -75,10 +75,10 @@ resource "google_compute_instance_group" "controllers" {
  )
}

# TCP health check for apiserver
# Health check for kube-apiserver
resource "google_compute_health_check" "apiserver" {
  name        = "${var.cluster_name}-apiserver-tcp-health"
  description = "TCP health check for kube-apiserver"
  name        = "${var.cluster_name}-apiserver-health"
  description = "Health check for kube-apiserver"

  timeout_sec        = 5
  check_interval_sec = 5

@@ -86,7 +86,7 @@ resource "google_compute_health_check" "apiserver" {
  healthy_threshold   = 1
  unhealthy_threshold = 3

  tcp_health_check {
  ssl_health_check {
    port = "6443"
  }
}
@@ -1,10 +1,10 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
  etcd_servers = google_dns_record_set.etcds.*.name
  etcd_servers = [for fqdn in google_dns_record_set.etcds.*.name : trimsuffix(fqdn, ".")]
  networking   = var.networking
  network_mtu  = 1440
  pod_cidr     = var.pod_cidr
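For reference, `trimsuffix` strips the trailing dot that Cloud DNS appends to record FQDNs, presumably so etcd server names are passed as plain hostnames. A minimal sketch of the behavior (the record name is illustrative):

```tf
output "etcd_server" {
  # "yavin-etcd0.example.com." becomes "yavin-etcd0.example.com"
  value = trimsuffix("yavin-etcd0.example.com.", ".")
}
```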
@@ -9,15 +9,16 @@ systemd:
      [Unit]
      Description=etcd (System Container)
      Documentation=https://github.com/etcd-io/etcd
      Wants=network-online.target network.target
      Wants=network-online.target
      After=network-online.target
      [Service]
      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
      Type=exec
      ExecStartPre=/bin/mkdir -p /var/lib/etcd
      ExecStartPre=-/usr/bin/podman rm etcd
      ExecStart=/usr/bin/podman run --name etcd \
        --env-file /etc/etcd/etcd.env \
        --log-driver k8s-file \
        --network host \
        --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
        --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -53,7 +54,7 @@ systemd:
      Description=Kubelet (System Container)
      Wants=rpc-statd.service
      [Service]
      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
      ExecStartPre=/bin/mkdir -p /etc/cni/net.d
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
      ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -62,6 +63,7 @@ systemd:
      ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
      ExecStartPre=-/usr/bin/podman rm kubelet
      ExecStart=/usr/bin/podman run --name kubelet \
        --log-driver k8s-file \
        --privileged \
        --pid host \
        --network host \
@@ -81,27 +83,12 @@ systemd:
        --volume /var/run/lock:/var/run/lock:z \
        --volume /opt/cni/bin:/opt/cni/bin:z \
        $${KUBELET_IMAGE} \
        --anonymous-auth=false \
        --authentication-token-webhook \
        --authorization-mode=Webhook \
        --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
        --cgroup-driver=systemd \
        --cgroups-per-qos=true \
        --container-runtime=remote \
        --config=/etc/kubernetes/kubelet.yaml \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
        --enforce-node-allocatable=pods \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${cluster_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
        --healthz-port=0 \
        --kubeconfig=/var/lib/kubelet/kubeconfig \
        --node-labels=node.kubernetes.io/controller="true" \
        --pod-manifest-path=/etc/kubernetes/manifests \
        --read-only-port=0 \
        --resolv-conf=/run/systemd/resolve/resolv.conf \
        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
        --rotate-certificates \
        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
      ExecStop=-/usr/bin/podman stop kubelet
      Delegate=yes
      Restart=always
@@ -124,7 +111,7 @@ systemd:
        --volume /opt/bootstrap/assets:/assets:ro,Z \
        --volume /opt/bootstrap/apply:/apply:ro,Z \
        --entrypoint=/apply \
        quay.io/poseidon/kubelet:v1.24.3
        quay.io/poseidon/kubelet:v1.26.1
      ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
      ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -139,6 +126,32 @@ storage:
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.yaml
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
@@ -175,6 +188,11 @@ storage:
          echo "Retry applying manifests"
          sleep 5
        done
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -219,7 +237,6 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents: