Mirror of https://github.com/puppetmaster/typhoon.git
Synced 2025-08-02 15:31:35 +02:00

Compare commits: 167 commits (SHA1)

634deaf92e
cd699ee1aa
d29e6e3de1
be37170e59
0a6183f859
1888c272eb
880821391a
9314807dfd
56d71c0eca
9a28fe79a1
7255f82d71
6f4b4cc508
094811dc73
2a5a43f3a4
784f60f624
58e0ff9f5e
9e63f1247a
ecc9a73df4
1665cfb613
1919ff1355
8ebf31073c
867ca6a94e
819dd111ed
c16cc08375
64472d5bf7
ae82c57eee
fe23fca72b
4ef1908299
2272472d59
fc444d25f8
5feb4c63f7
501e6d25e0
1e76e1a200
4322857bec
e3bfa1c89b
47213a8e8f
8943c0f55e
44d84cf324
ec2e0b2fd7
6bd2a1a528
5f303212d2
bcee364b4c
3670ec7ed7
1e3af87392
2b3cd451d2
ff937b0b7e
4891a66e29
3ff6c2fdf7
517863c31a
76ebc08fd2
86e8484e0a
cf20e686c0
420ddd2154
435b3d4c88
f3c327007d
406fb444f0
1caea3388c
d04d88023d
a205922d06
b5ba65d4c2
e696fd2b22
3ff9b792ca
c4f1d2d1c8
a1d7b5cd1e
e7591030e0
f2bf5ac3fb
9cd1c5b17a
d6f739dedb
6bb7a36cf2
0afe9d65ed
11e540000f
d6cbcf9f96
ce52a2cd35
bd9a908125
0dc8740c77
a9b12b6bca
d419c58ab1
da76d32aba
f0e5982b3c
a8990b3045
f597f7cda3
b4857c123e
50bffaae8f
a193762eed
adf33df99b
29a005b7b4
ccebc2313d
1f86592d13
6a521257d0
26dbc7e91d
de668e696a
d3b2217444
937acc4b5a
b0a6dc8115
420ff6ff04
9b733d79c7
35a9e22b1f
0f38a6d405
a535581ef2
08d13e7215
3ff2d38fa5
d6d8eb8d79
f04e1d25a8
b68f8bb2a9
651151805d
8d2c8b8db6
675ac63159
b4c8b1729c
e82241169a
ffe4929ff6
88b3925318
29876dc85a
7e29e35457
3ee462a24c
f833b7205d
558e293f78
90782ea820
8dc7cc614c
74d4d56dbd
5abe84b520
951209d113
09751cc0e8
c14300f0be
37de9ca2ae
1786e34f33
5f612c82e2
e60a321185
5ad74883fe
4ad473cd3c
393a38deff
76d92e9c2d
275fc0f9e8
3fb59a3289
a31dbceac6
1dcf56127b
bf06412dfd
505818b7d5
0d27811265
c13d060b38
e87d5aabc3
760b4cd5ee
fcd8ff2b17
ef2d2af0c7
8e2027ed2d
52427a4271
20b76d6e00
6facfca4ed
ed8c6a5aeb
003af72cc8
b321b90a4f
e5d0e2d48b
679f8b878f
87a8278c9d
93b7f2554e
62d47ad3f0
6eb7861f96
ffbacbccf7
16c2785878
4a469513dd
47d8431fe0
256b87812e
ca6eef365f
c6794f1007
de6f27e119
6a9c32d3a9
a7e9e423f5
83236eab57

.github/dependabot.yaml (vendored): 3 changed lines
@@ -4,6 +4,3 @@ updates:
     directory: "/"
     schedule:
       interval: weekly
-    pull-request-branch-name:
-      separator: "-"
-    open-pull-requests-limit: 3

.github/workflows/publish.yaml (vendored, new file): 12 changed lines
@@ -0,0 +1,12 @@
+name: publish
+on:
+  push:
+    branches:
+      - release-docs
+jobs:
+  mkdocs:
+    name: mkdocs
+    uses: poseidon/matchbox/.github/workflows/mkdocs-pages.yaml@main
+    # Add content write for GitHub Pages
+    permissions:
+      contents: write

.gitignore (vendored, new file): 1 changed line
@@ -0,0 +1 @@
+site/

CHANGES.md: 256 changed lines
@@ -4,6 +4,262 @@ Notable changes between versions.

## Latest

* Update Cilium from v1.13.4 to [v1.14.0](https://github.com/cilium/cilium/releases/tag/v1.14.0)
* Update flannel from v0.22.0 to [v0.22.1](https://github.com/flannel-io/flannel/releases/tag/v0.22.1)

## v1.27.4

* Kubernetes [v1.27.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1274)

## v1.27.3

* Kubernetes [v1.27.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1273)
* Update etcd from v3.5.7 to [v3.5.9](https://github.com/etcd-io/etcd/releases/tag/v3.5.9)
* Update Cilium from v1.13.2 to [v1.13.4](https://github.com/cilium/cilium/releases/tag/v1.13.4)
* Update Calico from v3.25.1 to [v3.26.1](https://github.com/projectcalico/calico/releases/tag/v3.26.1)
* Update flannel from v0.21.2 to [v0.22.0](https://github.com/flannel-io/flannel/releases/tag/v0.22.0)

### AWS

* Allow upgrading AWS Terraform provider to v5.x ([#1353](https://github.com/poseidon/typhoon/pull/1353))

### Azure

* Enable boot diagnostics for controller and worker VMs ([#1351](https://github.com/poseidon/typhoon/pull/1351))

## v1.27.2

* Kubernetes [v1.27.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1272)

### Fedora CoreOS

* Update Butane Config version from v1.4.0 to v1.5.0
  * Require any custom Butane [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) update to v1.5.0
  * Require Fedora CoreOS `37.20230303.3.0` or newer (with Ignition v2.15)
  * Require poseidon/ct v0.13+ (**action required**)
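
As a concrete illustration of what the version bump means for snippet authors (the module name and snippet contents below are hypothetical, not part of this changeset), any custom Butane snippet passed via `controller_snippets` or `worker_snippets` should now declare `version: 1.5.0`:

```tf
module "tempest" {
  # ... other cluster configuration ...

  # Hypothetical snippet: custom Butane configs must now use v1.5.0
  controller_snippets = [
    <<-EOT
    variant: fcos
    version: 1.5.0
    storage:
      files:
        - path: /etc/motd
          mode: 0644
          contents:
            inline: |
              Hello from a Butane snippet
    EOT
  ]
}
```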

## v1.27.1

* Kubernetes [v1.27.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1271)
* Update etcd from v3.5.7 to [v3.5.8](https://github.com/etcd-io/etcd/releases/tag/v3.5.8)
* Update Cilium from v1.13.1 to [v1.13.2](https://github.com/cilium/cilium/releases/tag/v1.13.2)
* Update Calico from v3.25.0 to [v3.25.1](https://github.com/projectcalico/calico/releases/tag/v3.25.1)

## v1.26.3

* Kubernetes [v1.26.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1263)
* Update Cilium from v1.12.6 to [v1.13.1](https://github.com/cilium/cilium/releases/tag/v1.13.1)

### Bare-Metal

* Add `oem_type` variable for Flatcar Linux ([#1302](https://github.com/poseidon/typhoon/pull/1302))

## v1.26.2

* Kubernetes [v1.26.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1262)
* Update Cilium from v1.12.5 to [v1.12.6](https://github.com/cilium/cilium/releases/tag/v1.12.6)
* Update flannel from v0.20.2 to [v0.21.2](https://github.com/flannel-io/flannel/releases/tag/v0.21.2)

### Bare-Metal

* Add a `worker` module to allow customizing individual worker nodes ([#1295](https://github.com/poseidon/typhoon/pull/1295))

### Known Issues

* Fedora CoreOS [issue](https://github.com/coreos/fedora-coreos-tracker/issues/1423) fix is progressing through channels

## v1.26.1

* Kubernetes [v1.26.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261)
* Update etcd from v3.5.6 to [v3.5.7](https://github.com/etcd-io/etcd/releases/tag/v3.5.7)
* Update Cilium from v1.12.4 to [v1.12.5](https://github.com/cilium/cilium/releases/tag/v1.12.5)
* Update Calico from v3.24.5 to [v3.25.0](https://github.com/projectcalico/calico/releases/tag/v3.25.0)
* Update CoreDNS from v1.9.3 to [v1.9.4](https://github.com/poseidon/terraform-render-bootstrap/pull/341)

## v1.26.0

* Kubernetes [v1.26.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260)
* Update etcd from v3.5.5 to [v3.5.6](https://github.com/etcd-io/etcd/releases/tag/v3.5.6)
* Update Cilium from v1.12.3 to [v1.12.4](https://github.com/cilium/cilium/releases/tag/v1.12.4)
* Update flannel from v0.15.1 to [v0.20.2](https://github.com/flannel-io/flannel/releases/tag/v0.20.2)
* Reminder: Modules are no longer published to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) ([#1282](https://github.com/poseidon/typhoon/pull/1282))
  * See [#1282](https://github.com/poseidon/typhoon/pull/1282) and [v1.25.4](https://github.com/poseidon/typhoon/releases/tag/v1.25.4) for details

### AWS

* Migrate AWS launch configurations to launch templates ([#1275](https://github.com/poseidon/typhoon/pull/1275))
  * Starting Dec 31, 2022, AWS won't add new instance types/families to launch configurations

### Addons

* Update ingress-nginx from v1.3.1 to [v1.5.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.5.1)
* Update Prometheus from v2.40.1 to [v2.40.5](https://github.com/prometheus/prometheus/releases/tag/v2.40.5)
* Update node-exporter from v1.3.1 to [v1.5.0](https://github.com/prometheus/node_exporter/releases/tag/v1.5.0)
* Update kube-state-metrics from v2.6.0 to [v2.7.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.7.0)
* Update Grafana from v9.2.4 to [v9.3.1](https://github.com/grafana/grafana/releases/tag/v9.3.1)

## v1.25.4

* Kubernetes [v1.25.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254)
* Update Calico from v3.24.1 to [v3.24.5](https://github.com/projectcalico/calico/releases/tag/v3.24.5)
* Allow Kubelet kubeconfig to drain nodes, if desired ([#330](https://github.com/poseidon/terraform-render-bootstrap/pull/330))
* Re-enable Kubelet Graceful Node Shutdown ([#1261](https://github.com/poseidon/typhoon/pull/1261))
  * Introduce companion project [poseidon/scuttle](https://github.com/poseidon/scuttle)
* Link to new Mastodon account for release announcements
  * [@typhoon@fosstodon.org](https://fosstodon.org/@typhoon)
  * [@poseidon@fosstodon.org](https://fosstodon.org/@poseidon)
* Deprecate publishing to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon)
  * Typhoon docs have always shown Git-based module sources, not the Terraform Module Registry
  * Module usage should be `source = "git::https://github.com/poseidon/typhoon/...` not `source = poseidon/kubernetes/...`
  * Terraform's Module Registry requires subtree-mirroring Typhoon to special terraform-platform-kubernetes repos, supports only release versions (no commit SHAs or forks), and for historical reasons only ever contained Flatcar Linux modules (not Fedora CoreOS)
  * Note: this does not affect Terraform providers like `poseidon/matchbox` or `poseidon/ct`; the registry works well for providers
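
For anyone migrating, a minimal sketch of the two source styles (the cluster name and platform path are illustrative):

```tf
# Supported: Git-based module source, pinned to a release tag
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.4"
  # ...
}

# Deprecated: Terraform Module Registry source (no longer published)
# source = "poseidon/kubernetes/..."
```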

### Fedora CoreOS

* Remove unused `Wants=network.target` from `etcd-member.service` ([#1254](https://github.com/poseidon/typhoon/pull/1254))

### Cloud

* Remove defunct `delete-node.service` from worker node configurations ([#1256](https://github.com/poseidon/typhoon/pull/1256))

### Addons

* Update Prometheus from v2.39.1 to [v2.40.1](https://github.com/prometheus/prometheus/releases/tag/v2.40.1)
* Update Grafana from v9.1.7 to [v9.2.4](https://github.com/grafana/grafana/releases/tag/v9.2.4)

## v1.25.3

* Kubernetes [v1.25.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` for addons ([#1246](https://github.com/poseidon/typhoon/pull/1246))
* Update Cilium from v1.12.2 to [v1.12.3](https://github.com/cilium/cilium/releases/tag/v1.12.3) ([#1253](https://github.com/poseidon/typhoon/pull/1253))

### Azure

* Change default Azure `worker_type` from [`Standard_DS1_v2`](https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series) to [`Standard_D2as_v5`](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
  * Get 2 vCPU, 7 GiB, 12500 Mbps (vs 1 vCPU, 3.5 GiB, 750 Mbps)
  * Small increase in pay-as-you-go price ($53.29 -> $62.78)
  * Small increase in spot price ($5.64/mo -> $7.37/mo)
  * Change from Intel to AMD EPYC (`D2as_v5` is cheaper than `D2s_v5`)

### Flatcar Linux

* Add Flatcar Linux ARM64 support on Azure ([docs](https://typhoon.psdn.io/advanced/arm64/), [#1251](https://github.com/poseidon/typhoon/pull/1251))
* Switch from Azure Hypervisor gen1 to gen2 (**action required**) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
  * Run `az vm image terms accept --publisher kinvolk --offer flatcar-container-linux-free --plan stable-gen2`

### Docs

* Remove old docs note about not supporting ARM64 with Calico
  * Typhoon supports ARM64 with `cilium`, `calico`, and `flannel`

### Addons

* Update Prometheus from v2.38.0 to [v2.39.1](https://github.com/prometheus/prometheus/releases/tag/v2.39.1)
* Update Grafana from v9.1.6 to [v9.1.7](https://github.com/grafana/grafana/releases/tag/v9.1.7)

## v1.25.2

Kubernetes v1.25.2 was skipped since there were minimal changes upstream.

## v1.25.1

* Kubernetes [v1.25.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1251)
* Update etcd from v3.5.4 to [v3.5.5](https://github.com/etcd-io/etcd/releases/tag/v3.5.5)
* Update Cilium from v1.12.1 to [v1.12.2](https://github.com/cilium/cilium/releases/tag/v1.12.2)
* Update Calico from v3.23.3 to [v3.24.1](https://github.com/projectcalico/calico/releases/tag/v3.24.1)
* Revert Kubelet Graceful Node Shutdown on worker nodes ([#1227](https://github.com/poseidon/typhoon/pull/1227))
  * Fix issue where non-critical pods are left in Error/Completed state on node shutdown
* Remove feature flag disable workaround for [kubernetes/kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081)
  * Kubernetes [reverted](https://github.com/kubernetes/kubernetes/pull/112078) `LocalStorageCapacityIsolationFSQuotaMonitoring` back to alpha
* Remove workaround for preventing `search .` propagation in [kubernetes/kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135)
  * Upstream Kubernetes [fix](https://github.com/kubernetes/kubernetes/pull/112157)

### Addons

* Update kube-state-metrics from v2.5.0 to [v2.6.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.6.0)
* Update ingress-nginx from v1.3.0 to [v1.3.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.3.1)
* Update Grafana from v9.1.0 to [v9.1.6](https://github.com/grafana/grafana/releases/tag/v9.1.6)

## v1.25.0

* Kubernetes [v1.25.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1250)
* Disable the LocalStorageCapacityIsolationFSQuotaMonitoring feature gate ([#1220](https://github.com/poseidon/typhoon/pull/1220), fixes [kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081))
* Add workaround to revert adding "search ." to containers' `/etc/resolv.conf` ([#1224](https://github.com/poseidon/typhoon/pull/1224), fixes [kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135))
* Migrate most Kubelet flags to the KubeletConfiguration file ([#1219](https://github.com/poseidon/typhoon/pull/1219))
* Configure Kubelet Graceful Node Shutdown ([#1222](https://github.com/poseidon/typhoon/pull/1222))
  * Allow up to 30s for critical pods to gracefully shut down on node shutdown
  * Allow up to 15s for regular pods to gracefully shut down on node shutdown
  * Mark the node NotReady promptly on node shutdown
  * Lengthen the systemd inhibitor lock max delay from 5s to 45s
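
These settings land in the Kubelet's config file; the relevant excerpt from the `kubelet.yaml` rendered later in this diff (comments added here for explanation only):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total pod shutdown budget when the node shuts down
shutdownGracePeriod: 45s
# Portion of the budget reserved for critical pods;
# regular pods get the remainder (45s - 30s = 15s)
shutdownGracePeriodCriticalPods: 30s
```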

### Fedora CoreOS

* Change Podman `log-driver` from `journald` to `k8s-file` ([#1221](https://github.com/poseidon/typhoon/pull/1221))
  * Fix `etcd-member` and Kubelet systemd service log lines appearing twice in journal logs

## v1.24.4

* Kubernetes [v1.24.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244)
* Update CoreDNS from v1.8.6 to [v1.9.3](https://github.com/poseidon/terraform-render-bootstrap/pull/318)
* Update Cilium from v1.11.7 to [v1.12.1](https://github.com/cilium/cilium/releases/tag/v1.12.1)
* Update Calico from v3.23.1 to [v3.23.3](https://github.com/projectcalico/calico/releases/tag/v3.23.3)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` ([#1206](https://github.com/poseidon/typhoon/pull/1206))
* Remove use of the deprecated Terraform [template](https://registry.terraform.io/providers/hashicorp/template) provider ([#1194](https://github.com/poseidon/typhoon/pull/1194))

### Fedora CoreOS

* Remove ineffective `/etc/fedora-coreos/iptables-legacy.stamp` ([#1201](https://github.com/poseidon/typhoon/pull/1201))
  * Typhoon already uses iptables v1.8.7 (nf_tables) since FCOS 36
  * Staying on legacy iptables would have required a file in `/etc/coreos` instead

### Flatcar Linux

* Migrate Flatcar Linux from Ignition spec v2.3.0 to v3.3.0 ([#1196](https://github.com/poseidon/typhoon/pull/1196)) (**action required**)
  * Flatcar Linux 3185.0.0+ [supports](https://flatcar-linux.org/docs/latest/provisioning/ignition/specification/#ignition-v3) Ignition v3.x specs (which are rendered from Butane Configs, like Fedora CoreOS)
  * `poseidon/ct` v0.11.0 [supports](https://github.com/poseidon/terraform-provider-ct/pull/131) the `flatcar` Butane Config variant
  * Require poseidon/ct v0.11+ and Flatcar Linux 3185.0.0+
  * Please modify any Flatcar Linux snippets to use the [Butane Config](https://coreos.github.io/butane/config-flatcar-v1_0/) format (**action required**)

```yaml
variant: flatcar
version: 1.0.0
...
```

### AWS

* [Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) instances in the autoscaling group when the launch configuration changes ([#1208](https://github.com/poseidon/typhoon/pull/1208)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**; see the Terraform sketch after this list)
  * Worker launch configuration changes start an autoscaling group instance refresh to replace instances
  * Instance refresh creates surge instances, waits for a warm-up period, then deletes old instances
  * Changing `worker_type`, `disk_*`, `worker_price`, `worker_target_groups`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New AMIs or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new launch configurations were made in the same way, but not applied to instances unless manually replaced
* Rename the worker autoscaling group `${cluster_name}-worker` ([#1202](https://github.com/poseidon/typhoon/pull/1202))
  * Rename the launch configuration `${cluster_name}-worker` instead of a random id
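
The cumulative diff later in this document shows the resulting mechanism (by v1.26.0, combined with the launch template migration); a condensed sketch of the refresh block:

```tf
resource "aws_autoscaling_group" "workers" {
  # ...
  # Replace instances gradually whenever the launch template changes
  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 120
      min_healthy_percentage = 90
    }
  }
}
```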

### Google

* [Roll](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) instance template changes to worker managed instance groups ([#1207](https://github.com/poseidon/typhoon/pull/1207)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**)
  * Worker instance template changes roll out by gradually replacing instances
  * Automatic rollouts create surge instances, wait for health checks, then delete old instances (0 unavailable instances)
  * Changing `worker_type`, `disk_size`, `worker_preemptible`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New compute images or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new instance templates were made in the same way, but not applied to instances unless manually replaced
* Add health checks to worker managed instance groups (i.e. "autohealing") ([#1207](https://github.com/poseidon/typhoon/pull/1207); see the sketch after this list)
  * Use health checks to probe kube-proxy every 30s
  * Replace worker nodes that fail the health check 6 times (3 min)
* Name the `kube-apiserver` and `worker` health checks consistently ([#1207](https://github.com/poseidon/typhoon/pull/1207))
  * Use the names `${cluster_name}-apiserver-health` and `${cluster_name}-worker-health`
* Rename the managed instance group from `${cluster_name}-worker-group` to `${cluster_name}-worker` ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Fix a bug provisioning clusters with multiple controller nodes ([#1195](https://github.com/poseidon/typhoon/pull/1195))
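
A sketch of how such autohealing is typically wired in Terraform on Google Cloud; the resource names and the kube-proxy health port shown here are illustrative assumptions, not taken from this diff:

```tf
# Probe kube-proxy every 30s; 6 consecutive failures (3 min) mark the
# instance unhealthy and the managed instance group recreates it.
resource "google_compute_health_check" "worker" {
  name                = "example-worker-health" # illustrative name
  check_interval_sec  = 30
  timeout_sec         = 5
  healthy_threshold   = 1
  unhealthy_threshold = 6

  http_health_check {
    port         = 10256 # kube-proxy healthz port (assumption)
    request_path = "/healthz"
  }
}

resource "google_compute_region_instance_group_manager" "workers" {
  # ...
  auto_healing_policies {
    health_check      = google_compute_health_check.worker.id
    initial_delay_sec = 300
  }
}
```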

### Addons

* Update Prometheus from v2.37.0 to [v2.38.0](https://github.com/prometheus/prometheus/releases/tag/v2.38.0)
* Update Grafana from v9.0.3 to [v9.1.0](https://github.com/grafana/grafana/releases/tag/v9.1.0)

## v1.24.3

* Kubernetes [v1.24.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243)
* Update Cilium from v1.11.6 to [v1.11.7](https://github.com/cilium/cilium/releases/tag/v1.11.7)

README.md: 22 changed lines
@@ -1,4 +1,6 @@
-# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
+# Typhoon [](https://github.com/poseidon/typhoon/releases) [](https://github.com/poseidon/typhoon/stargazers) [](https://github.com/sponsors/poseidon) [](https://fosstodon.org/@typhoon)
+
+<img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
 
 Typhoon is a minimal and free Kubernetes distribution.

@@ -11,7 +13,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.27.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -48,6 +50,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
 
 | Platform | Operating System | Terraform Module | Status |
 |---------------|------------------|------------------|--------|
 | AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
+| Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](azure/flatcar-linux/kubernetes) | alpha |
 
 ## Documentation

@@ -62,7 +65,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.27.4"
 
   # Google Cloud
   cluster_name = "yavin"

@@ -101,9 +104,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.24.3
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.24.3
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.24.3
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.27.4
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.27.4
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.27.4
 ```
 
 List the pods.

@@ -156,5 +159,12 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i
   <img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" width="201px">
 </a>
 <br>
 <br>
+
+<a href="https://deploy.equinix.com/">
+  <img src="https://storage.googleapis.com/poseidon/equinix.png" width="201px">
+</a>
+<br>
+<br>
 
 If you'd like your company here, please contact dghubble at psdn.io.
@@ -24,7 +24,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:9.0.3
+          image: docker.io/grafana/grafana:9.3.1
           env:
             - name: GF_PATHS_CONFIG
               value: "/etc/grafana/custom.ini"

@@ -32,15 +32,22 @@ spec:
             - name: http
               containerPort: 8080
           livenessProbe:
-            httpGet:
-              path: /metrics
+            tcpSocket:
               port: 8080
-            initialDelaySeconds: 10
+            initialDelaySeconds: 30
+            periodSeconds: 10
+            timeoutSeconds: 1
+            failureThreshold: 5
+            successThreshold: 1
           readinessProbe:
             httpGet:
-              path: /api/health
-              scheme: HTTP
+              path: /robots.txt
               port: 8080
             initialDelaySeconds: 10
+            periodSeconds: 30
+            successThreshold: 1
+            timeoutSeconds: 5
           resources:
             requests:
               cpu: 100m
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.3.0
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public
@@ -21,7 +21,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.37.0
+          image: quay.io/prometheus/prometheus:v2.40.5
           args:
             - --web.listen-address=0.0.0.0:9090
            - --config.file=/etc/prometheus/prometheus.yaml
@@ -25,7 +25,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
         - name: kube-state-metrics
-          image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.5.0
+          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
           ports:
             - name: metrics
               containerPort: 8080
@@ -22,19 +22,19 @@ spec:
       securityContext:
         runAsNonRoot: true
         runAsUser: 65534
         runAsGroup: 65534
         fsGroup: 65534
         seccompProfile:
           type: RuntimeDefault
       hostNetwork: true
       hostPID: true
       containers:
         - name: node-exporter
-          image: quay.io/prometheus/node-exporter:v1.3.1
+          image: quay.io/prometheus/node-exporter:v1.5.0
           args:
             - --path.procfs=/host/proc
             - --path.sysfs=/host/sys
             - --path.rootfs=/host/root
             - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+)($|/)
             - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
           ports:
             - name: metrics
               containerPort: 9100

@@ -46,6 +46,9 @@ spec:
             limits:
               cpu: 200m
               memory: 100Mi
+          securityContext:
+            seLinuxOptions:
+              type: spc_t
           volumeMounts:
             - name: proc
               mountPath: /host/proc

@@ -55,9 +58,12 @@ spec:
               readOnly: true
             - name: root
               mountPath: /host/root
+              mountPropagation: HostToContainer
               readOnly: true
       tolerations:
-        - key: node-role.kubernetes.io/master
+        - key: node-role.kubernetes.io/controller
           operator: Exists
+        - key: node-role.kubernetes.io/control-plane
+          operator: Exists
         - key: node.kubernetes.io/not-ready
           operator: Exists
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.27.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: etcd-member.service

@@ -9,15 +9,16 @@ systemd:
         [Unit]
         Description=etcd (System Container)
         Documentation=https://github.com/etcd-io/etcd
-        Wants=network-online.target network.target
+        Wants=network-online.target
         After=network-online.target
         [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
+        Type=exec
         ExecStartPre=/bin/mkdir -p /var/lib/etcd
         ExecStartPre=-/usr/bin/podman rm etcd
         ExecStart=/usr/bin/podman run --name etcd \
           --env-file /etc/etcd/etcd.env \
+          --log-driver k8s-file \
           --network host \
           --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
           --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \

@@ -56,7 +57,7 @@ systemd:
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -66,6 +67,7 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/podman rm kubelet
         ExecStart=/usr/bin/podman run --name kubelet \
+          --log-driver k8s-file \
           --privileged \
           --pid host \
           --network host \

@@ -85,28 +87,13 @@ systemd:
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
           $${KUBELET_IMAGE} \
-          --anonymous-auth=false \
-          --authentication-token-webhook \
-          --authorization-mode=Webhook \
           --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-          --cgroup-driver=systemd \
-          --cgroups-per-qos=true \
-          --container-runtime=remote \
+          --config=/etc/kubernetes/kubelet.yaml \
           --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-          --enforce-node-allocatable=pods \
-          --client-ca-file=/etc/kubernetes/ca.crt \
-          --cluster_dns=${cluster_dns_service_ip} \
-          --cluster_domain=${cluster_domain_suffix} \
-          --healthz-port=0 \
           --kubeconfig=/var/lib/kubelet/kubeconfig \
           --node-labels=node.kubernetes.io/controller="true" \
-          --pod-manifest-path=/etc/kubernetes/manifests \
           --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
-          --read-only-port=0 \
-          --resolv-conf=/run/systemd/resolve/resolv.conf \
-          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-          --rotate-certificates \
-          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
         ExecStop=-/usr/bin/podman stop kubelet
         Delegate=yes
         Restart=always

@@ -129,7 +116,7 @@ systemd:
           --volume /opt/bootstrap/assets:/assets:ro,Z \
           --volume /opt/bootstrap/apply:/apply:ro,Z \
           --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.24.3
+          quay.io/poseidon/kubelet:v1.27.4
         ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

@@ -144,6 +131,33 @@ storage:
       contents:
         inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
     - path: /opt/bootstrap/layout
       mode: 0544
       contents:

@@ -180,6 +194,11 @@ storage:
           echo "Retry applying manifests"
           sleep 5
           done
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
       contents:
         inline: |

@@ -224,7 +243,6 @@ storage:
           ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
           ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
           ETCD_PEER_CLIENT_CERT_AUTH=true
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:
@@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {
 
   instance_type = var.controller_type
   ami           = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
-  user_data     = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data     = data.ct_config.controllers.*.rendered[count.index]
 
   # storage
   root_block_device {

@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.disk_size
     iops        = var.disk_iops
     encrypted   = true
+    tags        = {}
   }
 
   # network

@@ -46,41 +47,22 @@ resource "aws_instance" "controllers" {
   }
 }
 
-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Fedora CoreOS configs
-data "template_file" "controller-configs" {
-  count = var.controller_count
-
-  template = file("${path.module}/fcc/controller.yaml")
-
-  vars = {
+# Fedora CoreOS controllers
+data "ct_config" "controllers" {
+  count = var.controller_count
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}
@@ -3,13 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws      = ">= 2.23, <= 5.0"
-    template = "~> 2.2"
-    null     = ">= 2.1"
-
+    aws  = ">= 2.23, <= 6.0"
+    null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
@@ -1,3 +1,7 @@
+locals {
+  ami_id = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
+}
+
 data "aws_ami" "fedora-coreos" {
   most_recent = true
   owners      = ["125523088429"]
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: containerd.service

@@ -29,7 +29,7 @@ systemd:
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -39,6 +39,7 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/podman rm kubelet
         ExecStart=/usr/bin/podman run --name kubelet \
+          --log-driver k8s-file \
           --privileged \
           --pid host \
           --network host \

@@ -58,19 +59,9 @@ systemd:
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
           $${KUBELET_IMAGE} \
-          --anonymous-auth=false \
-          --authentication-token-webhook \
-          --authorization-mode=Webhook \
           --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-          --cgroup-driver=systemd \
-          --cgroups-per-qos=true \
-          --container-runtime=remote \
+          --config=/etc/kubernetes/kubelet.yaml \
           --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-          --enforce-node-allocatable=pods \
-          --client-ca-file=/etc/kubernetes/ca.crt \
-          --cluster_dns=${cluster_dns_service_ip} \
-          --cluster_domain=${cluster_domain_suffix} \
-          --healthz-port=0 \
           --kubeconfig=/var/lib/kubelet/kubeconfig \
           --node-labels=node.kubernetes.io/node \
           %{~ for label in split(",", node_labels) ~}

@@ -79,31 +70,13 @@ systemd:
           %{~ for taint in split(",", node_taints) ~}
           --register-with-taints=${taint} \
           %{~ endfor ~}
-          --pod-manifest-path=/etc/kubernetes/manifests \
-          --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
-          --read-only-port=0 \
-          --resolv-conf=/run/systemd/resolve/resolv.conf \
-          --rotate-certificates \
-          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+          --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID}
         ExecStop=-/usr/bin/podman stop kubelet
         Delegate=yes
         Restart=always
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes

@@ -113,6 +86,38 @@ storage:
       contents:
        inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
       contents:
         inline: |

@@ -136,7 +141,6 @@ storage:
           DefaultCPUAccounting=yes
           DefaultMemoryAccounting=yes
           DefaultBlockIOAccounting=yes
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:
@@ -3,12 +3,10 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws      = ">= 2.23, <= 5.0"
-    template = "~> 2.2"
-
+    aws = ">= 2.23, <= 6.0"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
@@ -1,6 +1,6 @@
 # Workers AutoScaling Group
 resource "aws_autoscaling_group" "workers" {
-  name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
+  name = "${var.name}-worker"
 
   # count
   desired_capacity = var.worker_count

@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
   vpc_zone_identifier = var.subnet_ids
 
   # template
-  launch_configuration = aws_launch_configuration.worker.name
+  launch_template {
+    id      = aws_launch_template.worker.id
+    version = aws_launch_template.worker.latest_version
+  }
 
   # target groups to which instances should be added
   target_group_arns = flatten([

@@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
     var.target_groups,
   ])
 
+  instance_refresh {
+    strategy = "Rolling"
+    preferences {
+      instance_warmup        = 120
+      min_healthy_percentage = 90
+    }
+  }
+
   lifecycle {
     # override the default destroy and replace update behavior
     create_before_destroy = true

@@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
 }
 
 # Worker template
-resource "aws_launch_configuration" "worker" {
-  image_id          = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
-  instance_type     = var.instance_type
-  spot_price        = var.spot_price > 0 ? var.spot_price : null
-  enable_monitoring = false
+resource "aws_launch_template" "worker" {
+  name_prefix   = "${var.name}-worker"
+  image_id      = local.ami_id
+  instance_type = var.instance_type
+  monitoring {
+    enabled = false
+  }
 
-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
-  root_block_device {
-    volume_type = var.disk_type
-    volume_size = var.disk_size
-    iops        = var.disk_iops
-    encrypted   = true
+  ebs_optimized = true
+  block_device_mappings {
+    device_name = "/dev/xvda"
+    ebs {
+      volume_type           = var.disk_type
+      volume_size           = var.disk_size
+      iops                  = var.disk_iops
+      encrypted             = true
+      delete_on_termination = true
+    }
   }
 
   # network
-  security_groups = var.security_groups
+  vpc_security_group_ids = var.security_groups
 
+  # spot
+  dynamic "instance_market_options" {
+    for_each = var.spot_price > 0 ? [1] : []
+    content {
+      market_type = "spot"
+      spot_options {
+        max_price = var.spot_price
+      }
+    }
+  }
+
   lifecycle {
     // Override the default destroy and replace update behavior

@@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
   }
 }
 
-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
-  strict   = true
-  snippets = var.snippets
-}
-
-# Worker Fedora CoreOS config
-data "template_file" "worker-config" {
-  template = file("${path.module}/fcc/worker.yaml")
-
-  vars = {
+# Fedora CoreOS worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     kubeconfig             = indent(10, var.kubeconfig)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
     node_labels            = join(",", var.node_labels)
     node_taints            = join(",", var.node_taints)
-  }
+  })
+  strict   = true
+  snippets = var.snippets
 }
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.27.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,4 +1,5 @@
+---
 variant: flatcar
 version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service

@@ -10,7 +11,7 @@ systemd:
         Requires=docker.service
         After=docker.service
         [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
         ExecStartPre=/usr/bin/docker run -d \
           --name etcd \
           --network host \

@@ -57,7 +58,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -83,26 +84,13 @@ systemd:
           -v /var/log:/var/log \
           -v /opt/cni/bin:/opt/cni/bin \
           $${KUBELET_IMAGE} \
-          --anonymous-auth=false \
-          --authentication-token-webhook \
-          --authorization-mode=Webhook \
           --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-          --cgroup-driver=systemd \
-          --container-runtime=remote \
+          --config=/etc/kubernetes/kubelet.yaml \
           --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-          --client-ca-file=/etc/kubernetes/ca.crt \
-          --cluster_dns=${cluster_dns_service_ip} \
-          --cluster_domain=${cluster_domain_suffix} \
-          --healthz-port=0 \
           --kubeconfig=/var/lib/kubelet/kubeconfig \
           --node-labels=node.kubernetes.io/controller="true" \
-          --pod-manifest-path=/etc/kubernetes/manifests \
           --provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
-          --read-only-port=0 \
-          --resolv-conf=/run/systemd/resolve/resolv.conf \
-          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-          --rotate-certificates \
-          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
         ExecStart=docker logs -f kubelet
         ExecStop=docker stop kubelet
         ExecStopPost=docker rm kubelet

@@ -121,7 +109,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         ExecStart=/usr/bin/docker run \
           -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
           -v /opt/bootstrap/assets:/assets:ro \

@@ -134,18 +122,42 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
   files:
     - path: /etc/kubernetes/kubeconfig
-      filesystem: root
       mode: 0644
       contents:
         inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
     - path: /opt/bootstrap/layout
-      filesystem: root
       mode: 0544
       contents:
         inline: |

@@ -168,7 +180,6 @@ storage:
           mv manifests-networking/* /opt/bootstrap/assets/manifests/
           rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
-      filesystem: root
       mode: 0544
       contents:
         inline: |

@@ -182,14 +193,17 @@ storage:
           echo "Retry applying manifests"
           sleep 5
           done
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
     - path: /etc/etcd/etcd.env
-      filesystem: root
       mode: 0644
       contents:
         inline: |
@@ -24,7 +24,7 @@ resource "aws_instance" "controllers" {
instance_type = var.controller_type

ami = local.ami_id
user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
user_data = data.ct_config.controllers.*.rendered[count.index]

# storage
root_block_device {
@@ -32,6 +32,7 @@ resource "aws_instance" "controllers" {
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
tags = {}
}

# network
@@ -47,41 +48,22 @@ resource "aws_instance" "controllers" {
}
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
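Note: the removed template_file indirection for etcd peer URLs is replaced above by a native for expression. A minimal sketch of what that expression evaluates to, assuming a hypothetical cluster named "demo" with dns_zone "example.com" and three controllers:

locals {
  # hypothetical inputs, for illustration only
  cluster_name     = "demo"
  dns_zone         = "example.com"
  controller_count = 3

  # same shape as the expression in the diff above
  etcd_initial_cluster = join(",", [
    for i in range(local.controller_count) : "etcd${i}=https://${local.cluster_name}-etcd${i}.${local.dns_zone}:2380"
  ])
  # => "etcd0=https://demo-etcd0.example.com:2380,etcd1=https://demo-etcd1.example.com:2380,etcd2=https://demo-etcd2.example.com:2380"
}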

@@ -3,13 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
aws = ">= 2.23, <= 5.0"
template = "~> 2.2"
null = ">= 2.1"

aws = ">= 2.23, <= 6.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}

@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@@ -29,7 +30,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -58,17 +59,9 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
@@ -77,12 +70,7 @@ systemd:
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID}
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -90,29 +78,46 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@@ -3,12 +3,10 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
aws = ">= 2.23, <= 5.0"
template = "~> 2.2"

aws = ">= 2.23, <= 6.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}
@@ -1,6 +1,6 @@
# Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" {
name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
name = "${var.name}-worker"

# count
desired_capacity = var.worker_count
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
vpc_zone_identifier = var.subnet_ids

# template
launch_configuration = aws_launch_configuration.worker.name
launch_template {
id = aws_launch_template.worker.id
version = aws_launch_template.worker.latest_version
}

# target groups to which instances should be added
target_group_arns = flatten([
@@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
var.target_groups,
])

instance_refresh {
strategy = "Rolling"
preferences {
instance_warmup = 120
min_healthy_percentage = 90
}
}

lifecycle {
# override the default destroy and replace update behavior
create_before_destroy = true
@@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}

# Worker template
resource "aws_launch_configuration" "worker" {
image_id = local.ami_id
instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false
resource "aws_launch_template" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
monitoring {
enabled = false
}

user_data = data.ct_config.worker-ignition.rendered
user_data = sensitive(base64encode(data.ct_config.worker.rendered))

# storage
root_block_device {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
ebs_optimized = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
delete_on_termination = true
}
}

# network
security_groups = var.security_groups
vpc_security_group_ids = var.security_groups

# spot
dynamic "instance_market_options" {
for_each = var.spot_price > 0 ? [1] : []
content {
market_type = "spot"
spot_options {
max_price = var.spot_price
}
}
}

lifecycle {
// Override the default destroy and replace update behavior
@@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
}
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}
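Note: the diff above moves workers from aws_launch_configuration (replaced wholesale on change) to aws_launch_template, whose versions let the autoscaling group roll instances in place via instance_refresh; launch templates also expect base64-encoded user data, hence sensitive(base64encode(...)). A minimal standalone sketch of the wiring, with placeholder AMI, sizes, and names rather than the module's variables:

resource "aws_launch_template" "worker" {
  name_prefix   = "demo-worker"
  image_id      = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.small"

  # launch templates take base64 user data; sensitive() keeps it out of plan output
  user_data = sensitive(base64encode("{}")) # placeholder Ignition config
}

resource "aws_autoscaling_group" "workers" {
  name               = "demo-worker"
  min_size           = 1
  max_size           = 3
  desired_capacity   = 1
  availability_zones = ["us-east-1a"] # placeholder; the module uses vpc_zone_identifier

  launch_template {
    id      = aws_launch_template.worker.id
    version = aws_launch_template.worker.latest_version
  }

  # pinning to latest_version makes template edits trigger a rolling refresh
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}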

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.27.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: etcd-member.service
@@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -53,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -62,6 +63,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@@ -81,27 +83,12 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@@ -124,7 +111,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.3
quay.io/poseidon/kubelet:v1.27.4
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -139,6 +126,33 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@@ -175,6 +189,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -219,7 +238,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -35,7 +35,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id

size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])

# storage
source_image_id = var.os_image
@@ -111,41 +111,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/fcc/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
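Note: here and in the matching worker modules, the deprecated template provider (data "template_file") gives way to Terraform's built-in templatefile() function, so the Butane template is rendered and validated by ct_config in one step and the template = "~> 2.2" provider requirement can be dropped. A minimal sketch of the function, assuming a hypothetical example.yaml containing "greeting: ${message}":

output "rendered" {
  # renders example.yaml with the given variables map
  value = templatefile("${path.module}/example.yaml", {
    message = "hello"
  })
  # => "greeting: hello"
}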

@@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@@ -3,13 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"
null = ">= 2.1"

azurerm = ">= 2.8, < 4.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.13"
}
}
}
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: containerd.service
@@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -35,6 +35,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@@ -54,51 +55,23 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@@ -108,6 +81,38 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -131,7 +136,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@@ -3,12 +3,10 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"

azurerm = ">= 2.8, < 4.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.13"
}
}
}
@@ -9,7 +9,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# instance name prefix for instances in the set
computer_name_prefix = "${var.name}-worker"
single_placement_group = false
custom_data = base64encode(data.ct_config.worker-ignition.rendered)
custom_data = base64encode(data.ct_config.worker.rendered)

# storage
source_image_id = var.os_image
@@ -70,24 +70,17 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
}
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Fedora CoreOS configs
data "template_file" "worker-config" {
template = file("${path.module}/fcc/worker.yaml")

vars = {
# Fedora CoreOS worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.27.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@@ -55,7 +56,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -80,25 +81,12 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -117,7 +105,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@@ -130,18 +118,42 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@@ -164,7 +176,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@@ -178,14 +189,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@@ -17,7 +17,9 @@ resource "azurerm_dns_a_record" "etcds" {
locals {
# Container Linux derivative
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}

# Controller availability set to spread controllers
@@ -41,7 +43,10 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id

size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])
boot_diagnostics {
# defaults to a managed storage account
}

# storage
os_disk {
@@ -53,21 +58,24 @@ resource "azurerm_linux_virtual_machine" "controllers" {

# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}

plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}

# network
network_interface_ids = [
azurerm_network_interface.controllers.*.id[count.index]
azurerm_network_interface.controllers[count.index].id
]

# Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
@@ -130,41 +138,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}
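Note: the static plan block above becomes a dynamic block gated on var.arch, since the arm64 ("corevm") Flatcar images are published without a marketplace plan. The for_each = condition ? [] : [1] idiom emits the nested block zero or one times. A stripped-down sketch of the idiom; required VM arguments are omitted here, so this is illustrative rather than applyable as-is:

variable "enable_plan" {
  type    = bool
  default = true
}

resource "azurerm_linux_virtual_machine" "example" {
  # ...required arguments elided for brevity...

  # an empty collection emits no plan block at all
  dynamic "plan" {
    for_each = var.enable_plan ? [1] : []
    content {
      publisher = "kinvolk"
      product   = "flatcar-container-linux-free"
      name      = "stable"
    }
  }
}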

@@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@@ -133,12 +133,15 @@ variable "worker_node_labels" {
default = []
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
variable "arch" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
description = "Container architecture (amd64 or arm64)"
default = "amd64"

validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}

variable "daemonset_tolerations" {
@@ -146,3 +149,11 @@ variable "daemonset_tolerations" {
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
default = []
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
@@ -3,13 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"
null = ">= 2.1"

azurerm = ">= 2.8, < 4.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}
@@ -21,4 +21,5 @@ module "workers" {
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets
node_labels = var.worker_node_labels
arch = var.arch
}
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@@ -27,7 +28,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -55,30 +56,17 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -86,29 +74,46 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {
@@ -100,6 +100,17 @@ variable "node_taints" {
default = []
}

variable "arch" {
type = string
description = "Container architecture (amd64 or arm64)"
default = "amd64"

validation {
condition = var.arch == "amd64" || var.arch == "arm64"
error_message = "The arch must be amd64 or arm64."
}
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
@@ -3,12 +3,10 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"

azurerm = ">= 2.8, < 4.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}
@@ -1,6 +1,8 @@
locals {
# flatcar-stable -> Flatcar Linux Stable
channel = split("-", var.os_image)[1]
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
}

# Workers scale set
@@ -14,7 +16,10 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# instance name prefix for instances in the set
computer_name_prefix = "${var.name}-worker"
single_placement_group = false
custom_data = base64encode(data.ct_config.worker-ignition.rendered)
custom_data = base64encode(data.ct_config.worker.rendered)
boot_diagnostics {
# defaults to a managed storage account
}

# storage
os_disk {
@@ -24,16 +29,19 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {

# Flatcar Container Linux
source_image_reference {
publisher = "Kinvolk"
offer = "flatcar-container-linux-free"
sku = local.channel
publisher = "kinvolk"
offer = "flatcar-container-linux-${local.offer_suffix}"
sku = local.urn
version = "latest"
}

plan {
name = local.channel
publisher = "kinvolk"
product = "flatcar-container-linux-free"
dynamic "plan" {
for_each = var.arch == "arm64" ? [] : [1]
content {
publisher = "kinvolk"
product = "flatcar-container-linux-${local.offer_suffix}"
name = local.urn
}
}

# Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
@@ -64,6 +72,9 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# eviction policy may only be set when priority is Spot
priority = var.priority
eviction_policy = var.priority == "Spot" ? "Delete" : null
termination_notification {
enabled = true
}
}

# Scale up or down to maintain desired number, tolerating deallocations.
@@ -88,24 +99,16 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
}
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux configs
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.27.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: etcd-member.service
@@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -52,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -61,6 +62,7 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
@@ -80,28 +82,13 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@@ -126,7 +113,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
ExecStartPre=-/usr/bin/podman rm bootstrap
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
@@ -149,6 +136,33 @@ storage:
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@@ -185,6 +199,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -229,7 +248,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -1,22 +0,0 @@
# Match each controller or worker to a profile

resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]

selector = {
mac = var.controllers.*.mac[count.index]
}
}

resource "matchbox_group" "worker" {
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers.*.name[count.index])
profile = matchbox_profile.workers.*.name[count.index]

selector = {
mac = var.workers.*.mac[count.index]
}
}
@@ -3,6 +3,13 @@ output "kubeconfig-admin" {
sensitive = true
}

# Outputs for workers

output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
sensitive = true
}
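Note: the new kubeconfig output exposes the kubelet kubeconfig so external worker pools can join the cluster without reading module internals. A hedged sketch of the intended wiring; the module sources and elided cluster arguments below are placeholders, not the documented interface:

module "mycluster" {
  source = "./cluster" # placeholder path
  # ...cluster arguments elided...
}

module "worker_pool" {
  source     = "./workers" # placeholder path
  kubeconfig = module.mycluster.kubeconfig
}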

# Outputs for debug

output "assets_dist" {
@@ -1,34 +1,26 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = [
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
"--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]

remote_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = [
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
]

cached_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
@@ -36,6 +28,16 @@ locals {
args = var.cached_install ? local.cached_args : local.remote_args
}

# Match a controller to a profile by MAC
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]

selector = {
mac = var.controllers.*.mac[count.index]
}
}

// Fedora CoreOS controller profile
resource "matchbox_profile" "controllers" {
@@ -46,62 +48,20 @@ resource "matchbox_profile" "controllers" {
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = length(var.controllers)

content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
count = length(var.controllers)

template = file("${path.module}/fcc/controller.yaml")
vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
}

// Fedora CoreOS worker profile
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])

kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
count = length(var.workers)

content = data.template_file.worker-configs.*.rendered[count.index]
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
count = length(var.workers)

template = file("${path.module}/fcc/worker.yaml")
vars = {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
}
}
|
||||
|
||||
|
@@ -15,7 +15,6 @@ resource "null_resource" "copy-controller-secrets" {
  # matchbox groups are written, causing a deadlock.
  depends_on = [
    matchbox_group.controller,
    matchbox_group.worker,
    module.bootstrap,
  ]

@@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
  }
}

# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
  count = length(var.workers)

  # Without depends_on, remote-exec could start and wait for machines before
  # matchbox groups are written, causing a deadlock.
  depends_on = [
    matchbox_group.controller,
    matchbox_group.worker,
  ]

  connection {
    type    = "ssh"
    host    = var.workers.*.domain[count.index]
    user    = "core"
    timeout = "60m"
  }

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
    ]
  }
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
  # Without depends_on, this remote-exec may start before the kubeconfig copy.
@@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
  # while no Kubelets are running.
  depends_on = [
    null_resource.copy-controller-secrets,
    null_resource.copy-worker-secrets,
  ]

  connection {
@@ -53,6 +53,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
  { name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
  default     = []
}

variable "snippets" {
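For orientation, a minimal sketch of how the `workers` list and the per-machine `snippets` map relate (the node2 entry and the snippet file path are illustrative, not from this change): snippet lists are keyed by the machine's `name`, and machines without an entry fall back to `[]`.

workers = [
  { name = "node2", mac = "52:54:00:b2:2f:86", domain = "node2.example.com" },
  { name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com" },
]

# keys must match worker names; node2 gets no snippets here
snippets = {
  "node3" = [file("./snippets/node3-custom.yaml")] # hypothetical snippet file
}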
@@ -3,14 +3,11 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    template = "~> 2.2"
    null     = ">= 2.1"

    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
      version = "~> 0.13"
    }

    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.5.0"
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
  units:
    - name: containerd.service
@@ -25,7 +25,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -34,6 +34,7 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --log-driver k8s-file \
          --privileged \
          --pid host \
          --network host \
@@ -53,33 +54,18 @@ systemd:
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in compact(split(",", node_labels)) ~}
          --node-labels=${label} \
          %{~ endfor ~}
          %{~ for taint in compact(split(",", node_taints)) ~}
          --register-with-taints=${taint} \
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --node-labels=node.kubernetes.io/node
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
@@ -104,6 +90,38 @@ storage:
      contents:
        inline:
          ${domain_name}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -127,7 +145,6 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
63 bare-metal/fedora-coreos/kubernetes/worker/matchbox.tf (new file)
@@ -0,0 +1,63 @@
locals {
  remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
  remote_initrd = [
    "--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
  ]

  remote_args = [
    "initrd=main",
    "coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
    "coreos.inst.install_dev=${var.install_disk}",
    "coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
  ]

  cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
  cached_initrd = [
    "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
  ]

  cached_args = [
    "initrd=main",
    "coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
    "coreos.inst.install_dev=${var.install_disk}",
    "coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
  ]

  kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
  initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
  args   = var.cached_install ? local.cached_args : local.remote_args
}

// Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
  name    = format("%s-%s", var.cluster_name, var.name)
  profile = matchbox_profile.worker.name
  selector = {
    mac = var.mac
  }
}

// Fedora CoreOS worker profile
resource "matchbox_profile" "worker" {
  name   = format("%s-worker-%s", var.cluster_name, var.name)
  kernel = local.kernel
  initrd = local.initrd
  args   = concat(local.args, var.kernel_args)

  raw_ignition = data.ct_config.worker.rendered
}

# Fedora CoreOS workers
data "ct_config" "worker" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    domain_name            = var.domain
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
    node_labels            = join(",", var.node_labels)
    node_taints            = join(",", var.node_taints)
  })
  strict   = true
  snippets = var.snippets
}
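Worth noting: the worker derives its cluster DNS address from `service_cidr` with Terraform's `cidrhost` function rather than receiving it from the cluster module. A quick sketch of what that evaluates to with the default CIDR (the 10th IP reserved for CoreDNS per the variable description):

locals {
  service_cidr = "10.3.0.0/16"
  # cidrhost(prefix, hostnum) returns the hostnum-th address within the prefix,
  # so the 10th host address is the CoreDNS service IP
  cluster_dns_service_ip = cidrhost(local.service_cidr, 10) # "10.3.0.10"
}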
27 bare-metal/fedora-coreos/kubernetes/worker/ssh.tf (new file)
@@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
  # Without depends_on, remote-exec could start and wait for machines before
  # matchbox groups are written, causing a deadlock.
  depends_on = [
    matchbox_group.worker,
  ]

  connection {
    type    = "ssh"
    host    = var.domain
    user    = "core"
    timeout = "60m"
  }

  provisioner "file" {
    content     = var.kubeconfig
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
    ]
  }
}
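A note on the explicit `depends_on` above: Terraform infers ordering only from attribute references, and nothing in this resource reads an attribute of the matchbox group, so the dependency is invisible unless declared. A minimal sketch of the hidden-dependency pattern, with placeholder resource names:

resource "matchbox_group" "example" {
  # serves Ignition when the machine PXE boots (an out-of-band effect,
  # not an attribute another resource could reference)
}

resource "null_resource" "wait-for-machine" {
  # no reference to matchbox_group.example exists, so ordering must be forced
  depends_on = [matchbox_group.example]
}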
111 bare-metal/fedora-coreos/kubernetes/worker/variables.tf (new file)
@@ -0,0 +1,111 @@
variable "cluster_name" {
|
||||
type = string
|
||||
description = "Must be set to the `cluster_name` of cluster"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
|
||||
variable "matchbox_http_endpoint" {
|
||||
type = string
|
||||
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
|
||||
}
|
||||
|
||||
variable "os_stream" {
|
||||
type = string
|
||||
description = "Fedora CoreOS release stream (e.g. stable, testing, next)"
|
||||
default = "stable"
|
||||
|
||||
validation {
|
||||
condition = contains(["stable", "testing", "next"], var.os_stream)
|
||||
error_message = "The os_stream must be stable, testing, or next."
|
||||
}
|
||||
}
|
||||
|
||||
variable "os_version" {
|
||||
type = string
|
||||
description = "Fedora CoreOS version to PXE and install (e.g. 31.20200310.3.0)"
|
||||
}
|
||||
|
||||
# machine
|
||||
|
||||
variable "name" {
|
||||
type = string
|
||||
description = "Unique name for the machine (e.g. node1)"
|
||||
}
|
||||
|
||||
variable "mac" {
|
||||
type = string
|
||||
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
|
||||
}
|
||||
|
||||
variable "domain" {
|
||||
type = string
|
||||
description = "Fully qualified domain name (e.g. node1.example.com)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "kubeconfig" {
|
||||
type = string
|
||||
description = "Must be set to `kubeconfig` output by cluster"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = string
|
||||
description = "SSH public key for user 'core'"
|
||||
}
|
||||
|
||||
variable "snippets" {
|
||||
type = list(string)
|
||||
description = "List of Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_labels" {
|
||||
type = list(string)
|
||||
description = "List of initial node labels"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_taints" {
|
||||
type = list(string)
|
||||
description = "List of initial node taints"
|
||||
default = []
|
||||
}
|
||||
|
||||
# optional
|
||||
|
||||
variable "cached_install" {
|
||||
type = bool
|
||||
description = "Whether Fedora CoreOS should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
|
||||
default = false
|
||||
}
|
||||
|
||||
variable "install_disk" {
|
||||
type = string
|
||||
description = "Disk device to install Fedora CoreOS (e.g. sda)"
|
||||
default = "sda"
|
||||
}
|
||||
|
||||
variable "kernel_args" {
|
||||
type = list(string)
|
||||
description = "Additional kernel arguments to provide at PXE boot."
|
||||
default = []
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "service_cidr" {
|
||||
type = string
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
|
||||
EOD
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
type = string
|
||||
default = "cluster.local"
|
||||
}
|
17 bare-metal/fedora-coreos/kubernetes/worker/versions.tf (new file)
@@ -0,0 +1,17 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.13"
    }
    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.5.0"
    }
  }
}
30 bare-metal/fedora-coreos/kubernetes/workers.tf (new file)
@@ -0,0 +1,30 @@
module "workers" {
|
||||
count = length(var.workers)
|
||||
source = "./worker"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
|
||||
# metal
|
||||
matchbox_http_endpoint = var.matchbox_http_endpoint
|
||||
os_stream = var.os_stream
|
||||
os_version = var.os_version
|
||||
|
||||
# machine
|
||||
name = var.workers[count.index].name
|
||||
mac = var.workers[count.index].mac
|
||||
domain = var.workers[count.index].domain
|
||||
|
||||
# configuration
|
||||
kubeconfig = module.bootstrap.kubeconfig-kubelet
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
service_cidr = var.service_cidr
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
|
||||
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
|
||||
snippets = lookup(var.snippets, var.workers[count.index].name, [])
|
||||
|
||||
# optional
|
||||
cached_install = var.cached_install
|
||||
install_disk = var.install_disk
|
||||
kernel_args = var.kernel_args
|
||||
}
|
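Because workers are now a standalone submodule, an additional machine can in principle be attached to an existing cluster by instantiating the worker module directly. A sketch under assumed values (the module source path, cluster name, OS version, and machine details are illustrative; the `kubeconfig` wiring follows the variable description above):

module "extra-worker" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker"

  cluster_name           = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  os_stream              = "stable"
  os_version             = "38.20230709.3.0" # hypothetical version

  # machine
  name   = "node4"
  mac    = "52:54:00:ab:cd:ef"
  domain = "node4.example.com"

  # configuration: wire in outputs from the cluster module
  kubeconfig         = module.mercury.kubeconfig
  ssh_authorized_key = "ssh-ed25519 AAAA..."
}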
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.27.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

  cluster_name = var.cluster_name
  api_servers  = [var.k8s_domain_name]
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -63,7 +64,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -88,26 +89,13 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
        ExecStart=docker logs -f kubelet
        ExecStop=docker stop kubelet
        ExecStopPost=docker rm kubelet
@@ -126,7 +114,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -139,21 +127,44 @@ systemd:
storage:
  directories:
    - path: /var/lib/etcd
      filesystem: root
      mode: 0700
      overwrite: true
    - path: /etc/kubernetes
      filesystem: root
      mode: 0755
  files:
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline:
          ${domain_name}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -176,7 +187,6 @@ storage:
          mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -190,14 +200,17 @@ storage:
          echo "Retry applying manifests"
          sleep 5
          done
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/etcd/etcd.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: installer.service
@@ -25,16 +26,16 @@ systemd:
storage:
  files:
    - path: /opt/installer
      filesystem: root
      mode: 0500
      contents:
        inline: |
          #!/bin/bash -ex
          curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
          curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
          flatcar-install \
            -d ${install_disk} \
            -C ${os_channel} \
            -V ${os_version} \
            ${oem_flag} \
            ${baseurl_flag} \
            -i ignition.json
          udevadm settle
@@ -1,35 +0,0 @@
resource "matchbox_group" "install" {
  count = length(var.controllers) + length(var.workers)

  name = format("install-%s", concat(var.controllers.*.name, var.workers.*.name)[count.index])

  # pick Matchbox profile (Flatcar upstream or Matchbox image cache)
  profile = var.cached_install ? matchbox_profile.cached-flatcar-install.*.name[count.index] : matchbox_profile.flatcar-install.*.name[count.index]

  selector = {
    mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
  }
}

resource "matchbox_group" "controller" {
  count   = length(var.controllers)
  name    = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
  profile = matchbox_profile.controllers.*.name[count.index]

  selector = {
    mac = var.controllers[count.index].mac
    os  = "installed"
  }
}

resource "matchbox_group" "worker" {
  count   = length(var.workers)
  name    = format("%s-%s", var.cluster_name, var.workers[count.index].name)
  profile = matchbox_profile.workers.*.name[count.index]

  selector = {
    mac = var.workers[count.index].mac
    os  = "installed"
  }
}

@@ -1,142 +1,97 @@
locals {
  # flatcar-stable -> stable channel
  channel = split("-", var.os_channel)[1]
}

// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
  count = length(var.controllers) + length(var.workers)
  name  = format("%s-flatcar-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])

  kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"

  initrd = [
  remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
  remote_initrd = [
    "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
  ]

  args = flatten([
  args = [
    "initrd=flatcar_production_pxe_image.cpio.gz",
    "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "flatcar.first_boot=yes",
    "console=tty0",
    "console=ttyS0",
    var.kernel_args,
  ])
  ]

  container_linux_config = data.template_file.install-configs.*.rendered[count.index]
}

// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-install" {
  count = length(var.controllers) + length(var.workers)
  name  = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])

  kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"

  initrd = [
  cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
  cached_initrd = [
    "/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
  ]

  args = flatten([
    "initrd=flatcar_production_pxe_image.cpio.gz",
    "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "flatcar.first_boot=yes",
    "console=tty0",
    "console=ttyS0",
    var.kernel_args,
  ])

  container_linux_config = data.template_file.cached-install-configs.*.rendered[count.index]
  kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
  initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}

data "template_file" "install-configs" {
  count = length(var.controllers) + length(var.workers)
# Match controllers to install profiles by MAC
resource "matchbox_group" "install" {
  count = length(var.controllers)

  template = file("${path.module}/cl/install.yaml")
  name    = format("install-%s", var.controllers[count.index].name)
  profile = matchbox_profile.install[count.index].name
  selector = {
    mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
  }
}

  vars = {
// Flatcar Linux install
resource "matchbox_profile" "install" {
  count = length(var.controllers)

  name   = format("%s-install-%s", var.cluster_name, var.controllers.*.name[count.index])
  kernel = local.kernel
  initrd = local.initrd
  args   = concat(local.args, var.kernel_args)

  raw_ignition = data.ct_config.install[count.index].rendered
}

# Flatcar Linux install
data "ct_config" "install" {
  count = length(var.controllers)

  content = templatefile("${path.module}/butane/install.yaml", {
    os_channel         = local.channel
    os_version         = var.os_version
    ignition_endpoint  = format("%s/ignition", var.matchbox_http_endpoint)
    mac                = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
    install_disk       = var.install_disk
    ssh_authorized_key = var.ssh_authorized_key
    oem_flag           = var.oem_type != "" ? "-o ${var.oem_type}" : ""
    # only cached profile adds -b baseurl
    baseurl_flag = ""
  }
    baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
  })
  strict = true
  install_snippets = lookup(var.install_snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "cached-install-configs" {
  count = length(var.controllers) + length(var.workers)

  template = file("${path.module}/cl/install.yaml")

  vars = {
    os_channel         = local.channel
    os_version         = var.os_version
    ignition_endpoint  = format("%s/ignition", var.matchbox_http_endpoint)
    install_disk       = var.install_disk
    ssh_authorized_key = var.ssh_authorized_key
    # profile uses -b baseurl to install from matchbox cache
    baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
# Match each controller by MAC
resource "matchbox_group" "controller" {
  count   = length(var.controllers)
  name    = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
  profile = matchbox_profile.controllers[count.index].name
  selector = {
    mac = var.controllers[count.index].mac
    os  = "installed"
  }
}


// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
  count = length(var.controllers)
  name  = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
  raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
  raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
  count = length(var.controllers)
  content = data.template_file.controller-configs.*.rendered[count.index]
  strict = true
  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
  count = length(var.controllers)

  template = file("${path.module}/cl/controller.yaml")

  vars = {
  content = templatefile("${path.module}/butane/controller.yaml", {
    domain_name            = var.controllers.*.domain[count.index]
    etcd_name              = var.controllers.*.name[count.index]
    etcd_initial_cluster   = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
    cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
    cluster_domain_suffix  = var.cluster_domain_suffix
    ssh_authorized_key     = var.ssh_authorized_key
  }
}

// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
  count = length(var.workers)
  name  = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
  raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
  count = length(var.workers)
  content = data.template_file.worker-configs.*.rendered[count.index]
  })
  strict = true
  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
  count = length(var.workers)

  template = file("${path.module}/cl/worker.yaml")

  vars = {
    domain_name            = var.workers.*.domain[count.index]
    cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
    cluster_domain_suffix  = var.cluster_domain_suffix
    ssh_authorized_key     = var.ssh_authorized_key
    node_labels            = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
    node_taints            = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
  }
  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

@@ -16,7 +16,6 @@ resource "null_resource" "copy-controller-secrets" {
  depends_on = [
    matchbox_group.install,
    matchbox_group.controller,
    matchbox_group.worker,
    module.bootstrap,
  ]

@@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
  }
}

# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
  count = length(var.workers)

  # Without depends_on, remote-exec could start and wait for machines before
  # matchbox groups are written, causing a deadlock.
  depends_on = [
    matchbox_group.install,
    matchbox_group.controller,
    matchbox_group.worker,
  ]

  connection {
    type    = "ssh"
    host    = var.workers.*.domain[count.index]
    user    = "core"
    timeout = "60m"
  }

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
    ]
  }
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
  # Without depends_on, this remote-exec may start before the kubeconfig copy.
@@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
  # while no Kubelets are running.
  depends_on = [
    null_resource.copy-controller-secrets,
    null_resource.copy-worker-secrets,
  ]

  connection {
@@ -99,4 +66,3 @@
    ]
  }
}

@@ -52,6 +52,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
  { name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
  default     = []
}

variable "snippets" {
@@ -60,6 +61,12 @@ variable "snippets" {
  default = {}
}

variable "install_snippets" {
  type        = map(list(string))
  description = "Map from machine names to lists of Container Linux Config snippets to run during install phase"
  default     = {}
}

variable "worker_node_labels" {
  type        = map(list(string))
  description = "Map from worker names to lists of initial node labels"
@@ -155,6 +162,17 @@ variable "enable_aggregation" {
  default = true
}

variable "oem_type" {
  type        = string
  description = <<EOD
An OEM type to install with flatcar-install. Find available types by looking for Flatcar image files
ending in `image.bin.bz2`. The OEM identifier is contained in the filename.
E.g., `flatcar_production_vmware_raw_image.bin.bz2` leads to `vmware_raw`.
See: https://www.flatcar.org/docs/latest/installing/bare-metal/installing-to-disk/#choose-a-channel
EOD
  default     = ""
}
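To make the `oem_type` plumbing concrete, a small sketch (not from this change) of how the install config turns the variable into a flatcar-install flag; `vmware_raw` is the example identifier from the description above:

locals {
  oem_type = "vmware_raw"
  # an empty string means no -o flag is passed to flatcar-install
  oem_flag = local.oem_type != "" ? "-o ${local.oem_type}" : "" # "-o vmware_raw"
}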

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
@@ -3,14 +3,11 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    template = "~> 2.2"
    null     = ">= 2.1"

    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
    }

    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.5.0"
@@ -0,0 +1,47 @@
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: installer.service
      enabled: true
      contents: |
        [Unit]
        Requires=network-online.target
        After=network-online.target
        [Service]
        Type=simple
        ExecStart=/opt/installer
        [Install]
        WantedBy=multi-user.target
    # Avoid using the standard SSH port so terraform apply cannot SSH until
    # post-install. But admins may SSH to debug disk install problems.
    # After install, sshd will use port 22 and users/terraform can connect.
    - name: sshd.socket
      dropins:
        - name: 10-sshd-port.conf
          contents: |
            [Socket]
            ListenStream=
            ListenStream=2222
storage:
  files:
    - path: /opt/installer
      mode: 0500
      contents:
        inline: |
          #!/bin/bash -ex
          curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
          flatcar-install \
            -d ${install_disk} \
            -C ${os_channel} \
            -V ${os_version} \
            ${oem_flag} \
            ${baseurl_flag} \
            -i ignition.json
          udevadm settle
          systemctl reboot
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "${ssh_authorized_key}"
@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: docker.service
@@ -35,7 +36,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -63,17 +64,9 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/node \
@@ -83,11 +76,7 @@ systemd:
          %{~ for taint in compact(split(",", node_taints)) ~}
          --register-with-taints=${taint} \
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --node-labels=node.kubernetes.io/node
        ExecStart=docker logs -f kubelet
        ExecStop=docker stop kubelet
        ExecStopPost=docker rm kubelet
@@ -99,17 +88,46 @@ systemd:
storage:
  directories:
    - path: /etc/kubernetes
      filesystem: root
      mode: 0755
  files:
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline:
          ${domain_name}
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
89 bare-metal/flatcar-linux/kubernetes/worker/matchbox.tf (new file)
@@ -0,0 +1,89 @@
locals {
  # flatcar-stable -> stable channel
  channel = split("-", var.os_channel)[1]

  remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
  remote_initrd = [
    "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
  ]
  args = flatten([
    "initrd=flatcar_production_pxe_image.cpio.gz",
    "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "flatcar.first_boot=yes",
    var.kernel_args,
  ])

  cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
  cached_initrd = [
    "/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
  ]

  kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
  initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}

# Match machine to an install profile by MAC
resource "matchbox_group" "install" {
  name    = format("install-%s", var.name)
  profile = matchbox_profile.install.name
  selector = {
    mac = var.mac
  }
}

// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "install" {
  name   = format("%s-install-%s", var.cluster_name, var.name)
  kernel = local.kernel
  initrd = local.initrd
  args   = local.args

  raw_ignition = data.ct_config.install.rendered
}

# Flatcar Linux install
data "ct_config" "install" {
  content = templatefile("${path.module}/butane/install.yaml", {
    os_channel         = local.channel
    os_version         = var.os_version
    ignition_endpoint  = format("%s/ignition", var.matchbox_http_endpoint)
    mac                = var.mac
    install_disk       = var.install_disk
    ssh_authorized_key = var.ssh_authorized_key
    oem_flag           = var.oem_type != "" ? "-o ${var.oem_type}" : ""
    # only cached profile adds -b baseurl
    baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
  })
  strict   = true
  snippets = var.install_snippets
}

# Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
  name    = format("%s-%s", var.cluster_name, var.name)
  profile = matchbox_profile.worker.name
  selector = {
    mac = var.mac
    os  = "installed"
  }
}

// Flatcar Linux Worker profile
resource "matchbox_profile" "worker" {
  name         = format("%s-worker-%s", var.cluster_name, var.name)
  raw_ignition = data.ct_config.worker.rendered
}

# Flatcar Linux workers
data "ct_config" "worker" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    domain_name            = var.domain
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
    node_labels            = join(",", var.node_labels)
    node_taints            = join(",", var.node_taints)
  })
  strict   = true
  snippets = var.snippets
}
27 bare-metal/flatcar-linux/kubernetes/worker/ssh.tf (new file)
@@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
  # Without depends_on, remote-exec could start and wait for machines before
  # matchbox groups are written, causing a deadlock.
  depends_on = [
    matchbox_group.install,
    matchbox_group.worker,
  ]

  connection {
    type    = "ssh"
    host    = var.domain
    user    = "core"
    timeout = "60m"
  }

  provisioner "file" {
    content     = var.kubeconfig
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
    ]
  }
}
132 bare-metal/flatcar-linux/kubernetes/worker/variables.tf (new file)
@@ -0,0 +1,132 @@
variable "cluster_name" {
|
||||
type = string
|
||||
description = "Must be set to the `cluster_name` of cluster"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
|
||||
variable "matchbox_http_endpoint" {
|
||||
type = string
|
||||
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
|
||||
}
|
||||
|
||||
variable "os_channel" {
|
||||
type = string
|
||||
description = "Channel for a Flatcar Linux (flatcar-stable, flatcar-beta, flatcar-alpha)"
|
||||
|
||||
validation {
|
||||
condition = contains(["flatcar-stable", "flatcar-beta", "flatcar-alpha"], var.os_channel)
|
||||
error_message = "The os_channel must be flatcar-stable, flatcar-beta, or flatcar-alpha."
|
||||
}
|
||||
}
|
||||
|
||||
variable "os_version" {
|
||||
type = string
|
||||
description = "Version of Flatcar Linux to PXE and install (e.g. 2079.5.1)"
|
||||
}
|
||||
|
||||
# machine
|
||||
|
||||
variable "name" {
|
||||
type = string
|
||||
description = "Unique name for the machine (e.g. node1)"
|
||||
}
|
||||
|
||||
variable "mac" {
|
||||
type = string
|
||||
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
|
||||
}
|
||||
|
||||
variable "domain" {
|
||||
type = string
|
||||
description = "Fully qualified domain name (e.g. node1.example.com)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "kubeconfig" {
|
||||
type = string
|
||||
description = "Must be set to `kubeconfig` output by cluster"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = string
|
||||
description = "SSH public key for user 'core'"
|
||||
}
|
||||
|
||||
variable "snippets" {
|
||||
type = list(string)
|
||||
description = "List of Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "install_snippets" {
|
||||
type = list(string)
|
||||
description = "List of Butane snippets to run with the install command"
|
||||
default = []
|
||||
}
|
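The per-worker `install_snippets` list lets a machine customize the install phase itself (partitioning, RAID, and the like), separately from the runtime `snippets`. A sketch of passing one in; the snippet filename and its contents are assumptions for illustration:

module "worker" {
  source = "./worker"
  # ...

  # Butane snippet merged into the install-phase config
  install_snippets = [
    file("${path.module}/snippets/raid0.yaml") # hypothetical snippet file
  ]
}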
variable "node_labels" {
  type        = list(string)
  description = "List of initial node labels"
  default     = []
}

variable "node_taints" {
  type        = list(string)
  description = "List of initial node taints"
  default     = []
}

# optional

variable "download_protocol" {
  type        = string
  description = "Protocol iPXE should use to download the kernel and initrd. Defaults to https, which requires iPXE compiled with crypto support. Unused if cached_install is true."
  default     = "https"
}

variable "cached_install" {
  type        = bool
  description = "Whether Flatcar Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
  default     = false
}

variable "install_disk" {
  type        = string
  default     = "/dev/sda"
  description = "Disk device to which the install profiles should install Flatcar Linux (e.g. /dev/sda)"
}

variable "kernel_args" {
  type        = list(string)
  description = "Additional kernel arguments to provide at PXE boot."
  default     = []
}

variable "oem_type" {
  type        = string
  default     = ""
  description = "An OEM type to install with flatcar-install."
}

# unofficial, undocumented, unsupported

variable "service_cidr" {
  type        = string
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
  default     = "10.3.0.0/16"
}

variable "cluster_domain_suffix" {
  type        = string
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
  default     = "cluster.local"
}
16 bare-metal/flatcar-linux/kubernetes/worker/versions.tf (new file)
@@ -0,0 +1,16 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"
    }
    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.5.0"
    }
  }
}
34 bare-metal/flatcar-linux/kubernetes/workers.tf (new file)
@@ -0,0 +1,34 @@
module "workers" {
|
||||
count = length(var.workers)
|
||||
source = "./worker"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
|
||||
# metal
|
||||
matchbox_http_endpoint = var.matchbox_http_endpoint
|
||||
os_channel = var.os_channel
|
||||
os_version = var.os_version
|
||||
|
||||
# machine
|
||||
name = var.workers[count.index].name
|
||||
mac = var.workers[count.index].mac
|
||||
domain = var.workers[count.index].domain
|
||||
|
||||
# configuration
|
||||
kubeconfig = module.bootstrap.kubeconfig-kubelet
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
service_cidr = var.service_cidr
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
|
||||
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
|
||||
snippets = lookup(var.snippets, var.workers[count.index].name, [])
|
||||
install_snippets = lookup(var.install_snippets, var.workers[count.index].name, [])
|
||||
|
||||
# optional
|
||||
download_protocol = var.download_protocol
|
||||
cached_install = var.cached_install
|
||||
install_disk = var.install_disk
|
||||
kernel_args = var.kernel_args
|
||||
oem_type = var.oem_type
|
||||
}
|
||||
|
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.24.3 (upstream)
* Kubernetes v1.27.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
  units:
    - name: etcd-member.service
@@ -9,15 +9,16 @@ systemd:
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/etcd-io/etcd
        Wants=network-online.target network.target
        Wants=network-online.target
        After=network-online.target
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
        ExecStart=/usr/bin/podman run --name etcd \
          --env-file /etc/etcd/etcd.env \
          --log-driver k8s-file \
          --network host \
          --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
          --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -54,7 +55,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -64,6 +65,7 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --log-driver k8s-file \
          --privileged \
          --pid host \
          --network host \
@@ -83,28 +85,13 @@ systemd:
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          $${KUBELET_IMAGE} \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
@@ -136,7 +123,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.24.3
          quay.io/poseidon/kubelet:v1.27.4
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -146,6 +133,33 @@ storage:
    - path: /etc/kubernetes
    - path: /opt/bootstrap
  files:
    - path: /etc/kubernetes/kubelet.yaml
      mode: 0644
      contents:
        inline: |
          apiVersion: kubelet.config.k8s.io/v1beta1
          kind: KubeletConfiguration
          authentication:
            anonymous:
              enabled: false
            webhook:
              enabled: true
            x509:
              clientCAFile: /etc/kubernetes/ca.crt
          authorization:
            mode: Webhook
          cgroupDriver: systemd
          clusterDNS:
            - ${cluster_dns_service_ip}
          clusterDomain: ${cluster_domain_suffix}
          healthzPort: 0
          rotateCertificates: true
          shutdownGracePeriod: 45s
          shutdownGracePeriodCriticalPods: 30s
          staticPodPath: /etc/kubernetes/manifests
          readOnlyPort: 0
          resolvConf: /run/systemd/resolve/resolv.conf
          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
@@ -182,6 +196,11 @@ storage:
          echo "Retry applying manifests"
          sleep 5
          done
    - path: /etc/systemd/logind.conf.d/inhibitors.conf
      contents:
        inline: |
          [Login]
          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -226,7 +245,6 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: containerd.service
@@ -28,7 +28,7 @@ systemd:
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -38,6 +38,7 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/podman rm kubelet
         ExecStart=/usr/bin/podman run --name kubelet \
+         --log-driver k8s-file \
          --privileged \
          --pid host \
          --network host \
@@ -61,23 +62,11 @@ systemd:
-         --authentication-token-webhook \
-         --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-         --cgroup-driver=systemd \
-         --cgroups-per-qos=true \
-         --container-runtime=remote \
+         --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-         --enforce-node-allocatable=pods \
-         --client-ca-file=/etc/kubernetes/ca.crt \
-         --cluster_dns=${cluster_dns_service_ip} \
-         --cluster_domain=${cluster_domain_suffix} \
-         --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --node-labels=node.kubernetes.io/node \
-         --pod-manifest-path=/etc/kubernetes/manifests \
-         --read-only-port=0 \
-         --resolv-conf=/run/systemd/resolve/resolv.conf \
-         --rotate-certificates \
-         --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+         --node-labels=node.kubernetes.io/node
         ExecStop=-/usr/bin/podman stop kubelet
         Delegate=yes
         Restart=always
@@ -93,23 +82,42 @@ systemd:
         PathExists=/etc/kubernetes/kubeconfig
         [Install]
         WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
   files:
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
@@ -133,7 +141,6 @@ storage:
           DefaultCPUAccounting=yes
           DefaultMemoryAccounting=yes
           DefaultBlockIOAccounting=yes
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
@@ -41,11 +41,11 @@ resource "digitalocean_droplet" "controllers" {
   size = var.controller_type

   # network
-  vpc_uuid = digitalocean_vpc.network.id
+  vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data = data.ct_config.controllers.*.rendered[count.index]
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -62,39 +62,20 @@ resource "digitalocean_tag" "controllers" {
   name = "${var.cluster_name}-controller"
 }

-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Fedora CoreOS configs
-data "template_file" "controller-configs" {
+# Fedora CoreOS controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/fcc/controller.yaml")
-
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }

-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}
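With hypothetical inputs (`controller_count = 2`, `cluster_name = "nemo"`, `dns_zone = "do.example.com"`), the `for` expression above renders the same comma-joined string that the removed `data "template_file" "etcds"` chain produced. A minimal sketch:

```tf
locals {
  # Hypothetical example values, for illustration only
  cluster_name     = "nemo"
  dns_zone         = "do.example.com"
  controller_count = 2

  # Same shape as the expression in the hunk above
  etcd_initial_cluster = join(",", [
    for i in range(local.controller_count) : "etcd${i}=https://${local.cluster_name}-etcd${i}.${local.dns_zone}:2380"
  ])
  # => "etcd0=https://nemo-etcd0.do.example.com:2380,etcd1=https://nemo-etcd1.do.example.com:2380"
}
```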
@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
-    null = ">= 2.1"
-
+    null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"
     }

     digitalocean = {
       source  = "digitalocean/digitalocean"
       version = ">= 2.12, < 3.0"
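Dropping the `template` provider works because Terraform's built-in `templatefile()` function replaces the archived `hashicorp/template` provider's `template_file` data source. A minimal before/after sketch, with a hypothetical template file and variable:

```tf
# Before: requires the archived hashicorp/template provider
data "template_file" "example" {
  template = file("${path.module}/example.yaml.tmpl")
  vars = {
    name = "demo"
  }
}

# After: built-in function, no extra provider required
locals {
  example = templatefile("${path.module}/example.yaml.tmpl", {
    name = "demo"
  })
}
```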
@@ -37,11 +37,11 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
-  vpc_uuid = digitalocean_vpc.network.id
+  vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = data.ct_config.worker.rendered
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -58,20 +58,12 @@ resource "digitalocean_tag" "workers" {
   name = "${var.cluster_name}-worker"
 }

-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content = data.template_file.worker-config.rendered
+# Fedora CoreOS worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
+    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
+  })
   strict   = true
   snippets = var.worker_snippets
 }

-# Worker Fedora CoreOS config
-data "template_file" "worker-config" {
-  template = file("${path.module}/fcc/worker.yaml")
-
-  vars = {
-    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-    cluster_domain_suffix  = var.cluster_domain_suffix
-  }
-}
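The same `ct_config` pattern is what merges the `worker_snippets` variable above into the rendered Ignition. A sketch with a hypothetical snippet path and hypothetical network values:

```tf
data "ct_config" "worker-example" {
  # Render the module's Butane template with hypothetical values
  content = templatefile("${path.module}/butane/worker.yaml", {
    cluster_dns_service_ip = cidrhost("10.3.0.0/16", 10) # => 10.3.0.10
    cluster_domain_suffix  = "cluster.local"
  })
  strict = true
  # Operator-provided Butane customizations, merged at render time
  snippets = [file("${path.module}/snippets/custom-worker.yaml")]
}
```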
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.27.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=35848a50c6be694bc2084bc2696ffb78792c0be3"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -1,4 +1,5 @@
+---
 variant: flatcar
 version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
         Requires=docker.service
         After=docker.service
         [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.4
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.9
         ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -65,7 +66,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -91,26 +92,13 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
-         --anonymous-auth=false \
-         --authentication-token-webhook \
-         --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-         --cgroup-driver=systemd \
-         --container-runtime=remote \
+         --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-         --client-ca-file=/etc/kubernetes/ca.crt \
-         --cluster_dns=${cluster_dns_service_ip} \
-         --cluster_domain=${cluster_domain_suffix} \
-         --healthz-port=0 \
          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --node-labels=node.kubernetes.io/controller="true" \
-         --pod-manifest-path=/etc/kubernetes/manifests \
-         --read-only-port=0 \
-         --resolv-conf=/run/systemd/resolve/resolv.conf \
-         --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-         --rotate-certificates \
-         --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+         --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
         ExecStart=docker logs -f kubelet
         ExecStop=docker stop kubelet
         ExecStopPost=docker rm kubelet
@@ -129,7 +117,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -142,15 +130,39 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
    - path: /opt/bootstrap/layout
-      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -173,7 +185,6 @@ storage:
           mv manifests-networking/* /opt/bootstrap/assets/manifests/
           rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
-      filesystem: root
      mode: 0544
      contents:
        inline: |
@@ -187,14 +198,17 @@ storage:
           echo "Retry applying manifests"
           sleep 5
         done
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/etcd/etcd.env
-      filesystem: root
      mode: 0644
      contents:
        inline: |
@@ -1,4 +1,5 @@
+---
 variant: flatcar
 version: 1.0.0
 systemd:
   units:
     - name: docker.service
@@ -37,7 +38,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.27.4
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -66,25 +67,12 @@ systemd:
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
          $${KUBELET_IMAGE} \
-         --anonymous-auth=false \
-         --authentication-token-webhook \
-         --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-         --cgroup-driver=systemd \
-         --container-runtime=remote \
+         --config=/etc/kubernetes/kubelet.yaml \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-         --client-ca-file=/etc/kubernetes/ca.crt \
-         --cluster_dns=${cluster_dns_service_ip} \
-         --cluster_domain=${cluster_domain_suffix} \
-         --healthz-port=0 \
          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --node-labels=node.kubernetes.io/node \
-         --pod-manifest-path=/etc/kubernetes/manifests \
-         --read-only-port=0 \
-         --resolv-conf=/run/systemd/resolve/resolv.conf \
-         --rotate-certificates \
-         --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+         --node-labels=node.kubernetes.io/node
         ExecStart=docker logs -f kubelet
         ExecStop=docker stop kubelet
         ExecStopPost=docker rm kubelet
@@ -92,27 +80,44 @@ systemd:
         RestartSec=5
         [Install]
         WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
    - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
      mode: 0644
      contents:
        inline: |
@@ -46,11 +46,11 @@ resource "digitalocean_droplet" "controllers" {
   size = var.controller_type

   # network
-  vpc_uuid = digitalocean_vpc.network.id
+  vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data = data.ct_config.controllers.*.rendered[count.index]
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -67,39 +67,20 @@ resource "digitalocean_tag" "controllers" {
   name = "${var.cluster_name}-controller"
 }

-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Container Linux configs
-data "template_file" "controller-configs" {
+# Flatcar Linux controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/cl/controller.yaml")
-
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }

-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}
@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
-    null = ">= 2.1"
-
+    null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }

     digitalocean = {
       source  = "digitalocean/digitalocean"
       version = ">= 2.12, < 3.0"
@@ -35,11 +35,11 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
-  vpc_uuid = digitalocean_vpc.network.id
+  vpc_uuid = digitalocean_vpc.network.id
   # only official DigitalOcean images support IPv6
   ipv6 = local.is_official_image

-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = data.ct_config.worker.rendered
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -56,20 +56,12 @@ resource "digitalocean_tag" "workers" {
   name = "${var.cluster_name}-worker"
 }

-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content = data.template_file.worker-config.rendered
+# Flatcar Linux worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
+    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
+  })
   strict   = true
   snippets = var.worker_snippets
 }

-# Worker Container Linux config
-data "template_file" "worker-config" {
-  template = file("${path.module}/cl/worker.yaml")
-
-  vars = {
-    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-    cluster_domain_suffix  = var.cluster_domain_suffix
-  }
-}
docs/CNAME (new file)
@@ -0,0 +1 @@
+typhoon.psdn.io
@@ -6,7 +6,7 @@ Declare a Zincati `fleet_lock` strategy when provisioning Fedora CoreOS nodes vi

 ```yaml
 variant: fcos
-version: 1.1.0
+version: 1.5.0
 storage:
   files:
     - path: /etc/zincati/config.d/55-update-strategy.toml
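A snippet like the one above is typically passed through a cluster module's snippet variables. A sketch, with a hypothetical module name and file layout and the other required cluster arguments elided:

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.27.4"

  # ... required cluster arguments (cluster_name, region, dns_zone, ...) ...

  # Butane snippets are plain strings rendered into controller Ignition
  controller_snippets = [
    file("${path.module}/snippets/update-strategy.yaml"),
  ]
}
```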
@@ -1,19 +1,21 @@
 # ARM64

-Typhoon has experimental support for ARM64 on AWS, with Fedora CoreOS or Flatcar Linux. Clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.
+Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.

-!!! note
-    Currently, CNI networking must be set to `flannel` or `cilium`.
+Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
+
+* AWS with Fedora CoreOS or Flatcar Linux
+* Azure with Flatcar Linux

 ## Cluster

-Create a cluster with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` container images.
+Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

 === "Fedora CoreOS Cluster (arm64)"

     ```tf
     module "gravitas" {
-      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.27.4"

       # AWS
       cluster_name = "gravitas"
@@ -38,7 +40,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus

     ```tf
     module "gravitas" {
-      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.27.4"

       # AWS
       cluster_name = "gravitas"
@@ -64,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe

 ```
 $ kubectl get nodes -o wide
 NAME           STATUS  ROLES   AGE  VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE                        KERNEL-VERSION           CONTAINER-RUNTIME
-ip-10-0-21-119  Ready  <none>  77s  v1.24.3  10.0.21.119  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
-ip-10-0-32-166  Ready  <none>  80s  v1.24.3  10.0.32.166  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
-ip-10-0-5-79    Ready  <none>  77s  v1.24.3  10.0.5.79    <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
+ip-10-0-21-119  Ready  <none>  77s  v1.27.4  10.0.21.119  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
+ip-10-0-32-166  Ready  <none>  80s  v1.27.4  10.0.32.166  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
+ip-10-0-5-79    Ready  <none>  77s  v1.27.4  10.0.5.79    <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.aarch64  containerd://1.5.8
 ```

 ## Hybrid
@@ -77,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

     ```tf
     module "gravitas" {
-      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.27.4"

       # AWS
       cluster_name = "gravitas"
@@ -100,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

     ```tf
     module "gravitas" {
-      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.27.4"

       # AWS
       cluster_name = "gravitas"
@@ -123,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

     ```tf
     module "gravitas-arm64" {
-      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.27.4"

       # AWS
       vpc_id = module.gravitas.vpc_id
@@ -147,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

     ```tf
     module "gravitas-arm64" {
-      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.3"
+      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.27.4"

       # AWS
       vpc_id = module.gravitas.vpc_id
@@ -172,9 +174,34 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.

 ```
 $ kubectl get nodes -o wide
 NAME           STATUS  ROLES   AGE   VERSION  INTERNAL-IP  EXTERNAL-IP  OS-IMAGE                        KERNEL-VERSION           CONTAINER-RUNTIME
-ip-10-0-1-73     Ready  <none>  111m  v1.24.3  10.0.1.73   <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
-ip-10-0-22-79... Ready  <none>  111m  v1.24.3  10.0.22.79  <none>  Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)  5.10.84-flatcar  containerd://1.5.8
-ip-10-0-24-130   Ready  <none>  111m  v1.24.3  10.0.24.130  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
-ip-10-0-39-19    Ready  <none>  111m  v1.24.3  10.0.39.19   <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
+ip-10-0-1-73     Ready  <none>  111m  v1.27.4  10.0.1.73   <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
+ip-10-0-22-79... Ready  <none>  111m  v1.27.4  10.0.22.79  <none>  Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)  5.10.84-flatcar  containerd://1.5.8
+ip-10-0-24-130   Ready  <none>  111m  v1.27.4  10.0.24.130  <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
+ip-10-0-39-19    Ready  <none>  111m  v1.27.4  10.0.39.19   <none>  Fedora CoreOS 35.20211215.3.0  5.15.7-200.fc35.x86_64  containerd://1.5.8
 ```

+## Azure
+
+Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
+
+```tf
+module "ramius" {
+  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.27.4"
+
+  # Azure
+  cluster_name   = "ramius"
+  region         = "centralus"
+  dns_zone       = "azure.example.com"
+  dns_zone_group = "example-group"
+
+  # configuration
+  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
+
+  # optional
+  arch            = "arm64"
+  controller_type = "Standard_D2pls_v5"
+  worker_type     = "Standard_D2pls_v5"
+  worker_count    = 2
+  host_cidr       = "10.0.0.0/20"
+}
+```
Some files were not shown because too many files have changed in this diff.