Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 19:01:34 +02:00)

Compare commits — 150 commits
SHA1:
a205922d06 b5ba65d4c2 e696fd2b22 3ff9b792ca c4f1d2d1c8 a1d7b5cd1e e7591030e0 f2bf5ac3fb 9cd1c5b17a d6f739dedb
6bb7a36cf2 0afe9d65ed 11e540000f d6cbcf9f96 ce52a2cd35 bd9a908125 0dc8740c77 a9b12b6bca d419c58ab1 da76d32aba
f0e5982b3c a8990b3045 f597f7cda3 b4857c123e 50bffaae8f a193762eed adf33df99b 29a005b7b4 ccebc2313d 1f86592d13
6a521257d0 26dbc7e91d de668e696a d3b2217444 937acc4b5a b0a6dc8115 420ff6ff04 9b733d79c7 35a9e22b1f 0f38a6d405
a535581ef2 08d13e7215 3ff2d38fa5 d6d8eb8d79 f04e1d25a8 b68f8bb2a9 651151805d 8d2c8b8db6 675ac63159 b4c8b1729c
e82241169a ffe4929ff6 88b3925318 29876dc85a 7e29e35457 3ee462a24c f833b7205d 558e293f78 90782ea820 8dc7cc614c
74d4d56dbd 5abe84b520 951209d113 09751cc0e8 c14300f0be 37de9ca2ae 1786e34f33 5f612c82e2 e60a321185 5ad74883fe
4ad473cd3c 393a38deff 76d92e9c2d 275fc0f9e8 3fb59a3289 a31dbceac6 1dcf56127b bf06412dfd 505818b7d5 0d27811265
c13d060b38 e87d5aabc3 760b4cd5ee fcd8ff2b17 ef2d2af0c7 8e2027ed2d 52427a4271 20b76d6e00 6facfca4ed ed8c6a5aeb
003af72cc8 b321b90a4f e5d0e2d48b 679f8b878f 87a8278c9d 93b7f2554e 62d47ad3f0 6eb7861f96 ffbacbccf7 16c2785878
4a469513dd 47d8431fe0 256b87812e ca6eef365f c6794f1007 de6f27e119 6a9c32d3a9 a7e9e423f5 83236eab57 7f445b0dba
f42b45451b 767a653baa 0db5f86110 4908fdd247 42bf82b325 61cbfc044d 07df0c2552 45d6ff2e38 8398182956 6d6b48b201
2a8915fee9 337b1eef3a fe28bd0783 5e2f9a5c44 31c7f0ba0e b8549a1e32 8e8bf305c3 a447494ccd c5573199db 0be171cde7
e3b1e6c52e b0e0b132e4 4fba09e8f8 02f78fbd1a a122867748 91b38bf3fd 9a4887d028 35bca6df90 d7f55c4e46 80c6e2e7e6
fddd8ac69d 2f7d2a92e0 6cd6bb38de d91408258b 2df1873b7f 93ebfc7dd0 5365ce8204 2ad33cebaf a26abcf5b1 b8c4629548
.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
+site/
CHANGES.md (270 changed lines)

@@ -4,6 +4,276 @@ Notable changes between versions.

## Latest

## v1.26.1

* Kubernetes [v1.26.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261)
* Update etcd from v3.5.6 to [v3.5.7](https://github.com/etcd-io/etcd/releases/tag/v3.5.7)
* Update Cilium from v1.12.4 to [v1.12.5](https://github.com/cilium/cilium/releases/tag/v1.12.5)
* Update Calico from v3.24.5 to [v3.25.0](https://github.com/projectcalico/calico/releases/tag/v3.25.0)
* Update CoreDNS from v1.9.3 to [v1.9.4](https://github.com/poseidon/terraform-render-bootstrap/pull/341)

## v1.26.0

* Kubernetes [v1.26.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260)
* Update etcd from v3.5.5 to [v3.5.6](https://github.com/etcd-io/etcd/releases/tag/v3.5.6)
* Update Cilium from v1.12.3 to [v1.12.4](https://github.com/cilium/cilium/releases/tag/v1.12.4)
* Update flannel from v0.15.1 to [v0.20.2](https://github.com/flannel-io/flannel/releases/tag/v0.20.2)
* Reminder: Modules are no longer published to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) ([#1282](https://github.com/poseidon/typhoon/pull/1282))
  * See [#1282](https://github.com/poseidon/typhoon/pull/1282) and [v1.25.4](https://github.com/poseidon/typhoon/releases/tag/v1.25.4) for details

### AWS

* Migrate AWS launch configurations to launch templates ([#1275](https://github.com/poseidon/typhoon/pull/1275)) (see the sketch below)
  * Starting Dec 31, 2022, AWS won't add new instance types/families to launch configurations
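
A minimal sketch of the launch template pattern (resource names, sizes, and ids here are illustrative; the actual Typhoon change appears in the worker Terraform diff later in this compare):

```tf
# Illustrative only: an autoscaling group that tracks a launch template
# version, so template changes can roll out to instances.
resource "aws_launch_template" "worker" {
  name_prefix   = "example-worker"
  image_id      = "ami-0123456789abcdef0" # assumed AMI id
  instance_type = "t3.small"
}

resource "aws_autoscaling_group" "workers" {
  name                = "example-worker"
  desired_capacity    = 2
  min_size            = 1
  max_size            = 4
  vpc_zone_identifier = ["subnet-0123456789abcdef0"] # assumed subnet

  launch_template {
    id      = aws_launch_template.worker.id
    version = aws_launch_template.worker.latest_version
  }
}
```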

### Addons

* Update ingress-nginx from v1.3.1 to [v1.5.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.5.1)
* Update Prometheus from v2.40.1 to [v2.40.5](https://github.com/prometheus/prometheus/releases/tag/v2.40.5)
* Update node-exporter from v1.3.1 to [v1.5.0](https://github.com/prometheus/node_exporter/releases/tag/v1.5.0)
* Update kube-state-metrics from v2.6.0 to [v2.7.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.7.0)
* Update Grafana from v9.2.4 to [v9.3.1](https://github.com/grafana/grafana/releases/tag/v9.3.1)

## v1.25.4

* Kubernetes [v1.25.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254)
* Update Calico from v3.24.1 to [v3.24.5](https://github.com/projectcalico/calico/releases/tag/v3.24.5)
* Allow the Kubelet kubeconfig to drain nodes, if desired ([#330](https://github.com/poseidon/terraform-render-bootstrap/pull/330))
* Re-enable Kubelet Graceful Node Shutdown ([#1261](https://github.com/poseidon/typhoon/pull/1261))
* Introduce companion project [poseidon/scuttle](https://github.com/poseidon/scuttle)
* Link to new Mastodon accounts for release announcements
  * [@typhoon@fosstodon.org](https://fosstodon.org/@typhoon)
  * [@poseidon@fosstodon.org](https://fosstodon.org/@poseidon)
* Deprecate publishing to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) (see the example below)
  * Typhoon docs have always shown Git-based module sources, not the Terraform Module Registry
  * Module usage should be `source = "git::https://github.com/poseidon/typhoon/...` not `source = poseidon/kubernetes/...`
  * Terraform's Module Registry requires subtree mirroring Typhoon to special terraform-platform-kubernetes repos, only supports release versions (no commit SHAs or forks), and, for historical reasons, only ever contained Flatcar Linux modules (not Fedora CoreOS)
  * Note: this does not affect Terraform providers like `poseidon/matchbox` or `poseidon/ct`; the registry works well for providers
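
For example (a sketch; the registry-style path and the ref shown are illustrative, not published coordinates):

```tf
# Registry-style source (deprecated for Typhoon modules)
# module "yavin" {
#   source  = "poseidon/kubernetes/google-cloud//fedora-coreos" # illustrative
#   version = "1.25.4"
# }

# Git-based source (what the docs have always shown); pin a tag or commit SHA
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.4"
  # ...
}
```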

### Fedora CoreOS

* Remove unused `Wants=network.target` from `etcd-member.service` ([#1254](https://github.com/poseidon/typhoon/pull/1254))

### Cloud

* Remove defunct `delete-node.service` from worker node configurations ([#1256](https://github.com/poseidon/typhoon/pull/1256))

### Addons

* Update Prometheus from v2.39.1 to [v2.40.1](https://github.com/prometheus/prometheus/releases/tag/v2.40.1)
* Update Grafana from v9.1.7 to [v9.2.4](https://github.com/grafana/grafana/releases/tag/v9.2.4)

## v1.25.3

* Kubernetes [v1.25.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253)
* Switch the Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` for addons ([#1246](https://github.com/poseidon/typhoon/pull/1246))
* Update Cilium from v1.12.2 to [v1.12.3](https://github.com/cilium/cilium/releases/tag/v1.12.3) ([#1253](https://github.com/poseidon/typhoon/pull/1253))

### Azure

* Change the default Azure `worker_type` from [`Standard_DS1_v2`](https://learn.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series#dsv2-series) to [`Standard_D2as_v5`](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series#dasv5-series) ([#1248](https://github.com/poseidon/typhoon/pull/1248)) (see the override example below)
  * Get 2 VCPU, 7 GiB, 12500 Mbps (vs 1 VCPU, 3.5 GiB, 750 Mbps)
  * Small increase in pay-as-you-go price ($53.29 -> $62.78)
  * Small increase in spot price ($5.64/mo -> $7.37/mo)
  * Change from Intel to AMD EPYC (`D2as_v5` is cheaper than `D2s_v5`)
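
To keep the old size (and price), set `worker_type` explicitly (a sketch; the module name and ref are illustrative):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.25.3"
  # ...

  # opt out of the new Standard_D2as_v5 default
  worker_type = "Standard_DS1_v2"
}
```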

### Flatcar Linux

* Add Flatcar Linux ARM64 support on Azure ([docs](https://typhoon.psdn.io/advanced/arm64/), [#1251](https://github.com/poseidon/typhoon/pull/1251))
* Switch from Azure Hypervisor gen1 to gen2 (**action required**) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
  * Run `az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2`

### Docs

* Remove the old docs note about not supporting ARM64 with Calico
  * Typhoon supports ARM64 with `cilium`, `calico`, and `flannel`

### Addons

* Update Prometheus from v2.38.0 to [v2.39.1](https://github.com/prometheus/prometheus/releases/tag/v2.39.1)
* Update Grafana from v9.1.6 to [v9.1.7](https://github.com/grafana/grafana/releases/tag/v9.1.7)

## v1.25.2

Kubernetes v1.25.2 was skipped since there were minimal changes upstream.

## v1.25.1

* Kubernetes [v1.25.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1251)
* Update etcd from v3.5.4 to [v3.5.5](https://github.com/etcd-io/etcd/releases/tag/v3.5.5)
* Update Cilium from v1.12.1 to [v1.12.2](https://github.com/cilium/cilium/releases/tag/v1.12.2)
* Update Calico from v3.23.3 to [v3.24.1](https://github.com/projectcalico/calico/releases/tag/v3.24.1)
* Revert Kubelet Graceful Node Shutdown on worker nodes ([#1227](https://github.com/poseidon/typhoon/pull/1227))
  * Fix an issue where non-critical pods were left in Error/Completed state on node shutdown
* Remove the feature flag disable workaround for [kubernetes/kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081)
  * Kubernetes [reverted](https://github.com/kubernetes/kubernetes/pull/112078) `LocalStorageCapacityIsolationFSQuotaMonitoring` back to alpha
* Remove the workaround for preventing `search .` propagation in [kubernetes/kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135)
  * Fixed upstream in Kubernetes ([fix](https://github.com/kubernetes/kubernetes/pull/112157))

### Addons

* Update kube-state-metrics from v2.5.0 to [v2.6.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.6.0)
* Update ingress-nginx from v1.3.0 to [v1.3.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.3.1)
* Update Grafana from v9.1.0 to [v9.1.6](https://github.com/grafana/grafana/releases/tag/v9.1.6)

## v1.25.0

* Kubernetes [v1.25.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1250)
* Disable the LocalStorageCapacityIsolationFSQuotaMonitoring feature gate ([#1220](https://github.com/poseidon/typhoon/pull/1220), fixes [kubernetes#112081](https://github.com/kubernetes/kubernetes/issues/112081))
* Add a workaround to revert adding "search ." to containers' `/etc/resolv.conf` ([#1224](https://github.com/poseidon/typhoon/pull/1224), fixes [kubernetes#112135](https://github.com/kubernetes/kubernetes/issues/112135))
* Migrate most Kubelet flags to a KubeletConfiguration file ([#1219](https://github.com/poseidon/typhoon/pull/1219))
* Configure Kubelet Graceful Node Shutdown ([#1222](https://github.com/poseidon/typhoon/pull/1222))
  * Allow up to 30s for critical pods to gracefully shut down on node shutdown
  * Allow up to 15s for regular pods to gracefully shut down on node shutdown
  * Mark the node NotReady promptly on node shutdown
  * Lengthen the systemd inhibitor lock max delay from 5s to 45s

### Fedora CoreOS

* Change the Podman `log-driver` from `journald` to `k8s-file` ([#1221](https://github.com/poseidon/typhoon/pull/1221))
  * Fix `etcd-member` and Kubelet systemd service log lines appearing twice in journal logs

## v1.24.4

* Kubernetes [v1.24.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244)
* Update CoreDNS from v1.8.6 to [v1.9.3](https://github.com/poseidon/terraform-render-bootstrap/pull/318)
* Update Cilium from v1.11.7 to [v1.12.1](https://github.com/cilium/cilium/releases/tag/v1.12.1)
* Update Calico from v3.23.1 to [v3.23.3](https://github.com/projectcalico/calico/releases/tag/v3.23.3)
* Switch the Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` ([#1206](https://github.com/poseidon/typhoon/pull/1206))
* Remove use of the deprecated Terraform [template](https://registry.terraform.io/providers/hashicorp/template) provider ([#1194](https://github.com/poseidon/typhoon/pull/1194))

### Fedora CoreOS

* Remove the ineffective `/etc/fedora-coreos/iptables-legacy.stamp` ([#1201](https://github.com/poseidon/typhoon/pull/1201))
  * Typhoon already uses iptables v1.8.7 (nf_tables) since Fedora CoreOS 36
  * Staying on legacy iptables would have required a file in `/etc/coreos` instead

### Flatcar Linux

* Migrate Flatcar Linux from Ignition spec v2.3.0 to v3.3.0 ([#1196](https://github.com/poseidon/typhoon/pull/1196)) (**action required**)
  * Flatcar Linux 3185.0.0+ [supports](https://flatcar-linux.org/docs/latest/provisioning/ignition/specification/#ignition-v3) Ignition v3.x specs (which are rendered from Butane Configs, like Fedora CoreOS)
  * `poseidon/ct` v0.11.0 [supports](https://github.com/poseidon/terraform-provider-ct/pull/131) the `flatcar` Butane Config variant
  * Require poseidon/ct v0.11+ and Flatcar Linux 3185.0.0+
  * Please modify any Flatcar Linux snippets to use the [Butane Config](https://coreos.github.io/butane/config-flatcar-v1_0/) format (**action required**); a module-usage sketch follows the sample below

```yaml
variant: flatcar
version: 1.0.0
...
```
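
Snippets like the sample above are passed to the module as strings; a sketch (the snippet file name is illustrative):

```tf
module "cluster" {
  # ...
  controller_snippets = [
    # a Butane Config with variant: flatcar, version: 1.0.0
    file("${path.module}/snippets/controller.yaml"),
  ]
}
```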

### AWS

* [Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) instances in the autoscaling group when the launch configuration changes ([#1208](https://github.com/poseidon/typhoon/pull/1208)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**; see the sketch below)
  * Worker launch configuration changes start an autoscaling group instance refresh to replace instances
  * An instance refresh creates surge instances, waits for a warm-up period, then deletes old instances
  * Changing `worker_type`, `disk_*`, `worker_price`, `worker_target_groups`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New AMIs or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new launch configurations were made in the same way, but not applied to instances unless manually replaced
* Rename the worker autoscaling group `${cluster_name}-worker` ([#1202](https://github.com/poseidon/typhoon/pull/1202))
  * Rename the launch configuration `${cluster_name}-worker` instead of a random id
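
The refresh is driven by an `instance_refresh` block on the autoscaling group, mirroring the worker Terraform diff later in this compare:

```tf
resource "aws_autoscaling_group" "workers" {
  # ...
  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 120 # seconds before a new instance counts as warmed up
      min_healthy_percentage = 90  # replace gradually, keeping 90% in service
    }
  }
}
```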

### Google Cloud

* [Roll](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) instance template changes to worker managed instance groups ([#1207](https://github.com/poseidon/typhoon/pull/1207)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**)
  * Worker instance template changes roll out by gradually replacing instances
  * Automatic rollouts create surge instances, wait for health checks, then delete old instances (0 unavailable instances)
  * Changing `worker_type`, `disk_size`, `worker_preemptible`, or Butane `worker_snippets` on existing worker nodes will replace instances
  * New compute images or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
  * Previously, new instance templates were made in the same way, but not applied to instances unless manually replaced
* Add health checks to worker managed instance groups (i.e. "autohealing") ([#1207](https://github.com/poseidon/typhoon/pull/1207)) (see the sketch below)
  * Use health checks to probe kube-proxy every 30s
  * Replace worker nodes that fail the health check 6 times (3 min)
* Name the `kube-apiserver` and `worker` health checks consistently ([#1207](https://github.com/poseidon/typhoon/pull/1207))
  * Use the names `${cluster_name}-apiserver-health` and `${cluster_name}-worker-health`
* Rename the managed instance group from `${cluster_name}-worker-group` to `${cluster_name}-worker` ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Fix a bug provisioning clusters with multiple controller nodes ([#1195](https://github.com/poseidon/typhoon/pull/1195))
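
A sketch of the health-check piece (the name follows the convention above; the kube-proxy healthz port and path are assumptions here):

```tf
resource "google_compute_health_check" "worker" {
  name                = "example-worker-health"
  check_interval_sec  = 30 # probe kube-proxy every 30s
  timeout_sec         = 5
  healthy_threshold   = 1
  unhealthy_threshold = 6 # replace after 6 failures (~3 min)

  http_health_check {
    port         = 10256      # kube-proxy healthz (assumed)
    request_path = "/healthz" # assumed path
  }
}
```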

### Addons

* Update Prometheus from v2.37.0 to [v2.38.0](https://github.com/prometheus/prometheus/releases/tag/v2.38.0)
* Update Grafana from v9.0.3 to [v9.1.0](https://github.com/grafana/grafana/releases/tag/v9.1.0)

## v1.24.3

* Kubernetes [v1.24.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243)
* Update Cilium from v1.11.6 to [v1.11.7](https://github.com/cilium/cilium/releases/tag/v1.11.7)

### Addons

* Update ingress-nginx from v1.2.1 to [v1.3.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.3.0)
* Update Prometheus from v2.36.1 to [v2.37.0](https://github.com/prometheus/prometheus/releases/tag/v2.37.0)
* Update Grafana from v8.5.6 to [v9.0.3](https://github.com/grafana/grafana/releases/tag/v9.0.3)

### Notes

* Poseidon repos will soon change their default branch from `master` to `main`

## v1.24.2

* Kubernetes [v1.24.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1242)
* Update Cilium from v1.11.5 to [v1.11.6](https://github.com/cilium/cilium/releases/tag/v1.11.6)
* Update Calico from v3.22.2 to [v3.23.1](https://github.com/projectcalico/calico/releases/tag/v3.23.1)

### Addons

* Update Prometheus from v2.36.0 to [v2.36.1](https://github.com/prometheus/prometheus/releases/tag/v2.36.1)
* Update Grafana from v8.5.3 to [v8.5.6](https://github.com/grafana/grafana/releases/tag/v8.5.6)
* Update kube-state-metrics from v2.4.2 to [v2.5.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.5.0)

## Known Issues

* Skip AWS Terraform provider v4.17.0 to v4.19.0, which had a regression affecting workers joining ([#1173](https://github.com/poseidon/typhoon/issues/1173)); one way to pin around it is sketched below
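
A provider constraint that excludes the affected releases (a sketch; the exact bounds are up to you):

```tf
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # skip v4.17.0 - v4.19.0 (regression in #1173)
      version = ">= 2.23, != 4.17.0, != 4.18.0, != 4.19.0, <= 5.0"
    }
  }
}
```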

## v1.24.1

* Kubernetes [v1.24.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1241)
* Update Cilium from v1.11.4 to [v1.11.5](https://github.com/cilium/cilium/releases/tag/v1.11.5)

### Addons

* Update Prometheus from v2.35.0 to [v2.36.0](https://github.com/prometheus/prometheus/releases/tag/v2.36.0)
* Update Grafana from v8.5.1 to [v8.5.3](https://github.com/grafana/grafana/releases/tag/v8.5.3)
* Update nginx-ingress from v1.2.0 to [v1.2.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.1)

## v1.24.0

* Kubernetes [v1.24.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1240)
* Update etcd from v3.5.2 to [v3.5.4](https://github.com/etcd-io/etcd/releases/tag/v3.5.4)
* Add Kubelet mounts to enable relabeling workload volumes ([#1152](https://github.com/poseidon/typhoon/pull/1152))
  * StorageClasses no longer require explicit SELinux mount contexts

### Addons

* Update nginx-ingress from v1.1.3 to [v1.2.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.0)
* Update Prometheus from v2.34.0 to [v2.35.0](https://github.com/prometheus/prometheus/releases/tag/v2.35.0)
* Update Grafana from v8.4.5 to [v8.5.1](https://github.com/grafana/grafana/releases/tag/v8.5.1)

## v1.23.6

* Kubernetes [v1.23.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1236)
* Update Cilium from v1.11.2 to [v1.11.4](https://github.com/cilium/cilium/releases/tag/v1.11.4)
  * Rename the Cilium DaemonSet from `cilium-agent` to `cilium` to match Cilium CLI tools ([#303](https://github.com/poseidon/terraform-render-bootstrap/pull/303))
* Update Calico from v3.22.1 to [v3.22.2](https://github.com/projectcalico/calico/releases/tag/v3.22.2)
* Mount `/etc/machine-id` from the host into the Kubelet ([#1143](https://github.com/poseidon/typhoon/pull/1143))
* Remove deprecated use of `key_algorithm` in `hashicorp/tls` resources

### Azure

* Allow upgrading the Azure Terraform provider to v3.x ([#1144](https://github.com/poseidon/typhoon/pull/1144))
  * Rename the `worker_address_prefix` output to `worker_address_prefixes` (see the example below)
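
Any references to the old output need the plural name, which returns a list (the module name here is illustrative):

```tf
output "worker_prefixes" {
  # was: module.ramius.worker_address_prefix (singular)
  value = module.ramius.worker_address_prefixes
}
```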

### Google Cloud

* Fix an issue on Flatcar Linux with controller nodes not ignoring OS image changes ([#1149](https://github.com/poseidon/typhoon/pull/1149))
  * Nodes auto-update; Terraform should not attempt to delete/recreate them

### Addons

* Update nginx-ingress from v1.1.2 to [v1.1.3](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.1.3)
* Update Prometheus from v2.33.5 to [v2.34.0](https://github.com/prometheus/prometheus/releases/tag/v2.34.0)
* Update Grafana from v8.4.4 to [v8.4.5](https://github.com/grafana/grafana/releases/tag/v8.4.5)

## v1.23.5

* Kubernetes [v1.23.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1235)

README.md (22 changed lines)

@@ -1,4 +1,6 @@
-# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
+# Typhoon [](https://github.com/poseidon/typhoon/releases) [](https://github.com/poseidon/typhoon/stargazers) [](https://github.com/sponsors/poseidon) [](https://fosstodon.org/@typhoon)
+
+<img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
 
 Typhoon is a minimal and free Kubernetes distribution.
 
@@ -11,7 +13,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.26.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -48,6 +50,7 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
 | Platform      | Operating System      | Terraform Module | Status |
 |---------------|-----------------------|------------------|--------|
 | AWS           | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
+| Azure         | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](azure/flatcar-linux/kubernetes) | alpha |
 
 ## Documentation
 
@@ -62,7 +65,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -101,9 +104,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.5
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.5
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.5
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.26.1
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.26.1
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.26.1
 ```
 
 List the pods.
 
@@ -156,5 +159,12 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i
 <img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" width="201px">
 </a>
 <br>
 <br>
 
+<a href="https://deploy.equinix.com/">
+  <img src="https://storage.googleapis.com/poseidon/equinix.png" width="201px">
+</a>
+<br>
+<br>
 
 If you'd like your company here, please contact dghubble at psdn.io.

@@ -24,7 +24,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:8.4.3
+          image: docker.io/grafana/grafana:9.3.1
           env:
             - name: GF_PATHS_CONFIG
               value: "/etc/grafana/custom.ini"
@@ -32,15 +32,22 @@ spec:
             - name: http
               containerPort: 8080
           livenessProbe:
-            httpGet:
-              path: /metrics
+            tcpSocket:
               port: 8080
-            initialDelaySeconds: 10
+            initialDelaySeconds: 30
+            periodSeconds: 10
+            timeoutSeconds: 1
+            failureThreshold: 5
+            successThreshold: 1
           readinessProbe:
             httpGet:
-              path: /api/health
-              scheme: HTTP
+              path: /robots.txt
               port: 8080
+            initialDelaySeconds: 10
+            periodSeconds: 30
+            successThreshold: 1
+            timeoutSeconds: 5
           resources:
             requests:
               cpu: 100m

@@ -23,7 +23,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -10,6 +10,7 @@ rules:
       - configmaps
       - pods
       - secrets
+      - endpoints
     verbs:
       - get
   - apiGroups:
@@ -37,3 +38,11 @@ rules:
       - endpoints
     verbs:
       - get
+  - apiGroups:
+      - "coordination.k8s.io"
+    resources:
+      - leases
+    verbs:
+      - create
+      - get
+      - update

@@ -23,7 +23,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -10,6 +10,7 @@ rules:
       - configmaps
       - pods
       - secrets
+      - endpoints
     verbs:
       - get
   - apiGroups:
@@ -32,8 +33,11 @@ rules:
     verbs:
       - create
   - apiGroups:
       - ""
+      - "coordination.k8s.io"
     resources:
       - endpoints
+      - leases
     verbs:
       - create
+      - get
+      - update

@@ -23,7 +23,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -10,6 +10,7 @@ rules:
      - configmaps
      - pods
      - secrets
+      - endpoints
    verbs:
      - get
  - apiGroups:
@@ -32,8 +33,10 @@ rules:
    verbs:
      - create
  - apiGroups:
      - ""
+      - "coordination.k8s.io"
    resources:
      - endpoints
+      - leases
    verbs:
      - create
+      - get
+      - update

@@ -23,7 +23,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -10,6 +10,7 @@ rules:
      - configmaps
      - pods
      - secrets
+      - endpoints
    verbs:
      - get
  - apiGroups:
@@ -32,8 +33,10 @@ rules:
    verbs:
      - create
  - apiGroups:
      - ""
+      - "coordination.k8s.io"
    resources:
      - endpoints
+      - leases
    verbs:
      - create
+      - get
+      - update

@@ -23,7 +23,7 @@ spec:
         type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

@@ -10,6 +10,7 @@ rules:
      - configmaps
      - pods
      - secrets
+      - endpoints
    verbs:
      - get
  - apiGroups:
@@ -32,8 +33,10 @@ rules:
    verbs:
      - create
  - apiGroups:
      - ""
+      - "coordination.k8s.io"
    resources:
      - endpoints
+      - leases
    verbs:
      - create
+      - get
+      - update

@@ -21,7 +21,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.33.5
+          image: quay.io/prometheus/prometheus:v2.40.5
           args:
             - --web.listen-address=0.0.0.0:9090
             - --config.file=/etc/prometheus/prometheus.yaml

@@ -25,7 +25,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
         - name: kube-state-metrics
-          image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.2
+          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
           ports:
             - name: metrics
               containerPort: 8080

@@ -22,19 +22,19 @@ spec:
       securityContext:
         runAsNonRoot: true
         runAsUser: 65534
         runAsGroup: 65534
         fsGroup: 65534
         seccompProfile:
           type: RuntimeDefault
       hostNetwork: true
       hostPID: true
       containers:
         - name: node-exporter
-          image: quay.io/prometheus/node-exporter:v1.3.1
+          image: quay.io/prometheus/node-exporter:v1.5.0
           args:
             - --path.procfs=/host/proc
             - --path.sysfs=/host/sys
             - --path.rootfs=/host/root
             - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+)($|/)
             - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
           ports:
             - name: metrics
               containerPort: 9100
@@ -46,6 +46,9 @@ spec:
             limits:
               cpu: 200m
               memory: 100Mi
+          securityContext:
+            seLinuxOptions:
+              type: spc_t
           volumeMounts:
             - name: proc
               mountPath: /host/proc
@@ -55,9 +58,12 @@ spec:
               readOnly: true
             - name: root
               mountPath: /host/root
+              mountPropagation: HostToContainer
               readOnly: true
       tolerations:
-        - key: node-role.kubernetes.io/master
+        - key: node-role.kubernetes.io/controller
           operator: Exists
+        - key: node-role.kubernetes.io/control-plane
+          operator: Exists
         - key: node.kubernetes.io/not-ready
           operator: Exists

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.26.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -9,15 +9,16 @@ systemd:
       [Unit]
       Description=etcd (System Container)
       Documentation=https://github.com/etcd-io/etcd
-      Wants=network-online.target network.target
+      Wants=network-online.target
       After=network-online.target
       [Service]
-      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
+      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
       Type=exec
       ExecStartPre=/bin/mkdir -p /var/lib/etcd
       ExecStartPre=-/usr/bin/podman rm etcd
       ExecStart=/usr/bin/podman run --name etcd \
         --env-file /etc/etcd/etcd.env \
+        --log-driver k8s-file \
         --network host \
         --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
         --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -56,7 +57,7 @@ systemd:
       After=afterburn.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
       EnvironmentFile=/run/metadata/afterburn
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -66,15 +67,19 @@ systemd:
       ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
       ExecStartPre=-/usr/bin/podman rm kubelet
       ExecStart=/usr/bin/podman run --name kubelet \
+        --log-driver k8s-file \
         --privileged \
         --pid host \
         --network host \
         --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
         --volume /etc/kubernetes:/etc/kubernetes:ro,z \
+        --volume /etc/machine-id:/etc/machine-id:ro \
         --volume /usr/lib/os-release:/etc/os-release:ro \
         --volume /lib/modules:/lib/modules:ro \
         --volume /run:/run \
         --volume /sys/fs/cgroup:/sys/fs/cgroup \
+        --volume /etc/selinux:/etc/selinux \
+        --volume /sys/fs/selinux:/sys/fs/selinux \
         --volume /var/lib/calico:/var/lib/calico:ro \
         --volume /var/lib/containerd:/var/lib/containerd \
         --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@@ -82,28 +87,13 @@ systemd:
         --volume /var/run/lock:/var/run/lock:z \
         --volume /opt/cni/bin:/opt/cni/bin:z \
         $${KUBELET_IMAGE} \
-        --anonymous-auth=false \
-        --authentication-token-webhook \
-        --authorization-mode=Webhook \
         --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-        --cgroup-driver=systemd \
-        --cgroups-per-qos=true \
-        --container-runtime=remote \
+        --config=/etc/kubernetes/kubelet.yaml \
         --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-        --enforce-node-allocatable=pods \
-        --client-ca-file=/etc/kubernetes/ca.crt \
-        --cluster_dns=${cluster_dns_service_ip} \
-        --cluster_domain=${cluster_domain_suffix} \
-        --healthz-port=0 \
         --kubeconfig=/var/lib/kubelet/kubeconfig \
         --node-labels=node.kubernetes.io/controller="true" \
-        --pod-manifest-path=/etc/kubernetes/manifests \
         --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
-        --read-only-port=0 \
-        --resolv-conf=/run/systemd/resolve/resolv.conf \
-        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-        --rotate-certificates \
-        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
       ExecStop=-/usr/bin/podman stop kubelet
       Delegate=yes
       Restart=always
@@ -126,7 +116,7 @@ systemd:
         --volume /opt/bootstrap/assets:/assets:ro,Z \
         --volume /opt/bootstrap/apply:/apply:ro,Z \
         --entrypoint=/apply \
-        quay.io/poseidon/kubelet:v1.23.5
+        quay.io/poseidon/kubelet:v1.26.1
       ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
       ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -141,6 +131,33 @@ storage:
       contents:
         inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
     - path: /opt/bootstrap/layout
       mode: 0544
       contents:
@@ -177,6 +194,11 @@ storage:
           echo "Retry applying manifests"
           sleep 5
         done
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
       contents:
         inline: |
@@ -221,7 +243,6 @@ storage:
           ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
           ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
           ETCD_PEER_CLIENT_CERT_AUTH=true
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
      contents:

@@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {
 
   instance_type = var.controller_type
   ami           = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
-  user_data     = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data     = data.ct_config.controllers.*.rendered[count.index]
 
   # storage
   root_block_device {
@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.disk_size
     iops        = var.disk_iops
     encrypted   = true
+    tags        = {}
   }
 
   # network
@@ -46,41 +47,22 @@ resource "aws_instance" "controllers" {
   }
 }
 
-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Fedora CoreOS configs
-data "template_file" "controller-configs" {
+# Fedora CoreOS controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/fcc/controller.yaml")
-
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}

@@ -3,10 +3,8 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws      = ">= 2.23, <= 5.0"
-    template = "~> 2.2"
-    null     = ">= 2.1"
-
+    aws  = ">= 2.23, <= 5.0"
+    null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"

@@ -1,3 +1,7 @@
+locals {
+  ami_id = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
+}
+
 data "aws_ami" "fedora-coreos" {
   most_recent = true
   owners      = ["125523088429"]

@@ -29,7 +29,7 @@ systemd:
       After=afterburn.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
       EnvironmentFile=/run/metadata/afterburn
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -39,15 +39,19 @@ systemd:
       ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
       ExecStartPre=-/usr/bin/podman rm kubelet
       ExecStart=/usr/bin/podman run --name kubelet \
+        --log-driver k8s-file \
         --privileged \
         --pid host \
         --network host \
         --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
         --volume /etc/kubernetes:/etc/kubernetes:ro,z \
+        --volume /etc/machine-id:/etc/machine-id:ro \
         --volume /usr/lib/os-release:/etc/os-release:ro \
         --volume /lib/modules:/lib/modules:ro \
         --volume /run:/run \
         --volume /sys/fs/cgroup:/sys/fs/cgroup \
+        --volume /etc/selinux:/etc/selinux \
+        --volume /sys/fs/selinux:/sys/fs/selinux \
         --volume /var/lib/calico:/var/lib/calico:ro \
         --volume /var/lib/containerd:/var/lib/containerd \
         --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@@ -55,19 +59,9 @@ systemd:
         --volume /var/run/lock:/var/run/lock:z \
         --volume /opt/cni/bin:/opt/cni/bin:z \
         $${KUBELET_IMAGE} \
-        --anonymous-auth=false \
-        --authentication-token-webhook \
-        --authorization-mode=Webhook \
         --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-        --cgroup-driver=systemd \
-        --cgroups-per-qos=true \
-        --container-runtime=remote \
+        --config=/etc/kubernetes/kubelet.yaml \
         --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-        --enforce-node-allocatable=pods \
-        --client-ca-file=/etc/kubernetes/ca.crt \
-        --cluster_dns=${cluster_dns_service_ip} \
-        --cluster_domain=${cluster_domain_suffix} \
-        --healthz-port=0 \
         --kubeconfig=/var/lib/kubelet/kubeconfig \
         --node-labels=node.kubernetes.io/node \
         %{~ for label in split(",", node_labels) ~}
@@ -76,31 +70,13 @@ systemd:
         %{~ for taint in split(",", node_taints) ~}
         --register-with-taints=${taint} \
         %{~ endfor ~}
-        --pod-manifest-path=/etc/kubernetes/manifests \
-        --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
-        --read-only-port=0 \
-        --resolv-conf=/run/systemd/resolve/resolv.conf \
-        --rotate-certificates \
-        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+        --provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID}
       ExecStop=-/usr/bin/podman stop kubelet
       Delegate=yes
       Restart=always
       RestartSec=10
       [Install]
       WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-      [Install]
-      WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
@@ -110,6 +86,38 @@ storage:
       contents:
         inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
       contents:
         inline: |
@@ -133,7 +141,6 @@ storage:
           DefaultCPUAccounting=yes
           DefaultMemoryAccounting=yes
           DefaultBlockIOAccounting=yes
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
      contents:

@@ -3,9 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws      = ">= 2.23, <= 5.0"
-    template = "~> 2.2"
-
+    aws = ">= 2.23, <= 5.0"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"

@@ -1,6 +1,6 @@
 # Workers AutoScaling Group
 resource "aws_autoscaling_group" "workers" {
-  name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
+  name = "${var.name}-worker"
 
   # count
   desired_capacity = var.worker_count
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
   vpc_zone_identifier = var.subnet_ids
 
   # template
-  launch_configuration = aws_launch_configuration.worker.name
+  launch_template {
+    id      = aws_launch_template.worker.id
+    version = aws_launch_template.worker.latest_version
+  }
 
   # target groups to which instances should be added
   target_group_arns = flatten([
@@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
     var.target_groups,
   ])
 
+  instance_refresh {
+    strategy = "Rolling"
+    preferences {
+      instance_warmup        = 120
+      min_healthy_percentage = 90
+    }
+  }
+
   lifecycle {
     # override the default destroy and replace update behavior
     create_before_destroy = true
@@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
 }
 
 # Worker template
-resource "aws_launch_configuration" "worker" {
-  image_id          = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
-  instance_type     = var.instance_type
-  spot_price        = var.spot_price > 0 ? var.spot_price : null
-  enable_monitoring = false
+resource "aws_launch_template" "worker" {
+  name_prefix   = "${var.name}-worker"
+  image_id      = local.ami_id
+  instance_type = var.instance_type
+  monitoring {
+    enabled = false
+  }
 
-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
-  root_block_device {
-    volume_type = var.disk_type
-    volume_size = var.disk_size
-    iops        = var.disk_iops
-    encrypted   = true
+  ebs_optimized = true
+  block_device_mappings {
+    device_name = "/dev/xvda"
+    ebs {
+      volume_type           = var.disk_type
+      volume_size           = var.disk_size
+      iops                  = var.disk_iops
+      encrypted             = true
+      delete_on_termination = true
+    }
   }
 
   # network
-  security_groups = var.security_groups
+  vpc_security_group_ids = var.security_groups
 
+  # spot
+  dynamic "instance_market_options" {
+    for_each = var.spot_price > 0 ? [1] : []
+    content {
+      market_type = "spot"
+      spot_options {
+        max_price = var.spot_price
+      }
+    }
+  }
 
   lifecycle {
     // Override the default destroy and replace update behavior
@@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
   }
 }
 
-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
-  strict   = true
-  snippets = var.snippets
-}
-
-# Worker Fedora CoreOS config
-data "template_file" "worker-config" {
-  template = file("${path.module}/fcc/worker.yaml")
-
-  vars = {
+# Fedora CoreOS worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     kubeconfig             = indent(10, var.kubeconfig)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
     node_labels            = join(",", var.node_labels)
     node_taints            = join(",", var.node_taints)
-  }
+  })
+  strict   = true
+  snippets = var.snippets
 }

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.26.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -1,4 +1,5 @@
 ---
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service
@@ -10,7 +11,7 @@ systemd:
       Requires=docker.service
       After=docker.service
       [Service]
-      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
+      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
       ExecStartPre=/usr/bin/docker run -d \
         --name etcd \
         --network host \
@@ -57,7 +58,7 @@ systemd:
       After=coreos-metadata.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
      EnvironmentFile=/run/metadata/coreos
      ExecStartPre=/bin/mkdir -p /etc/cni/net.d
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -83,26 +84,13 @@ systemd:
        -v /var/log:/var/log \
        -v /opt/cni/bin:/opt/cni/bin \
        $${KUBELET_IMAGE} \
-        --anonymous-auth=false \
-        --authentication-token-webhook \
-        --authorization-mode=Webhook \
        --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-        --cgroup-driver=systemd \
-        --container-runtime=remote \
+        --config=/etc/kubernetes/kubelet.yaml \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-        --client-ca-file=/etc/kubernetes/ca.crt \
-        --cluster_dns=${cluster_dns_service_ip} \
-        --cluster_domain=${cluster_domain_suffix} \
-        --healthz-port=0 \
        --kubeconfig=/var/lib/kubelet/kubeconfig \
        --node-labels=node.kubernetes.io/controller="true" \
-        --pod-manifest-path=/etc/kubernetes/manifests \
        --provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
-        --read-only-port=0 \
-        --resolv-conf=/run/systemd/resolve/resolv.conf \
-        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-        --rotate-certificates \
-        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
      ExecStart=docker logs -f kubelet
      ExecStop=docker stop kubelet
      ExecStopPost=docker rm kubelet
@@ -121,7 +109,7 @@ systemd:
      Type=oneshot
      RemainAfterExit=true
      WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
      ExecStart=/usr/bin/docker run \
        -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
        -v /opt/bootstrap/assets:/assets:ro \
@@ -134,18 +122,42 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
   files:
     - path: /etc/kubernetes/kubeconfig
-      filesystem: root
       mode: 0644
       contents:
         inline: |
           ${kubeconfig}
+    - path: /etc/kubernetes/kubelet.yaml
+      mode: 0644
+      contents:
+        inline: |
+          apiVersion: kubelet.config.k8s.io/v1beta1
+          kind: KubeletConfiguration
+          authentication:
+            anonymous:
+              enabled: false
+            webhook:
+              enabled: true
+            x509:
+              clientCAFile: /etc/kubernetes/ca.crt
+          authorization:
+            mode: Webhook
+          cgroupDriver: systemd
+          clusterDNS:
+            - ${cluster_dns_service_ip}
+          clusterDomain: ${cluster_domain_suffix}
+          healthzPort: 0
+          rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
+          staticPodPath: /etc/kubernetes/manifests
+          readOnlyPort: 0
+          resolvConf: /run/systemd/resolve/resolv.conf
+          volumePluginDir: /var/lib/kubelet/volumeplugins
     - path: /opt/bootstrap/layout
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -168,7 +180,6 @@ storage:
           mv manifests-networking/* /opt/bootstrap/assets/manifests/
           rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -182,14 +193,17 @@ storage:
           echo "Retry applying manifests"
           sleep 5
         done
+    - path: /etc/systemd/logind.conf.d/inhibitors.conf
+      contents:
+        inline: |
+          [Login]
+          InhibitDelayMaxSec=45s
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
     - path: /etc/etcd/etcd.env
-      filesystem: root
       mode: 0644
       contents:
        inline: |
@ -24,7 +24,7 @@ resource "aws_instance" "controllers" {
|
||||
instance_type = var.controller_type
|
||||
|
||||
ami = local.ami_id
|
||||
user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
|
||||
user_data = data.ct_config.controllers.*.rendered[count.index]
|
||||
|
||||
# storage
|
||||
root_block_device {
|
||||
@ -32,6 +32,7 @@ resource "aws_instance" "controllers" {
|
||||
volume_size = var.disk_size
|
||||
iops = var.disk_iops
|
||||
encrypted = true
|
||||
tags = {}
|
||||
}
|
||||
|
||||
# network
|
||||
@ -47,41 +48,22 @@ resource "aws_instance" "controllers" {
|
||||
}
|
||||
}
|
||||
|
||||
# Controller Ignition configs
|
||||
data "ct_config" "controller-ignitions" {
|
||||
count = var.controller_count
|
||||
content = data.template_file.controller-configs.*.rendered[count.index]
|
||||
strict = true
|
||||
snippets = var.controller_snippets
|
||||
}
|
||||
|
||||
# Controller Container Linux configs
|
||||
data "template_file" "controller-configs" {
|
||||
# Flatcar Linux controllers
|
||||
data "ct_config" "controllers" {
|
||||
count = var.controller_count
|
||||
|
||||
template = file("${path.module}/cl/controller.yaml")
|
||||
|
||||
vars = {
|
||||
content = templatefile("${path.module}/butane/controller.yaml", {
|
||||
# Cannot use cyclic dependencies on controllers or their DNS records
|
||||
etcd_name = "etcd${count.index}"
|
||||
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
|
||||
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
|
||||
etcd_initial_cluster = join(",", [
|
||||
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
|
||||
])
|
||||
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
}
|
||||
})
|
||||
strict = true
|
||||
snippets = var.controller_snippets
|
||||
}
|
||||
|
||||
data "template_file" "etcds" {
|
||||
count = var.controller_count
|
||||
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
|
||||
|
||||
vars = {
|
||||
index = count.index
|
||||
cluster_name = var.cluster_name
|
||||
dns_zone = var.dns_zone
|
||||
}
|
||||
}
|
||||
|
||||
|
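The hunk above replaces the deprecated hashicorp/template data sources with Terraform's built-in `templatefile()` function, so the rendered Butane config feeds the poseidon/ct provider directly. A minimal sketch of the new shape (variable names abbreviated from the hunk, not a complete module):

```hcl
# Sketch only: rendering moves from a data "template_file" resource to the
# built-in templatefile() function, which returns a plain string, so no
# extra provider is required.
data "ct_config" "controllers" {
  count = var.controller_count
  content = templatefile("${path.module}/butane/controller.yaml", {
    etcd_name = "etcd${count.index}"
    # ...remaining template variables as in the hunk above...
  })
  strict   = true
  snippets = var.controller_snippets
}
```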
@@ -3,13 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
aws = ">= 2.23, <= 5.0"
template = "~> 2.2"
null = ">= 2.1"

aws = ">= 2.23, <= 5.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}

@@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@@ -29,7 +30,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -58,17 +59,9 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
@@ -77,12 +70,7 @@ systemd:
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID}
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@@ -90,29 +78,46 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |

@@ -3,12 +3,10 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
aws = ">= 2.23, <= 5.0"
template = "~> 2.2"

aws = ">= 2.23, <= 5.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}
}
}
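Both versions.tf hunks follow the same pattern: drop the archived template provider and require poseidon/ct v0.11+, which renders Butane configs without a separate templating step. The resulting constraint block, per the hunks above:

```hcl
# Provider pins after the migration: the template provider is gone entirely.
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    aws  = ">= 2.23, <= 5.0"
    null = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.11"
    }
  }
}
```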
@@ -1,6 +1,6 @@
# Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" {
name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
name = "${var.name}-worker"

# count
desired_capacity = var.worker_count
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
vpc_zone_identifier = var.subnet_ids

# template
launch_configuration = aws_launch_configuration.worker.name
launch_template {
id = aws_launch_template.worker.id
version = aws_launch_template.worker.latest_version
}

# target groups to which instances should be added
target_group_arns = flatten([
@@ -22,6 +25,14 @@ resource "aws_autoscaling_group" "workers" {
var.target_groups,
])

instance_refresh {
strategy = "Rolling"
preferences {
instance_warmup = 120
min_healthy_percentage = 90
}
}

lifecycle {
# override the default destroy and replace update behavior
create_before_destroy = true
@@ -41,24 +52,42 @@ resource "aws_autoscaling_group" "workers" {
}

# Worker template
resource "aws_launch_configuration" "worker" {
image_id = local.ami_id
instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false
resource "aws_launch_template" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type
monitoring {
enabled = false
}

user_data = data.ct_config.worker-ignition.rendered
user_data = sensitive(base64encode(data.ct_config.worker.rendered))

# storage
root_block_device {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
ebs_optimized = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_type = var.disk_type
volume_size = var.disk_size
iops = var.disk_iops
encrypted = true
delete_on_termination = true
}
}

# network
security_groups = var.security_groups
vpc_security_group_ids = var.security_groups

# spot
dynamic "instance_market_options" {
for_each = var.spot_price > 0 ? [1] : []
content {
market_type = "spot"
spot_options {
max_price = var.spot_price
}
}
}

lifecycle {
// Override the default destroy and replace update behavior
@@ -67,24 +96,16 @@ resource "aws_launch_configuration" "worker" {
}
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
}
})
strict = true
snippets = var.snippets
}

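The workers hunk swaps the deprecated `aws_launch_configuration` for an `aws_launch_template` and points the autoscaling group at it, which is what enables the new `instance_refresh` rolling updates. A trimmed sketch of the wiring (attributes abbreviated from the hunk above):

```hcl
resource "aws_autoscaling_group" "workers" {
  name = "${var.name}-worker"

  # reference the launch template instead of a launch configuration
  launch_template {
    id      = aws_launch_template.worker.id
    version = aws_launch_template.worker.latest_version
  }

  # roll workers automatically when the launch template changes
  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 120
      min_healthy_percentage = 90
    }
  }
}
```

Because the group name no longer embeds the launch configuration name, template changes replace instances in place rather than forcing a new autoscaling group.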
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.23.5 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@@ -53,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -62,15 +63,19 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /etc/machine-id:/etc/machine-id:ro \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \
--volume /etc/selinux:/etc/selinux \
--volume /sys/fs/selinux:/sys/fs/selinux \
--volume /var/lib/calico:/var/lib/calico:ro \
--volume /var/lib/containerd:/var/lib/containerd \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@@ -78,27 +83,12 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@@ -121,7 +111,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.23.5
quay.io/poseidon/kubelet:v1.26.1
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -136,6 +126,33 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@@ -172,6 +189,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -216,7 +238,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@@ -35,7 +35,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id

size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])

# storage
source_image_id = var.os_image
@@ -111,41 +111,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/fcc/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

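Here, as in the AWS module, the `data "template_file" "etcds"` indirection collapses into a single for expression over `range(var.controller_count)`. A sketch of the replacement under the same variable names:

```hcl
# Sketch: the for expression that replaces the template_file "etcds" data source.
locals {
  etcd_initial_cluster = join(",", [
    for i in range(var.controller_count) :
    "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
  ])
}

# With cluster_name = "demo", dns_zone = "example.com", controller_count = 2:
# "etcd0=https://demo-etcd0.example.com:2380,etcd1=https://demo-etcd1.example.com:2380"
```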
@@ -53,8 +53,6 @@ resource "azurerm_lb" "cluster" {
}

resource "azurerm_lb_rule" "apiserver" {
resource_group_name = azurerm_resource_group.cluster.name

name = "apiserver"
loadbalancer_id = azurerm_lb.cluster.id
frontend_ip_configuration_name = "apiserver"
@@ -67,8 +65,6 @@ resource "azurerm_lb_rule" "apiserver" {
}

resource "azurerm_lb_rule" "ingress-http" {
resource_group_name = azurerm_resource_group.cluster.name

name = "ingress-http"
loadbalancer_id = azurerm_lb.cluster.id
frontend_ip_configuration_name = "ingress"
@@ -82,8 +78,6 @@ resource "azurerm_lb_rule" "ingress-http" {
}

resource "azurerm_lb_rule" "ingress-https" {
resource_group_name = azurerm_resource_group.cluster.name

name = "ingress-https"
loadbalancer_id = azurerm_lb.cluster.id
frontend_ip_configuration_name = "ingress"
@@ -98,8 +92,6 @@ resource "azurerm_lb_rule" "ingress-https" {

# Worker outbound TCP/UDP SNAT
resource "azurerm_lb_outbound_rule" "worker-outbound" {
resource_group_name = azurerm_resource_group.cluster.name

name = "worker"
loadbalancer_id = azurerm_lb.cluster.id
frontend_ip_configuration {
@@ -126,8 +118,6 @@ resource "azurerm_lb_backend_address_pool" "worker" {

# TCP health check for apiserver
resource "azurerm_lb_probe" "apiserver" {
resource_group_name = azurerm_resource_group.cluster.name

name = "apiserver"
loadbalancer_id = azurerm_lb.cluster.id
protocol = "Tcp"
@@ -141,8 +131,6 @@ resource "azurerm_lb_probe" "apiserver" {

# HTTP health check for ingress
resource "azurerm_lb_probe" "ingress" {
resource_group_name = azurerm_resource_group.cluster.name

name = "ingress"
loadbalancer_id = azurerm_lb.cluster.id
protocol = "Http"

@@ -41,4 +41,3 @@ resource "azurerm_subnet_network_security_group_association" "worker" {
subnet_id = azurerm_subnet.worker.id
network_security_group_id = azurerm_network_security_group.worker.id
}

@@ -43,9 +43,9 @@ output "worker_security_group_name" {
value = azurerm_network_security_group.worker.name
}

output "worker_address_prefix" {
description = "Worker network subnet CIDR address (for source/destination)"
value = azurerm_subnet.worker.address_prefix
output "worker_address_prefixes" {
description = "Worker network subnet CIDR addresses (for source/destination)"
value = azurerm_subnet.worker.address_prefixes
}

# Outputs for custom load balancing
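The output rename above tracks an azurerm provider change: `azurerm_subnet.address_prefix` (a string) became `address_prefixes` (a list), so every security rule below that needs both subnets must concatenate two lists instead of building a list of two strings. The substitution that recurs through the whole security.tf hunk:

```hcl
# Old: a list built from two string attributes
# source_address_prefixes    = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
# destination_address_prefix = azurerm_subnet.controller.address_prefix

# New: concatenation of two list attributes, plus the plural destination attribute
source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
```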
@@ -10,171 +10,171 @@ resource "azurerm_network_security_group" "controller" {
resource "azurerm_network_security_rule" "controller-icmp" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-ssh" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-ssh"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2000"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-ssh"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2000"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-etcd" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-etcd"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2005"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "2379-2380"
source_address_prefix = azurerm_subnet.controller.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-etcd"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2005"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "2379-2380"
source_address_prefixes = azurerm_subnet.controller.address_prefixes
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Allow Prometheus to scrape etcd metrics
resource "azurerm_network_security_rule" "controller-etcd-metrics" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-etcd-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2010"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "2381"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-etcd-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2010"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "2381"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Allow Prometheus to scrape kube-proxy metrics
resource "azurerm_network_security_rule" "controller-kube-proxy" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-kube-proxy-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2011"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-kube-proxy-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2011"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
resource "azurerm_network_security_rule" "controller-kube-metrics" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-kube-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2012"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10257-10259"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-kube-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2012"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10257-10259"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-apiserver" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-apiserver"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2015"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "6443"
source_address_prefix = "*"
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-apiserver"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2015"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "6443"
source_address_prefix = "*"
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-cilium-health" {
resource_group_name = azurerm_resource_group.cluster.name
count = var.networking == "cilium" ? 1 : 0

name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2019"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2019"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-vxlan"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2020"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "4789"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-vxlan"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2020"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "4789"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

resource "azurerm_network_security_rule" "controller-linux-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2021"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2021"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "controller-node-exporter" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-node-exporter"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2025"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "9100"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
name = "allow-node-exporter"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2025"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "9100"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Allow apiserver to access kubelet's for exec, log, port-forward
@@ -191,8 +191,8 @@ resource "azurerm_network_security_rule" "controller-kubelet" {
destination_port_range = "10250"

# allow Prometheus to scrape kubelet metrics too
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
@@ -240,139 +240,139 @@ resource "azurerm_network_security_group" "worker" {
resource "azurerm_network_security_rule" "worker-icmp" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-ssh" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-ssh"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2000"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = azurerm_subnet.controller.address_prefix
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-ssh"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2000"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefixes = azurerm_subnet.controller.address_prefixes
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-http" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-http"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2005"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "*"
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-http"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2005"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "*"
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-https" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-https"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2010"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-https"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2010"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-cilium-health" {
resource_group_name = azurerm_resource_group.cluster.name
count = var.networking == "cilium" ? 1 : 0

name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2014"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2014"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-vxlan"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2015"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "4789"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-vxlan"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2015"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "4789"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

resource "azurerm_network_security_rule" "worker-linux-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2016"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2016"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "worker-node-exporter" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-node-exporter"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2020"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "9100"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-node-exporter"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2020"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "9100"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

# Allow Prometheus to scrape kube-proxy
resource "azurerm_network_security_rule" "worker-kube-proxy" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-kube-proxy"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2024"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.worker.address_prefix
name = "allow-kube-proxy"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2024"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefixes = azurerm_subnet.worker.address_prefixes
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

# Allow apiserver to access kubelet's for exec, log, port-forward
@@ -389,8 +389,8 @@ resource "azurerm_network_security_rule" "worker-kubelet" {
destination_port_range = "10250"

# allow Prometheus to scrape kubelet metrics too
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
@@ -43,7 +43,7 @@ variable "controller_type" {
variable "worker_type" {
type = string
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {

@@ -3,10 +3,8 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = "~> 2.8"
template = "~> 2.2"
null = ">= 2.1"

azurerm = ">= 2.8, < 4.0"
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"

@@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -35,15 +35,19 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/machine-id:/etc/machine-id:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \
--volume /etc/selinux:/etc/selinux \
--volume /sys/fs/selinux:/sys/fs/selinux \
--volume /var/lib/calico:/var/lib/calico:ro \
--volume /var/lib/containerd:/var/lib/containerd \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@@ -51,51 +55,23 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
--node-labels=${label} \
%{~ endfor ~}
%{~ for taint in split(",", node_taints) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@@ -105,6 +81,38 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@@ -128,7 +136,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:

@@ -41,7 +41,7 @@ variable "worker_count" {
variable "vm_type" {
type = string
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
default = "Standard_DS1_v2"
default = "Standard_D2as_v5"
}

variable "os_image" {

@@ -3,9 +3,7 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
azurerm = "~> 2.8"
template = "~> 2.2"

azurerm = ">= 2.8, < 4.0"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
@ -9,7 +9,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
|
||||
# instance name prefix for instances in the set
|
||||
computer_name_prefix = "${var.name}-worker"
|
||||
single_placement_group = false
|
||||
custom_data = base64encode(data.ct_config.worker-ignition.rendered)
|
||||
custom_data = base64encode(data.ct_config.worker.rendered)
|
||||
|
||||
# storage
|
||||
source_image_id = var.os_image
|
||||
@ -70,24 +70,17 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
|
||||
}
|
||||
}
|
||||
|
||||
# Worker Ignition configs
|
||||
data "ct_config" "worker-ignition" {
|
||||
content = data.template_file.worker-config.rendered
|
||||
strict = true
|
||||
snippets = var.snippets
|
||||
}
|
||||
|
||||
# Worker Fedora CoreOS configs
|
||||
data "template_file" "worker-config" {
|
||||
template = file("${path.module}/fcc/worker.yaml")
|
||||
|
||||
vars = {
|
||||
# Fedora CoreOS worker
|
||||
data "ct_config" "worker" {
|
||||
content = templatefile("${path.module}/butane/worker.yaml", {
|
||||
kubeconfig = indent(10, var.kubeconfig)
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
node_labels = join(",", var.node_labels)
|
||||
node_taints = join(",", var.node_taints)
|
||||
}
|
||||
})
|
||||
strict = true
|
||||
snippets = var.snippets
|
||||
}
|
||||
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.23.5 (upstream)
|
||||
* Kubernetes v1.26.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
|
@@ -1,4 +1,5 @@
+---
 variant: flatcar
 version: 1.0.0
 systemd:
   units:
   - name: etcd-member.service

@@ -10,7 +11,7 @@ systemd:
         Requires=docker.service
         After=docker.service
         [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
         ExecStartPre=/usr/bin/docker run -d \
           --name etcd \
          --network host \

@@ -55,7 +56,7 @@ systemd:
         After=docker.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -80,25 +81,12 @@ systemd:
           -v /var/log:/var/log \
           -v /opt/cni/bin:/opt/cni/bin \
           $${KUBELET_IMAGE} \
-          --anonymous-auth=false \
-          --authentication-token-webhook \
-          --authorization-mode=Webhook \
           --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-          --cgroup-driver=systemd \
-          --container-runtime=remote \
+          --config=/etc/kubernetes/kubelet.yaml \
           --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-          --client-ca-file=/etc/kubernetes/ca.crt \
-          --cluster_dns=${cluster_dns_service_ip} \
-          --cluster_domain=${cluster_domain_suffix} \
-          --healthz-port=0 \
           --kubeconfig=/var/lib/kubelet/kubeconfig \
           --node-labels=node.kubernetes.io/controller="true" \
-          --pod-manifest-path=/etc/kubernetes/manifests \
-          --read-only-port=0 \
-          --resolv-conf=/run/systemd/resolve/resolv.conf \
-          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-          --rotate-certificates \
-          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
         ExecStart=docker logs -f kubelet
         ExecStop=docker stop kubelet
         ExecStopPost=docker rm kubelet

@@ -117,7 +105,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
         ExecStart=/usr/bin/docker run \
           -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
           -v /opt/bootstrap/assets:/assets:ro \

@@ -130,18 +118,42 @@ systemd:
 storage:
   directories:
   - path: /var/lib/etcd
-    filesystem: root
     mode: 0700
     overwrite: true
   files:
   - path: /etc/kubernetes/kubeconfig
-    filesystem: root
     mode: 0644
     contents:
       inline: |
        ${kubeconfig}
+  - path: /etc/kubernetes/kubelet.yaml
+    mode: 0644
+    contents:
+      inline: |
+        apiVersion: kubelet.config.k8s.io/v1beta1
+        kind: KubeletConfiguration
+        authentication:
+          anonymous:
+            enabled: false
+          webhook:
+            enabled: true
+          x509:
+            clientCAFile: /etc/kubernetes/ca.crt
+        authorization:
+          mode: Webhook
+        cgroupDriver: systemd
+        clusterDNS:
+          - ${cluster_dns_service_ip}
+        clusterDomain: ${cluster_domain_suffix}
+        healthzPort: 0
+        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
+        staticPodPath: /etc/kubernetes/manifests
+        readOnlyPort: 0
+        resolvConf: /run/systemd/resolve/resolv.conf
+        volumePluginDir: /var/lib/kubelet/volumeplugins
   - path: /opt/bootstrap/layout
-    filesystem: root
     mode: 0544
     contents:
       inline: |

@@ -164,7 +176,6 @@ storage:
         mv manifests-networking/* /opt/bootstrap/assets/manifests/
         rm -rf assets auth static-manifests tls manifests-networking
   - path: /opt/bootstrap/apply
-    filesystem: root
     mode: 0544
     contents:
       inline: |

@@ -178,14 +189,17 @@ storage:
           echo "Retry applying manifests"
           sleep 5
         done
+  - path: /etc/systemd/logind.conf.d/inhibitors.conf
+    contents:
+      inline: |
+        [Login]
+        InhibitDelayMaxSec=45s
   - path: /etc/sysctl.d/max-user-watches.conf
-    filesystem: root
     mode: 0644
     contents:
       inline: |
        fs.inotify.max_user_watches=16184
   - path: /etc/etcd/etcd.env
-    filesystem: root
     mode: 0644
     contents:
       inline: |
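The `${kubeconfig}` interpolation above lands inside a YAML `inline: |` literal, so the Terraform side must pre-indent the multi-line credential for the rendered YAML to stay valid. A small sketch of the pairing, assuming a `var.kubeconfig` input:

```tf
locals {
  # indent() prefixes every line after the first with 10 spaces, lining the
  # multi-line kubeconfig up under the Butane "inline: |" block it fills.
  worker_config = templatefile("${path.module}/butane/worker.yaml", {
    kubeconfig = indent(10, var.kubeconfig)
  })
}
```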
@@ -17,7 +17,9 @@ resource "azurerm_dns_a_record" "etcds" {
 locals {
   # Container Linux derivative
   # flatcar-stable -> Flatcar Linux Stable
-  channel = split("-", var.os_image)[1]
+  channel      = split("-", var.os_image)[1]
+  offer_suffix = var.arch == "arm64" ? "corevm" : "free"
+  urn          = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
 }
 
 # Controller availability set to spread controllers

@@ -41,7 +43,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   availability_set_id = azurerm_availability_set.controllers.id
 
   size        = var.controller_type
-  custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
+  custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])
 
   # storage
   os_disk {

@@ -53,16 +55,19 @@ resource "azurerm_linux_virtual_machine" "controllers" {
 
   # Flatcar Container Linux
   source_image_reference {
-    publisher = "Kinvolk"
-    offer     = "flatcar-container-linux-free"
-    sku       = local.channel
+    publisher = "kinvolk"
+    offer     = "flatcar-container-linux-${local.offer_suffix}"
+    sku       = local.urn
     version   = "latest"
   }
 
-  plan {
-    name      = local.channel
-    publisher = "kinvolk"
-    product   = "flatcar-container-linux-free"
+  dynamic "plan" {
+    for_each = var.arch == "arm64" ? [] : [1]
+    content {
+      publisher = "kinvolk"
+      product   = "flatcar-container-linux-${local.offer_suffix}"
+      name      = local.urn
+    }
   }
 
   # network

@@ -130,41 +135,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
   backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
 }
 
-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Container Linux configs
-data "template_file" "controller-configs" {
+# Flatcar Linux controllers
+data "ct_config" "controllers" {
   count = var.controller_count
 
-  template = file("${path.module}/cl/controller.yaml")
-
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}
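The controllers hunk above replaces the `template_file.etcds` helper with an inline `for` expression over `range()`. The expression can be sanity-checked in isolation; a standalone sketch with made-up values:

```tf
locals {
  cluster_name     = "example"
  dns_zone         = "azure.example.com"
  controller_count = 3

  # Builds "etcd0=https://example-etcd0.azure.example.com:2380,etcd1=...,etcd2=..."
  # range(3) yields [0, 1, 2]; join() flattens the resulting list to one string.
  etcd_initial_cluster = join(",", [
    for i in range(local.controller_count) :
    "etcd${i}=https://${local.cluster_name}-etcd${i}.${local.dns_zone}:2380"
  ])
}

output "etcd_initial_cluster" {
  value = local.etcd_initial_cluster
}
```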
@@ -53,8 +53,6 @@ resource "azurerm_lb" "cluster" {
 }
 
 resource "azurerm_lb_rule" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                           = "apiserver"
   loadbalancer_id                = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "apiserver"

@@ -67,8 +65,6 @@ resource "azurerm_lb_rule" "apiserver" {
 }
 
 resource "azurerm_lb_rule" "ingress-http" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                           = "ingress-http"
   loadbalancer_id                = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"

@@ -82,8 +78,6 @@ resource "azurerm_lb_rule" "ingress-http" {
 }
 
 resource "azurerm_lb_rule" "ingress-https" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                           = "ingress-https"
   loadbalancer_id                = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"

@@ -98,8 +92,6 @@ resource "azurerm_lb_rule" "ingress-https" {
 
 # Worker outbound TCP/UDP SNAT
 resource "azurerm_lb_outbound_rule" "worker-outbound" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name            = "worker"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration {

@@ -126,8 +118,6 @@ resource "azurerm_lb_backend_address_pool" "worker" {
 
 # TCP health check for apiserver
 resource "azurerm_lb_probe" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name            = "apiserver"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol        = "Tcp"

@@ -141,8 +131,6 @@ resource "azurerm_lb_probe" "apiserver" {
 
 # HTTP health check for ingress
 resource "azurerm_lb_probe" "ingress" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name            = "ingress"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol        = "Http"

@@ -41,4 +41,3 @@ resource "azurerm_subnet_network_security_group_association" "worker" {
   subnet_id                 = azurerm_subnet.worker.id
   network_security_group_id = azurerm_network_security_group.worker.id
 }
-
@@ -43,9 +43,9 @@ output "worker_security_group_name" {
   value = azurerm_network_security_group.worker.name
 }
 
-output "worker_address_prefix" {
-  description = "Worker network subnet CIDR address (for source/destination)"
-  value       = azurerm_subnet.worker.address_prefix
+output "worker_address_prefixes" {
+  description = "Worker network subnet CIDR addresses (for source/destination)"
+  value       = azurerm_subnet.worker.address_prefixes
 }
 
 # Outputs for custom load balancing
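With azurerm 3.x subnets, the scalar `address_prefix` gives way to the list-valued `address_prefixes`, hence the renamed output. A sketch of a downstream consumer (a hypothetical custom rule; `module.ramius` follows the docs' example cluster name and is not defined here):

```tf
# Sketch only: shows that the renamed output is a list, so it feeds the
# plural destination_address_prefixes argument rather than the scalar one.
resource "azurerm_network_security_rule" "custom" {
  resource_group_name          = "example-rg"
  network_security_group_name  = "example-nsg"
  name                         = "allow-example"
  priority                     = "2030"
  access                       = "Allow"
  direction                    = "Inbound"
  protocol                     = "Tcp"
  source_port_range            = "*"
  destination_port_range       = "8080"
  source_address_prefix        = "*"
  destination_address_prefixes = module.ramius.worker_address_prefixes
}
```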
@@ -10,171 +10,171 @@ resource "azurerm_network_security_group" "controller" {
 resource "azurerm_network_security_rule" "controller-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-icmp"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "1995"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Icmp"
   source_port_range            = "*"
   destination_port_range       = "*"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-ssh"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2000"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "22"
   source_address_prefix        = "*"
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-etcd" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-etcd"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2005"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "2379-2380"
-  source_address_prefix        = azurerm_subnet.controller.address_prefix
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape etcd metrics
 resource "azurerm_network_security_rule" "controller-etcd-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-etcd-metrics"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2010"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "2381"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-proxy metrics
 resource "azurerm_network_security_rule" "controller-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-kube-proxy-metrics"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2011"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10249"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
 resource "azurerm_network_security_rule" "controller-kube-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-kube-metrics"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2012"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10257-10259"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-apiserver" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-apiserver"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2015"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "6443"
   source_address_prefix        = "*"
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count               = var.networking == "cilium" ? 1 : 0
 
   name                         = "allow-cilium-health"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2019"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "4240"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-vxlan"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2020"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "4789"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-linux-vxlan"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2021"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "8472"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-node-exporter"
   network_security_group_name  = azurerm_network_security_group.controller.name
   priority                     = "2025"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "9100"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward

@@ -191,8 +191,8 @@ resource "azurerm_network_security_rule" "controller-kubelet" {
   destination_port_range = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.controller.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound

@@ -240,139 +240,139 @@ resource "azurerm_network_security_group" "worker" {
 resource "azurerm_network_security_rule" "worker-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-icmp"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "1995"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Icmp"
   source_port_range            = "*"
   destination_port_range       = "*"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-ssh"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2000"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "22"
-  source_address_prefix        = azurerm_subnet.controller.address_prefix
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-http" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-http"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2005"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "80"
   source_address_prefix        = "*"
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-https" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-https"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2010"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "443"
   source_address_prefix        = "*"
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count               = var.networking == "cilium" ? 1 : 0
 
   name                         = "allow-cilium-health"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2014"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "4240"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-vxlan"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2015"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "4789"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-linux-vxlan"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2016"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "8472"
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-node-exporter"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2020"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "9100"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-proxy
 resource "azurerm_network_security_rule" "worker-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name                         = "allow-kube-proxy"
   network_security_group_name  = azurerm_network_security_group.worker.name
   priority                     = "2024"
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10249"
-  source_address_prefix        = azurerm_subnet.worker.address_prefix
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward

@@ -389,8 +389,8 @@ resource "azurerm_network_security_rule" "worker-kubelet" {
   destination_port_range = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix   = azurerm_subnet.worker.address_prefix
+  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
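Every rule above swaps the scalar `address_prefix` arguments for their plural list forms; where two scalars were previously wrapped in a literal list, the subnet lists are now merged with `concat()`, since a literal `[list_a, list_b]` would nest rather than flatten. A standalone sketch:

```tf
locals {
  controller_prefixes = ["10.0.1.0/24"]
  worker_prefixes     = ["10.0.2.0/24", "10.0.3.0/24"]

  # concat() merges the CIDR lists into one flat list; writing
  # [local.controller_prefixes, local.worker_prefixes] would instead
  # produce a nested list of lists, which the NSG rule arguments reject.
  all_prefixes = concat(local.controller_prefixes, local.worker_prefixes)
  # => ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
```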
|
@ -43,7 +43,7 @@ variable "controller_type" {
|
||||
variable "worker_type" {
|
||||
type = string
|
||||
description = "Machine type for workers (see `az vm list-skus --location centralus`)"
|
||||
default = "Standard_DS1_v2"
|
||||
default = "Standard_D2as_v5"
|
||||
}
|
||||
|
||||
variable "os_image" {
|
||||
@ -133,12 +133,15 @@ variable "worker_node_labels" {
|
||||
default = []
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
variable "arch" {
|
||||
type = string
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
default = "cluster.local"
|
||||
description = "Container architecture (amd64 or arm64)"
|
||||
default = "amd64"
|
||||
|
||||
validation {
|
||||
condition = var.arch == "amd64" || var.arch == "arm64"
|
||||
error_message = "The arch must be amd64 or arm64."
|
||||
}
|
||||
}
|
||||
|
||||
variable "daemonset_tolerations" {
|
||||
@ -146,3 +149,11 @@ variable "daemonset_tolerations" {
|
||||
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
|
||||
default = []
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
type = string
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
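The new `arch` variable demonstrates Terraform's `validation` block, which rejects bad values at plan time rather than at apply time. An equivalent formulation using `contains()` (a sketch, not the module's code):

```tf
variable "arch" {
  type        = string
  description = "Container architecture (amd64 or arm64)"
  default     = "amd64"

  validation {
    # contains() over an allow-list is equivalent to the two-way OR above
    # and scales more readably if further architectures are ever added.
    condition     = contains(["amd64", "arm64"], var.arch)
    error_message = "The arch must be amd64 or arm64."
  }
}
```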
@@ -3,13 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm  = "~> 2.8"
-    template = "~> 2.2"
-    null     = ">= 2.1"
+    azurerm = ">= 2.8, < 4.0"
+    null    = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }
   }
 }
@@ -21,4 +21,5 @@ module "workers" {
   cluster_domain_suffix = var.cluster_domain_suffix
   snippets              = var.worker_snippets
   node_labels           = var.worker_node_labels
+  arch                  = var.arch
 }
@@ -1,4 +1,5 @@
+---
 variant: flatcar
 version: 1.0.0
 systemd:
   units:
   - name: docker.service

@@ -27,7 +28,7 @@ systemd:
         After=docker.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -55,30 +56,17 @@ systemd:
           -v /var/log:/var/log \
           -v /opt/cni/bin:/opt/cni/bin \
           $${KUBELET_IMAGE} \
-          --anonymous-auth=false \
-          --authentication-token-webhook \
-          --authorization-mode=Webhook \
           --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-          --cgroup-driver=systemd \
-          --container-runtime=remote \
+          --config=/etc/kubernetes/kubelet.yaml \
           --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-          --client-ca-file=/etc/kubernetes/ca.crt \
-          --cluster_dns=${cluster_dns_service_ip} \
-          --cluster_domain=${cluster_domain_suffix} \
-          --healthz-port=0 \
           --kubeconfig=/var/lib/kubelet/kubeconfig \
-          --node-labels=node.kubernetes.io/node \
           %{~ for label in split(",", node_labels) ~}
           --node-labels=${label} \
           %{~ endfor ~}
           %{~ for taint in split(",", node_taints) ~}
           --register-with-taints=${taint} \
           %{~ endfor ~}
-          --pod-manifest-path=/etc/kubernetes/manifests \
-          --read-only-port=0 \
-          --resolv-conf=/run/systemd/resolve/resolv.conf \
-          --rotate-certificates \
-          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+          --node-labels=node.kubernetes.io/node
         ExecStart=docker logs -f kubelet
         ExecStop=docker stop kubelet
         ExecStopPost=docker rm kubelet

@@ -86,29 +74,46 @@ systemd:
         RestartSec=5
         [Install]
         WantedBy=multi-user.target
-  - name: delete-node.service
-    enabled: true
-    contents: |
-      [Unit]
-      Description=Delete Kubernetes node on shutdown
-      [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
-      Type=oneshot
-      RemainAfterExit=true
-      ExecStart=/bin/true
-      ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-      [Install]
-      WantedBy=multi-user.target
 storage:
   files:
   - path: /etc/kubernetes/kubeconfig
-    filesystem: root
     mode: 0644
     contents:
       inline: |
        ${kubeconfig}
+  - path: /etc/kubernetes/kubelet.yaml
+    mode: 0644
+    contents:
+      inline: |
+        apiVersion: kubelet.config.k8s.io/v1beta1
+        kind: KubeletConfiguration
+        authentication:
+          anonymous:
+            enabled: false
+          webhook:
+            enabled: true
+          x509:
+            clientCAFile: /etc/kubernetes/ca.crt
+        authorization:
+          mode: Webhook
+        cgroupDriver: systemd
+        clusterDNS:
+          - ${cluster_dns_service_ip}
+        clusterDomain: ${cluster_domain_suffix}
+        healthzPort: 0
+        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
+        staticPodPath: /etc/kubernetes/manifests
+        readOnlyPort: 0
+        resolvConf: /run/systemd/resolve/resolv.conf
+        volumePluginDir: /var/lib/kubelet/volumeplugins
+  - path: /etc/systemd/logind.conf.d/inhibitors.conf
+    contents:
+      inline: |
+        [Login]
+        InhibitDelayMaxSec=45s
   - path: /etc/sysctl.d/max-user-watches.conf
-    filesystem: root
     mode: 0644
     contents:
       inline: |
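The worker unit above fans a single comma-joined `node_labels` string out into repeated `--node-labels` flags via `%{ for }` template directives; the `~` markers trim the newlines the directive lines would otherwise leave behind. The same directive can be exercised in a plain heredoc; a sketch with made-up labels:

```tf
locals {
  node_labels = "pool=worker,tier=cheap"

  # Renders one "--node-labels=<label> \" line per entry. compact() drops the
  # empty string that split() yields when node_labels is "", so no stray flag
  # is emitted for unlabeled workers.
  flags = <<-EOT
    %{~ for label in compact(split(",", local.node_labels)) ~}
    --node-labels=${label} \
    %{~ endfor ~}
  EOT
}
```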
@@ -41,7 +41,7 @@ variable "worker_count" {
 variable "vm_type" {
   type        = string
   description = "Machine type for instances (see `az vm list-skus --location centralus`)"
-  default     = "Standard_DS1_v2"
+  default     = "Standard_D2as_v5"
 }
 
 variable "os_image" {

@@ -100,6 +100,17 @@ variable "node_taints" {
   default = []
 }
 
+variable "arch" {
+  type        = string
+  description = "Container architecture (amd64 or arm64)"
+  default     = "amd64"
+
+  validation {
+    condition     = var.arch == "amd64" || var.arch == "arm64"
+    error_message = "The arch must be amd64 or arm64."
+  }
+}
+
 # unofficial, undocumented, unsupported
 
 variable "cluster_domain_suffix" {
@@ -3,12 +3,10 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm  = "~> 2.8"
-    template = "~> 2.2"
+    azurerm = ">= 2.8, < 4.0"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }
   }
 }
@@ -1,6 +1,8 @@
 locals {
   # flatcar-stable -> Flatcar Linux Stable
-  channel = split("-", var.os_image)[1]
+  channel      = split("-", var.os_image)[1]
+  offer_suffix = var.arch == "arm64" ? "corevm" : "free"
+  urn          = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
 }
 
 # Workers scale set

@@ -14,7 +16,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   # instance name prefix for instances in the set
   computer_name_prefix   = "${var.name}-worker"
   single_placement_group = false
-  custom_data            = base64encode(data.ct_config.worker-ignition.rendered)
+  custom_data            = base64encode(data.ct_config.worker.rendered)
 
   # storage
   os_disk {

@@ -24,16 +26,19 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
 
   # Flatcar Container Linux
   source_image_reference {
-    publisher = "Kinvolk"
-    offer     = "flatcar-container-linux-free"
-    sku       = local.channel
+    publisher = "kinvolk"
+    offer     = "flatcar-container-linux-${local.offer_suffix}"
+    sku       = local.urn
     version   = "latest"
   }
 
-  plan {
-    name      = local.channel
-    publisher = "kinvolk"
-    product   = "flatcar-container-linux-free"
+  dynamic "plan" {
+    for_each = var.arch == "arm64" ? [] : [1]
+    content {
+      publisher = "kinvolk"
+      product   = "flatcar-container-linux-${local.offer_suffix}"
+      name      = local.urn
+    }
   }
 
   # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too

@@ -88,24 +93,16 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
   }
 }
 
-# Worker Ignition configs
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
-  strict   = true
-  snippets = var.snippets
-}
-
-# Worker Container Linux configs
-data "template_file" "worker-config" {
-  template = file("${path.module}/cl/worker.yaml")
-
-  vars = {
+# Flatcar Linux worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     kubeconfig             = indent(10, var.kubeconfig)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
     node_labels            = join(",", var.node_labels)
     node_taints            = join(",", var.node_taints)
-  }
+  })
+  strict   = true
+  snippets = var.snippets
 }
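The `dynamic "plan"` block in the hunks above emits the Azure Marketplace plan only for amd64, since the arm64 `corevm` offer carries no purchase plan; iterating `for_each` over an empty list suppresses the nested block entirely. A distilled fragment of the idiom (it belongs inside a resource such as the scale set above):

```tf
# for_each over a 1-element list emits exactly one nested "plan" block;
# over [] it emits none, matching arm64 images that have no Marketplace plan.
dynamic "plan" {
  for_each = var.arch == "arm64" ? [] : [1]
  content {
    publisher = "kinvolk"
    product   = "flatcar-container-linux-${local.offer_suffix}"
    name      = local.urn
  }
}
```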
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.26.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"
 
   cluster_name = var.cluster_name
   api_servers  = [var.k8s_domain_name]
@@ -9,15 +9,16 @@ systemd:
       [Unit]
       Description=etcd (System Container)
       Documentation=https://github.com/etcd-io/etcd
-      Wants=network-online.target network.target
+      Wants=network-online.target
       After=network-online.target
       [Service]
-      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
+      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
      Type=exec
       ExecStartPre=/bin/mkdir -p /var/lib/etcd
       ExecStartPre=-/usr/bin/podman rm etcd
       ExecStart=/usr/bin/podman run --name etcd \
         --env-file /etc/etcd/etcd.env \
         --log-driver k8s-file \
         --network host \
         --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
         --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \

@@ -52,7 +53,7 @@ systemd:
       Description=Kubelet (System Container)
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -61,15 +62,19 @@ systemd:
       ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
       ExecStartPre=-/usr/bin/podman rm kubelet
       ExecStart=/usr/bin/podman run --name kubelet \
+        --log-driver k8s-file \
         --privileged \
         --pid host \
         --network host \
         --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
         --volume /etc/kubernetes:/etc/kubernetes:ro,z \
         --volume /usr/lib/os-release:/etc/os-release:ro \
         --volume /etc/machine-id:/etc/machine-id:ro \
         --volume /lib/modules:/lib/modules:ro \
         --volume /run:/run \
         --volume /sys/fs/cgroup:/sys/fs/cgroup \
         --volume /etc/selinux:/etc/selinux \
         --volume /sys/fs/selinux:/sys/fs/selinux \
         --volume /var/lib/calico:/var/lib/calico:ro \
         --volume /var/lib/containerd:/var/lib/containerd \
         --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \

@@ -77,28 +82,13 @@ systemd:
         --volume /var/run/lock:/var/run/lock:z \
         --volume /opt/cni/bin:/opt/cni/bin:z \
         $${KUBELET_IMAGE} \
-        --anonymous-auth=false \
-        --authentication-token-webhook \
-        --authorization-mode=Webhook \
         --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-        --cgroup-driver=systemd \
-        --cgroups-per-qos=true \
-        --container-runtime=remote \
+        --config=/etc/kubernetes/kubelet.yaml \
         --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-        --enforce-node-allocatable=pods \
-        --client-ca-file=/etc/kubernetes/ca.crt \
-        --cluster_dns=${cluster_dns_service_ip} \
-        --cluster_domain=${cluster_domain_suffix} \
-        --healthz-port=0 \
         --hostname-override=${domain_name} \
         --kubeconfig=/var/lib/kubelet/kubeconfig \
         --node-labels=node.kubernetes.io/controller="true" \
-        --pod-manifest-path=/etc/kubernetes/manifests \
-        --read-only-port=0 \
-        --resolv-conf=/run/systemd/resolve/resolv.conf \
-        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
-        --rotate-certificates \
-        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+        --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
       ExecStop=-/usr/bin/podman stop kubelet
       Delegate=yes
       Restart=always

@@ -123,7 +113,7 @@ systemd:
       Type=oneshot
       RemainAfterExit=true
       WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
      ExecStartPre=-/usr/bin/podman rm bootstrap
       ExecStart=/usr/bin/podman run --name bootstrap \
         --network host \

@@ -146,6 +136,33 @@ storage:
     contents:
       inline:
        ${domain_name}
+  - path: /etc/kubernetes/kubelet.yaml
+    mode: 0644
+    contents:
+      inline: |
+        apiVersion: kubelet.config.k8s.io/v1beta1
+        kind: KubeletConfiguration
+        authentication:
+          anonymous:
+            enabled: false
+          webhook:
+            enabled: true
+          x509:
+            clientCAFile: /etc/kubernetes/ca.crt
+        authorization:
+          mode: Webhook
+        cgroupDriver: systemd
+        clusterDNS:
+          - ${cluster_dns_service_ip}
+        clusterDomain: ${cluster_domain_suffix}
+        healthzPort: 0
+        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
+        staticPodPath: /etc/kubernetes/manifests
+        readOnlyPort: 0
+        resolvConf: /run/systemd/resolve/resolv.conf
+        volumePluginDir: /var/lib/kubelet/volumeplugins
   - path: /opt/bootstrap/layout
     mode: 0544
     contents:

@@ -182,6 +199,11 @@ storage:
           echo "Retry applying manifests"
           sleep 5
         done
+  - path: /etc/systemd/logind.conf.d/inhibitors.conf
+    contents:
+      inline: |
+        [Login]
+        InhibitDelayMaxSec=45s
   - path: /etc/sysctl.d/max-user-watches.conf
     contents:
       inline: |

@@ -226,7 +248,6 @@ storage:
         ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
         ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
         ETCD_PEER_CLIENT_CERT_AUTH=true
-  - path: /etc/fedora-coreos/iptables-legacy.stamp
   - path: /etc/containerd/config.toml
     overwrite: true
     contents:
@@ -25,7 +25,7 @@ systemd:
       Description=Kubelet (System Container)
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -34,15 +34,19 @@ systemd:
       ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
       ExecStartPre=-/usr/bin/podman rm kubelet
       ExecStart=/usr/bin/podman run --name kubelet \
+        --log-driver k8s-file \
         --privileged \
         --pid host \
         --network host \
         --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
         --volume /etc/kubernetes:/etc/kubernetes:ro,z \
         --volume /usr/lib/os-release:/etc/os-release:ro \
         --volume /etc/machine-id:/etc/machine-id:ro \
         --volume /lib/modules:/lib/modules:ro \
         --volume /run:/run \
         --volume /sys/fs/cgroup:/sys/fs/cgroup \
         --volume /etc/selinux:/etc/selinux \
         --volume /sys/fs/selinux:/sys/fs/selinux \
         --volume /var/lib/calico:/var/lib/calico:ro \
         --volume /var/lib/containerd:/var/lib/containerd \
         --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \

@@ -50,33 +54,18 @@ systemd:
         --volume /var/run/lock:/var/run/lock:z \
         --volume /opt/cni/bin:/opt/cni/bin:z \
         $${KUBELET_IMAGE} \
-        --anonymous-auth=false \
-        --authentication-token-webhook \
-        --authorization-mode=Webhook \
         --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
-        --cgroup-driver=systemd \
-        --cgroups-per-qos=true \
-        --container-runtime=remote \
+        --config=/etc/kubernetes/kubelet.yaml \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
-        --enforce-node-allocatable=pods \
-        --client-ca-file=/etc/kubernetes/ca.crt \
-        --cluster_dns=${cluster_dns_service_ip} \
-        --cluster_domain=${cluster_domain_suffix} \
-        --healthz-port=0 \
         --hostname-override=${domain_name} \
         --kubeconfig=/var/lib/kubelet/kubeconfig \
-        --node-labels=node.kubernetes.io/node \
         %{~ for label in compact(split(",", node_labels)) ~}
         --node-labels=${label} \
         %{~ endfor ~}
         %{~ for taint in compact(split(",", node_taints)) ~}
         --register-with-taints=${taint} \
         %{~ endfor ~}
-        --pod-manifest-path=/etc/kubernetes/manifests \
-        --read-only-port=0 \
-        --resolv-conf=/run/systemd/resolve/resolv.conf \
-        --rotate-certificates \
-        --volume-plugin-dir=/var/lib/kubelet/volumeplugins
+        --node-labels=node.kubernetes.io/node
       ExecStop=-/usr/bin/podman stop kubelet
       Delegate=yes
       Restart=always

@@ -101,6 +90,38 @@ storage:
     contents:
       inline:
        ${domain_name}
+  - path: /etc/kubernetes/kubelet.yaml
+    mode: 0644
+    contents:
+      inline: |
+        apiVersion: kubelet.config.k8s.io/v1beta1
+        kind: KubeletConfiguration
+        authentication:
+          anonymous:
+            enabled: false
+          webhook:
+            enabled: true
+          x509:
+            clientCAFile: /etc/kubernetes/ca.crt
+        authorization:
+          mode: Webhook
+        cgroupDriver: systemd
+        clusterDNS:
+          - ${cluster_dns_service_ip}
+        clusterDomain: ${cluster_domain_suffix}
+        healthzPort: 0
+        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
+        staticPodPath: /etc/kubernetes/manifests
+        readOnlyPort: 0
+        resolvConf: /run/systemd/resolve/resolv.conf
+        volumePluginDir: /var/lib/kubelet/volumeplugins
+  - path: /etc/systemd/logind.conf.d/inhibitors.conf
+    contents:
+      inline: |
+        [Login]
+        InhibitDelayMaxSec=45s
   - path: /etc/sysctl.d/max-user-watches.conf
     contents:
       inline: |

@@ -124,7 +145,6 @@ storage:
         DefaultCPUAccounting=yes
         DefaultMemoryAccounting=yes
         DefaultBlockIOAccounting=yes
-  - path: /etc/fedora-coreos/iptables-legacy.stamp
   - path: /etc/containerd/config.toml
     overwrite: true
     contents:
@ -1,34 +1,26 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = [
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
"--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]

remote_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = [
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
]

cached_args = [
"ip=dhcp",
"rd.neednet=1",
"initrd=main",
"coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.image_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"console=tty0",
"console=ttyS0",
]

kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
@ -46,29 +38,22 @@ resource "matchbox_profile" "controllers" {
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = length(var.controllers)

content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
count = length(var.controllers)

template = file("${path.module}/fcc/controller.yaml")
vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
})
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Fedora CoreOS worker profile
@ -80,28 +65,20 @@ resource "matchbox_profile" "workers" {
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
# Fedora CoreOS workers
data "ct_config" "workers" {
count = length(var.workers)

content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
count = length(var.workers)

template = file("${path.module}/fcc/worker.yaml")
vars = {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
}
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.23.5 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -63,7 +64,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -88,26 +89,13 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -126,7 +114,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -139,21 +127,44 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@ -176,7 +187,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@ -190,14 +200,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: installer.service
@ -25,12 +26,11 @@ systemd:
storage:
files:
- path: /opt/installer
filesystem: root
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
flatcar-install \
-d ${install_disk} \
-C ${os_channel} \
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@ -35,7 +36,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -63,17 +64,9 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
@ -83,11 +76,7 @@ systemd:
%{~ for taint in compact(split(",", node_taints)) ~}
--register-with-taints=${taint} \
%{~ endfor ~}
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -99,17 +88,46 @@ systemd:
storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
${domain_name}
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@ -18,12 +18,10 @@ resource "matchbox_profile" "flatcar-install" {
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
var.kernel_args,
])

container_linux_config = data.template_file.install-configs.*.rendered[count.index]
raw_ignition = data.ct_config.install.*.rendered[count.index]
}

// Flatcar Linux Install profile (from matchbox /assets cache)
@ -42,101 +40,84 @@ resource "matchbox_profile" "cached-flatcar-install" {
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
var.kernel_args,
])

container_linux_config = data.template_file.cached-install-configs.*.rendered[count.index]
raw_ignition = data.ct_config.cached-install.*.rendered[count.index]
}

data "template_file" "install-configs" {
# Flatcar Linux install
data "ct_config" "install" {
count = length(var.controllers) + length(var.workers)

template = file("${path.module}/cl/install.yaml")

vars = {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# only cached profile adds -b baseurl
baseurl_flag = ""
}
})
strict = true
}

data "template_file" "cached-install-configs" {
# Flatcar Linux cached install
data "ct_config" "cached-install" {
count = length(var.controllers) + length(var.workers)

template = file("${path.module}/cl/install.yaml")

vars = {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
}
})
strict = true
}

// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = length(var.controllers)
name = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.controllers.*.rendered[count.index]
}

data "ct_config" "controller-ignitions" {
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = length(var.controllers)

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
domain_name = var.controllers.*.domain[count.index]
etcd_name = var.controllers.*.name[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
})
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

data "ct_config" "worker-ignitions" {
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {
# Flatcar Linux workers
data "ct_config" "workers" {
count = length(var.workers)

template = file("${path.module}/cl/worker.yaml")

vars = {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
}
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.23.5 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -9,15 +9,16 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--log-driver k8s-file \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
@ -54,7 +55,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -64,15 +65,19 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/machine-id:/etc/machine-id:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \
--volume /etc/selinux:/etc/selinux \
--volume /sys/fs/selinux:/sys/fs/selinux \
--volume /var/lib/calico:/var/lib/calico:ro \
--volume /var/lib/containerd:/var/lib/containerd \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@ -80,28 +85,13 @@ systemd:
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@ -133,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.23.5
quay.io/poseidon/kubelet:v1.26.1
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -143,6 +133,33 @@ storage:
- path: /etc/kubernetes
- path: /opt/bootstrap
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
mode: 0544
contents:
@ -179,6 +196,11 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -223,7 +245,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -28,7 +28,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -38,15 +38,19 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--log-driver k8s-file \
--privileged \
--pid host \
--network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/machine-id:/etc/machine-id:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \
--volume /etc/selinux:/etc/selinux \
--volume /sys/fs/selinux:/sys/fs/selinux \
--volume /var/lib/calico:/var/lib/calico:ro \
--volume /var/lib/containerd:/var/lib/containerd \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
@ -58,23 +62,11 @@ systemd:
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
@ -90,23 +82,42 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -130,7 +141,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml
overwrite: true
contents:
@ -41,11 +41,11 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
user_data = data.ct_config.controllers.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints

tags = [
@ -62,39 +62,20 @@ resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
# Fedora CoreOS controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/fcc/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}

digitalocean = {
source = "digitalocean/digitalocean"
version = ">= 2.12, < 3.0"
@ -37,11 +37,11 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.worker-ignition.rendered
user_data = data.ct_config.worker.rendered
ssh_keys = var.ssh_fingerprints

tags = [
@ -58,20 +58,12 @@ resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
# Fedora CoreOS worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
})
strict = true
snippets = var.worker_snippets
}

# Worker Fedora CoreOS config
data "template_file" "worker-config" {
template = file("${path.module}/fcc/worker.yaml")

vars = {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
}

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.23.5 (upstream)
* Kubernetes v1.26.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=4621c6b256129d1ede32ae1f72bcc1473664cb33"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: etcd-member.service
@ -10,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.2
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.7
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@ -65,7 +66,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -91,26 +92,13 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -129,7 +117,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -142,15 +130,39 @@ systemd:
storage:
directories:
- path: /var/lib/etcd
filesystem: root
mode: 0700
overwrite: true
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544
contents:
inline: |
@ -173,7 +185,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
@ -187,14 +198,17 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env
filesystem: root
mode: 0644
contents:
inline: |
@ -1,4 +1,5 @@
---
variant: flatcar
version: 1.0.0
systemd:
units:
- name: docker.service
@ -37,7 +38,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.26.1
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -66,25 +67,12 @@ systemd:
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \
$${KUBELET_IMAGE} \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--config=/etc/kubernetes/kubelet.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--node-labels=node.kubernetes.io/node
ExecStart=docker logs -f kubelet
ExecStop=docker stop kubelet
ExecStopPost=docker rm kubelet
@ -92,27 +80,44 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/kubernetes/kubelet.yaml
mode: 0644
contents:
inline: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: /etc/kubernetes/ca.crt
authorization:
mode: Webhook
cgroupDriver: systemd
clusterDNS:
- ${cluster_dns_service_ip}
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
volumePluginDir: /var/lib/kubelet/volumeplugins
- path: /etc/systemd/logind.conf.d/inhibitors.conf
contents:
inline: |
[Login]
InhibitDelayMaxSec=45s
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
@ -46,11 +46,11 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
user_data = data.ct_config.controllers.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints

tags = [
@ -67,39 +67,20 @@ resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
# Flatcar Linux controllers
data "ct_config" "controllers" {
count = var.controller_count

template = file("${path.module}/cl/controller.yaml")

vars = {
content = templatefile("${path.module}/butane/controller.yaml", {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
})
strict = true
snippets = var.controller_snippets
}

data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

@ -3,14 +3,11 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.2"
null = ">= 2.1"

null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.11"
}

digitalocean = {
source = "digitalocean/digitalocean"
version = ">= 2.12, < 3.0"
@ -35,11 +35,11 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
vpc_uuid = digitalocean_vpc.network.id
vpc_uuid = digitalocean_vpc.network.id
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image

user_data = data.ct_config.worker-ignition.rendered
user_data = data.ct_config.worker.rendered
ssh_keys = var.ssh_fingerprints

tags = [
@ -56,20 +56,12 @@ resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
# Flatcar Linux worker
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
})
strict = true
snippets = var.worker_snippets
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")

vars = {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
}

1
docs/CNAME
Normal file
@ -0,0 +1 @@
typhoon.psdn.io
@ -1,19 +1,21 @@
# ARM64

Typhoon has experimental support for ARM64 on AWS, with Fedora CoreOS or Flatcar Linux. Clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.

!!! note
Currently, CNI networking must be set to `flannel` or `cilium`.
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:

* AWS with Fedora CoreOS or Flatcar Linux
* Azure with Flatcar Linux

## Cluster

Create a cluster with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` container images.
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

=== "Fedora CoreOS Cluster (arm64)"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@ -38,7 +40,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@ -64,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.23.5 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.23.5 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.23.5 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-21-119 Ready <none> 77s v1.26.1 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.26.1 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.26.1 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```

## Hybrid
@ -77,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@ -100,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.26.1"

# AWS
cluster_name = "gravitas"
@ -123,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.1"

# AWS
vpc_id = module.gravitas.vpc_id
@ -147,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.5"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.1"

# AWS
vpc_id = module.gravitas.vpc_id
@ -172,9 +174,34 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.23.5 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.23.5 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.23.5 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.23.5 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-1-73 Ready <none> 111m v1.26.1 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.26.1 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.26.1 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.26.1 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```

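In mixed-arch clusters, workloads whose images are not multi-arch can be pinned to a matching architecture with the standard `kubernetes.io/arch` node label. A minimal sketch (the Deployment name and image are illustrative, not part of Typhoon):

```yaml
# hypothetical amd64-only workload pinned to x86_64 nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64  # schedule only onto amd64 (x86_64) nodes
      containers:
        - name: app
          image: example.com/legacy-app:1.0  # assumed amd64-only image
```
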
## Azure

Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.26.1"

# Azure
cluster_name = "ramius"
region = "centralus"
dns_zone = "azure.example.com"
dns_zone_group = "example-group"

# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

# optional
arch = "arm64"
controller_type = "Standard_D2pls_v5"
worker_type = "Standard_D2pls_v5"
worker_count = 2
host_cidr = "10.0.0.0/20"
}
```

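After `terraform apply`, node architectures can be spot-checked via the same `kubernetes.io/arch` label; a usage sketch (`-L` prints the label as an extra column):

```
$ kubectl get nodes -L kubernetes.io/arch
```
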
@@ -12,9 +12,11 @@ Clusters are kept to a minimal Kubernetes control plane by offering components l

## Hosts

-Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Fedora CoreOS uses a [Butane Config](https://coreos.github.io/butane/specs/) and Flatcar Linux uses a [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md) (CLC). These define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, and users.
-
-Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept Butane or Container Linux Configs to validate and additively merge into instance declarations. This allows advanced host customization and experimentation.
+### Background
+
+Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Human-friendly [Butane Configs](https://coreos.github.io/butane/specs/) define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, users, and more before being converted to Ignition.
+
+Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept custom Butane Configs that are merged with instance declarations. This allows advanced host customization and experimentation.

!!! note
    Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only.
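
As a side note, a Butane Config can be transpiled and validated locally before it is ever passed to Terraform, assuming the `butane` CLI is installed (a sketch, not a step from the original docs):

```sh
# Convert a Butane snippet to Ignition and pretty-print it;
# --strict turns warnings (e.g. unrecognized keys) into errors
butane --pretty --strict custom-files.yaml > custom-files.ign
```
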
@@ -25,127 +27,104 @@ Controller and worker instances form a minimal and secure Kubernetes cluster on

!!! danger
    Edits to snippets for controller instances can (correctly) cause Terraform to observe a diff (if not otherwise suppressed) and propose destroying and recreating controller(s). Recognize that this is destructive since controllers run etcd and are stateful. See [blue/green](/topics/maintenance/#upgrades) clusters.

-### Fedora CoreOS
-
-!!! note
-    Fedora CoreOS snippets require `terraform-provider-ct` v0.5+
+### Usage

Define a Butane Config ([docs](https://coreos.github.io/butane/specs/), [config](https://github.com/coreos/butane/blob/main/docs/config-fcos-v1_4.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.

-For example, ensure an `/opt/hello` file is created with permissions 0644.
+For example, ensure an `/opt/hello` file is created with permissions 0644 before boot.

-```yaml
-# custom-files
-variant: fcos
-version: 1.4.0
-storage:
-  files:
-    - path: /opt/hello
-      contents:
-        inline: |
-          Hello World
-      mode: 0644
-```
-
-Reference the FCC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster) or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
-
-```tf
-module "nemo" {
-  ...
-
-  controller_count = 1
-  worker_count     = 2
-  controller_snippets = [
-    file("./custom-files"),
-    file("./custom-units"),
-  ]
-  worker_snippets = [
-    file("./custom-files"),
-    file("./custom-units"),
-  ]
-  ...
-}
-```
-
-On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different FCCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.
-
-```tf
-module "mercury" {
-  ...
-  snippets = {
-    "node2" = [file("./units/hello.yaml")]
-    "node3" = [
-      file("./units/world.yaml"),
-      file("./units/hello.yaml"),
-    ]
-  }
-  ...
-}
-```
+=== "Fedora CoreOS"
+
+    ```yaml
+    # custom-files.yaml
+    variant: fcos
+    version: 1.4.0
+    storage:
+      files:
+        - path: /opt/hello
+          contents:
+            inline: |
+              Hello World
+          mode: 0644
+    ```
+
+=== "Flatcar Linux"
+
+    ```yaml
+    # custom-files.yaml
+    variant: flatcar
+    version: 1.0.0
+    storage:
+      files:
+        - path: /opt/hello
+          contents:
+            inline: |
+              Hello World
+          mode: 0644
+    ```
+
+Or ensure a systemd unit `hello.service` is created.
-### Flatcar Linux
-
-Define a Container Linux Config (CLC) ([config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/configuration.md), [examples](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.
-
-For example, ensure an `/opt/hello` file is created with permissions 0644.
-
-```yaml
-# custom-files
-storage:
-  files:
-    - path: /opt/hello
-      filesystem: root
-      contents:
-        inline: |
-          Hello World
-      mode: 0644
-```
-
-Or ensure a systemd unit `hello.service` is created and a dropin `50-etcd-cluster.conf` is added for `etcd-member.service`.
-
-```yaml
-# custom-units
-systemd:
-  units:
-    - name: hello.service
-      enable: true
-      contents: |
-        [Unit]
-        Description=Hello World
-        [Service]
-        Type=oneshot
-        ExecStart=/usr/bin/echo Hello World!
-        [Install]
-        WantedBy=multi-user.target
-    - name: etcd-member.service
-      enable: true
-      dropins:
-        - name: 50-etcd-cluster.conf
-          contents: |
-            Environment="ETCD_LOG_PACKAGE_LEVELS=etcdserver=WARNING,security=DEBUG"
-```
+=== "Fedora CoreOS"
+
+    ```yaml
+    # custom-units.yaml
+    variant: fcos
+    version: 1.4.0
+    systemd:
+      units:
+        - name: hello.service
+          enabled: true
+          contents: |
+            [Unit]
+            Description=Hello World
+            [Service]
+            Type=oneshot
+            ExecStart=/usr/bin/echo Hello World!
+            [Install]
+            WantedBy=multi-user.target
+    ```
+
+=== "Flatcar Linux"
+
+    ```yaml
+    # custom-units.yaml
+    variant: flatcar
+    version: 1.0.0
+    systemd:
+      units:
+        - name: hello.service
+          enabled: true
+          contents: |
+            [Unit]
+            Description=Hello World
+            [Service]
+            Type=oneshot
+            ExecStart=/usr/bin/echo Hello World!
+            [Install]
+            WantedBy=multi-user.target
+    ```

-Reference the CLC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/flatcar-linux/aws/#cluster), [Azure](/flatcar-linux/azure/#cluster), [DigitalOcean](/flatcar-linux/digital-ocean/#cluster), or [Google Cloud](/flatcar-linux/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
+Reference the Butane contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster), [Azure](/fedora-coreos/azure/#cluster), [DigitalOcean](/fedora-coreos/digital-ocean/#cluster), or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.

```tf
module "nemo" {
  ...

  controller_count = 1
  worker_count     = 2
  controller_snippets = [
-    file("./custom-files"),
-    file("./custom-units"),
+    file("./custom-files.yaml"),
+    file("./custom-units.yaml"),
  ]
  worker_snippets = [
-    file("./custom-files"),
-    file("./custom-units"),
+    file("./custom-files.yaml"),
+    file("./custom-units.yaml"),
  ]
  ...
}
```

-On [Bare-Metal](/flatcar-linux/bare-metal/#cluster), different CLCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.
+On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different Butane configs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.

```tf
module "mercury" {
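
Given the danger note above, one way to double-check a snippet edit before applying is to scan the Terraform plan for replacements (a hedged example; the exact plan wording can vary by Terraform version):

```sh
# Surface any resources Terraform intends to destroy or replace
terraform plan | grep -E 'must be replaced|to destroy'
```
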
@@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"
@@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"
@@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"
@@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  cluster_name = "yavin"
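
After a pool joins, the labels and taints set above can be spot-checked with kubectl (a convenience aside, not part of the original tutorial):

```sh
# Show node labels
kubectl get nodes --show-labels

# Show node taints, one node per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```
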
@@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # AWS
  vpc_id = module.tempest.vpc_id
@@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # AWS
  vpc_id = module.tempest.vpc_id
@@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Azure
  region = module.ramius.region
@@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # Azure
  region = module.ramius.region
@@ -189,7 +189,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| worker_count | Number of instances | 1 | 3 |
-| vm_type | Machine type for instances | "Standard_DS1_v2" | See below |
+| vm_type | Machine type for instances | "Standard_D2as_v5" | See below |
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | "Regular" | "Spot" |
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
@@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  region = "europe-west2"
@@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.26.1"

  # Google Cloud
  region = "europe-west2"
@@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME                                           STATUS   AGE   VERSION
-yavin-controller-0.c.example-com.internal      Ready    6m    v1.23.5
-yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.23.5
-yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.23.5
-yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.23.5
-yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.23.5
+yavin-controller-0.c.example-com.internal      Ready    6m    v1.26.1
+yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.26.1
+yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.26.1
+yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.26.1
+yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.26.1
```

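To watch the managed instance group's nodes register in real time, a simple watch works (an aside, not from the original docs):

```sh
# Blocks and streams node additions/status changes until interrupted
kubectl get nodes -o wide --watch
```
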
### Variables

@@ -53,16 +53,16 @@ Add firewall rules to the worker security group.
resource "azurerm_network_security_rule" "some-app" {
  resource_group_name = "${module.ramius.resource_group_name}"

-  name                        = "some-app"
-  network_security_group_name = module.ramius.worker_security_group_name
-  priority                    = "3001"
-  access                      = "Allow"
-  direction                   = "Inbound"
-  protocol                    = "Tcp"
-  source_port_range           = "*"
-  destination_port_range      = "30333"
-  source_address_prefix       = "*"
-  destination_address_prefix  = module.ramius.worker_address_prefix
+  name                        = "some-app"
+  network_security_group_name = module.ramius.worker_security_group_name
+  priority                    = "3001"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Tcp"
+  source_port_range           = "*"
+  destination_port_range      = "30333"
+  source_address_prefix       = "*"
+  destination_address_prefixes = module.ramius.worker_address_prefixes
}
```

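To spot-check the opened port, one could probe the NodePort from an allowed source once something listens on it (a sketch; `10.0.0.4` stands in for a hypothetical worker address):

```sh
# Expect the app's response, or a refused connection if nothing listens yet
curl -sv http://10.0.0.4:30333
```
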
@@ -9,8 +9,8 @@ Typhoon supports [Fedora CoreOS](https://getfedora.org/coreos/) and [Flatcar Lin

Together, they diversify Typhoon to support a range of container technologies.

-* Fedora CoreOS: rpm-ostree, podman, moby
-* Flatcar Linux: Gentoo core, rkt-fly, docker
+* Fedora CoreOS: rpm-ostree, podman, containerd
+* Flatcar Linux: Gentoo core, docker, containerd

## Host Properties

@@ -19,7 +19,7 @@ Together, they diversify Typhoon to support a range of container technologies.
| Kernel | ~5.10.x | ~5.16.x |
| systemd | 249 | 249 |
| Username | core | core |
-| Ignition system | Ignition v2.x spec | Ignition v3.x spec |
+| Ignition system | Ignition v3.x spec | Ignition v3.x spec |
| storage driver | overlay2 (extfs) | overlay2 (xfs) |
| logging driver | json-file | journald |
| cgroup driver | systemd | systemd |
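
Since both OSes now ship containerd and share the `core` user (see the table above), a property like the runtime version can be verified the same way on either, for example over SSH (an illustrative aside; the hostname is borrowed from the bare-metal example below):

```sh
# Query the containerd version on a node
ssh core@node1.example.com 'containerd --version'
```
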
@@ -1,6 +1,6 @@
# AWS

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on AWS with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.26.1 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -51,11 +51,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
-      version = "0.10.0"
+      version = "0.11.0"
    }
    aws = {
      source  = "hashicorp/aws"
-      version = "4.5.0"
+      version = "4.31.0"
    }
  }
}
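
After editing pinned provider versions like the above, Terraform must be re-initialized to fetch the newer releases (a general Terraform note, not a step added by this change):

```sh
# Upgrade already-installed providers within the new version constraints
terraform init -upgrade
```
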
@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

```tf
module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.26.1"

  # AWS
  cluster_name = "tempest"
@@ -111,7 +111,7 @@ Plan the resources to be created.

```sh
$ terraform plan
-Plan: 81 to add, 0 to change, 0 to destroy.
+Plan: 109 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.
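
As an optional aside, the reviewed plan can be saved and applied verbatim so nothing changes between review and apply:

```sh
# Save the plan, then apply exactly that plan file
terraform plan -out=tempest.tfplan
terraform apply tempest.tfplan
```
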
@@ -123,7 +123,7 @@ module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)

-Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
+Apply complete! Resources: 109 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.
@@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
-ip-10-0-3-155   Ready    <none>   10m   v1.23.5
-ip-10-0-26-65   Ready    <none>   10m   v1.23.5
-ip-10-0-41-21   Ready    <none>   10m   v1.23.5
+ip-10-0-3-155   Ready    <none>   10m   v1.26.1
+ip-10-0-26-65   Ready    <none>   10m   v1.26.1
+ip-10-0-41-21   Ready    <none>   10m   v1.26.1
```

List the pods.

@@ -1,6 +1,6 @@
# Azure

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on Azure with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.26.1 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -48,11 +48,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
-      version = "0.10.0"
+      version = "0.11.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
-      version = "2.99.0"
+      version = "3.23.0"
    }
  }
}
@@ -64,18 +64,18 @@ Additional configuration options are described in the `azurerm` provider [docs](

Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure allows custom images to be uploaded to a storage account bucket and imported.

-[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).
+[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image, decompress it, and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).

```
-xz -d fedora-coreos-31.20200323.3.2-azure.x86_64.vhd.xz
+xz -d fedora-coreos-36.20220716.3.1-azure.x86_64.vhd.xz
```

Create an Azure disk (note disk ID) and create an Azure image from it (note image ID).

```
-az disk create --name fedora-coreos-31.20200323.3.2 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-31.20200323.3.2-azure.x86_64.vhd
+az disk create --name fedora-coreos-36.20220716.3.1 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-36.20220716.3.1-azure.x86_64.vhd

-az image create --name fedora-coreos-31.20200323.3.2 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-31.20200323.3.2
+az image create --name fedora-coreos-36.20220716.3.1 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-36.20220716.3.1
```

Set the [os_image](#variables) in the next step.
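
As a possible alternative to the slow portal upload, the Azure CLI can push the VHD as a page blob (a sketch; `ACCOUNT` and `CONTAINER` are placeholders for an existing storage account and container):

```sh
# VHDs backing managed disks must be uploaded as page blobs
az storage blob upload \
  --account-name ACCOUNT \
  --container-name CONTAINER \
  --name fedora-coreos-36.20220716.3.1-azure.x86_64.vhd \
  --file fedora-coreos-36.20220716.3.1-azure.x86_64.vhd \
  --type page
```
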
@@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.26.1"

  # Azure
  cluster_name = "ramius"
@@ -95,7 +95,7 @@ module "ramius" {
  dns_zone_group = "example-group"

  # configuration
-  os_image           = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
+  os_image           = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # optional
@@ -161,9 +161,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
-ramius-controller-0    Ready    <none>   24m   v1.23.5
-ramius-worker-000001   Ready    <none>   25m   v1.23.5
-ramius-worker-000002   Ready    <none>   24m   v1.23.5
+ramius-controller-0    Ready    <none>   24m   v1.26.1
+ramius-worker-000001   Ready    <none>   25m   v1.26.1
+ramius-worker-000002   Ready    <none>   24m   v1.26.1
```

List the pods.

@@ -240,7 +240,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
-| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
+| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| disk_size | Size of the disk in GB | 30 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |

@@ -1,6 +1,6 @@
# Bare-Metal

-In this tutorial, we'll network boot and provision a Kubernetes v1.23.5 cluster on bare-metal with Fedora CoreOS.
+In this tutorial, we'll network boot and provision a Kubernetes v1.26.1 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@@ -138,11 +138,11 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
-      version = "0.10.0"
+      version = "0.11.0"
    }
    matchbox = {
      source  = "poseidon/matchbox"
-      version = "0.5.0"
+      version = "0.5.2"
    }
  }
}
@@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete

```tf
module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.26.1"

  # bare-metal
  cluster_name = "mercury"
@@ -283,9 +283,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
-node1.example.com   Ready    <none>   10m   v1.23.5
-node2.example.com   Ready    <none>   10m   v1.23.5
-node3.example.com   Ready    <none>   10m   v1.23.5
+node1.example.com   Ready    <none>   10m   v1.26.1
+node2.example.com   Ready    <none>   10m   v1.26.1
+node3.example.com   Ready    <none>   10m   v1.26.1
```

List the pods.

Some files were not shown because too many files have changed in this diff.