Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 16:41:34 +02:00)
Compare commits
166 Commits
`.github/PULL_REQUEST_TEMPLATE.md` (vendored; 10 lines deleted)

```diff
@@ -1,10 +0,0 @@
-High level description of the change.
-
-* Specific change
-* Specific change
-
-## Testing
-
-Describe your work to validate the change works.
-
-rel: issue number (if applicable)
```

`.github/dependabot.yaml` (vendored; 3 changed lines)

```diff
@@ -4,6 +4,3 @@ updates:
     directory: "/"
     schedule:
       interval: weekly
-    pull-request-branch-name:
-      separator: "-"
-    open-pull-requests-limit: 3
```

`.github/release.yaml` (vendored; new file, 12 lines)

```diff
@@ -0,0 +1,12 @@
+changelog:
+  categories:
+    - title: Contributions
+      labels:
+        - '*'
+      exclude:
+        labels:
+          - dependencies
+          - no-release-note
+    - title: Dependencies
+      labels:
+        - dependencies
```

`.github/workflows/publish.yaml` (vendored; new file, 12 lines)

```diff
@@ -0,0 +1,12 @@
+name: publish
+on:
+  push:
+    branches:
+      - release-docs
+jobs:
+  mkdocs:
+    name: mkdocs
+    uses: poseidon/matchbox/.github/workflows/mkdocs-pages.yaml@main
+    # Add content write for GitHub Pages
+    permissions:
+      contents: write
```

`.gitignore` (vendored; 1 line added)

```diff
@@ -1 +1,2 @@
 site/
+venv/
```

`CHANGES.md` (217 changed lines)

@@ -4,9 +4,217 @@ Notable changes between versions.

## Latest

## v1.30.0

* Kubernetes [v1.30.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1300)
* Update etcd from v3.5.12 to [v3.5.13](https://github.com/etcd-io/etcd/releases/tag/v3.5.13)
* Update Cilium from v1.15.2 to [v1.15.3](https://github.com/cilium/cilium/releases/tag/v1.15.3)
* Update Calico from v3.27.2 to [v3.27.3](https://github.com/projectcalico/calico/releases/tag/v3.27.3)

## v1.29.3

* Kubernetes [v1.29.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1293)
* Update Cilium from v1.15.1 to [v1.15.2](https://github.com/cilium/cilium/releases/tag/v1.15.2)
* Update flannel from v0.24.2 to [v0.24.4](https://github.com/flannel-io/flannel/releases/tag/v0.24.4)

## v1.29.2

* Kubernetes [v1.29.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1292)
* Update etcd from v3.5.10 to [v3.5.12](https://github.com/etcd-io/etcd/releases/tag/v3.5.12)
* Update Cilium from v1.14.3 to [v1.15.1](https://github.com/cilium/cilium/releases/tag/v1.15.1)
* Update Calico from v3.26.3 to [v3.27.2](https://github.com/projectcalico/calico/releases/tag/v3.27.2)
  * Fix upstream incompatibility with Fedora CoreOS ([calico#8372](https://github.com/projectcalico/calico/issues/8372))
* Update flannel from v0.22.2 to [v0.24.2](https://github.com/flannel-io/flannel/releases/tag/v0.24.2)
* Add an `install_container_networking` variable (default `true`); see the sketch below
  * When `true`, the chosen container `networking` provider is installed during cluster bootstrap
  * Set `false` to self-manage the container networking provider. This allows flannel, Calico, or Cilium to be managed via Terraform (like any other Kubernetes resources). Nodes will be NotReady until you apply the self-managed container networking provider. This may become the default in future.
  * Continue to set `networking` to one of the three supported container networking providers. Most require custom firewall / security policies to be present across nodes, so they have some infra tie-ins.
## v1.29.1

* Kubernetes [v1.29.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1291)

### AWS

* Continue to support AWS IMDSv1 ([#1412](https://github.com/poseidon/typhoon/pull/1412))

### Known Issues

* Calico and Fedora CoreOS cannot be used together currently ([calico#8372](https://github.com/projectcalico/calico/issues/8372))

## v1.29.0

* Kubernetes [v1.29.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1290)

### Known Issues

* Calico and Fedora CoreOS cannot be used together currently ([calico#8372](https://github.com/projectcalico/calico/issues/8372))

## v1.28.4

* Kubernetes [v1.28.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1284)

## v1.28.3

* Kubernetes [v1.28.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1283)
* Update etcd from v3.5.9 to [v3.5.10](https://github.com/etcd-io/etcd/releases/tag/v3.5.10)
* Update Cilium from v1.14.2 to [v1.14.3](https://github.com/cilium/cilium/releases/tag/v1.14.3)
  * Workaround problems in Cilium v1.14's partial `kube-proxy` implementation ([#365](https://github.com/poseidon/terraform-render-bootstrap/pull/365))
* Update Calico from v3.26.1 to [v3.26.3](https://github.com/projectcalico/calico/releases/tag/v3.26.3)

### Google Cloud

* Allow upgrading Google Cloud Terraform provider to v5.x

## v1.28.2

* Kubernetes [v1.28.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1282)
* Update Cilium from v1.14.1 to [v1.14.2](https://github.com/cilium/cilium/releases/tag/v1.14.2)

### Azure

* Add optional `azure_authorized_key` variable (see the sketch below)
  * Azure obtusely inspects public keys, requires RSA keys, and forbids more secure key formats (e.g. ed25519)
  * Allow passing a dummy RSA key via `azure_authorized_key` (delete the private key) to satisfy Azure validations; the usual `ssh_authorized_key` variable can then use newer formats (e.g. ed25519)
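A minimal sketch, assuming the Azure Fedora CoreOS module path from the docs (the `ramius` name and key strings are illustrative; generate the throwaway RSA key once, pass only its public half, and delete the private key):

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.28.2"

  # ... required cluster variables elided ...

  # real key, used for SSH (a newer format)
  ssh_authorized_key = "ssh-ed25519 AAAA...z user@example.com"

  # throwaway RSA public key that exists only to satisfy Azure validation;
  # its private key was deleted right after generation
  azure_authorized_key = "ssh-rsa AAAA...z user@example.com"
}
```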
## v1.28.1

* Kubernetes [v1.28.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1281)

## v1.28.0

* Kubernetes [v1.28.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#v1280)
* Update Cilium from v1.13.4 to [v1.14.1](https://github.com/cilium/cilium/releases/tag/v1.14.1)
* Update flannel from v0.22.0 to [v0.22.2](https://github.com/flannel-io/flannel/releases/tag/v0.22.2)

## v1.27.4

* Kubernetes [v1.27.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1274)

## v1.27.3

* Kubernetes [v1.27.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1273)
* Update etcd from v3.5.7 to [v3.5.9](https://github.com/etcd-io/etcd/releases/tag/v3.5.9)
* Update Cilium from v1.13.2 to [v1.13.4](https://github.com/cilium/cilium/releases/tag/v1.13.4)
* Update Calico from v3.25.1 to [v3.26.1](https://github.com/projectcalico/calico/releases/tag/v3.26.1)
* Update flannel from v0.21.2 to [v0.22.0](https://github.com/flannel-io/flannel/releases/tag/v0.22.0)

### AWS

* Allow upgrading AWS Terraform provider to v5.x ([#1353](https://github.com/poseidon/typhoon/pull/1353))

### Azure

* Enable boot diagnostics for controller and worker VMs ([#1351](https://github.com/poseidon/typhoon/pull/1351))

## v1.27.2

* Kubernetes [v1.27.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1272)

### Fedora CoreOS

* Update Butane Config version from v1.4.0 to v1.5.0 (see the snippet sketch below)
  * Require any custom Butane [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) to be updated to v1.5.0
  * Require Fedora CoreOS `37.20230303.3.0` or newer (with Ignition v2.15)
  * Require poseidon/ct v0.13+ (**action required**)
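For instance, a Butane snippet pinned to the new version can be passed through the module's snippets variables — a sketch, with the module source and file contents purely illustrative:

```tf
locals {
  # Butane v1.5.0 header is now required by poseidon/ct v0.13+
  motd_snippet = <<-EOT
    variant: fcos
    version: 1.5.0
    storage:
      files:
        - path: /etc/motd
          mode: 0644
          overwrite: true
          contents:
            inline: Welcome to a Typhoon node.
  EOT
}

module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.27.2"

  # ... required cluster variables elided ...

  controller_snippets = [local.motd_snippet]
  worker_snippets     = [local.motd_snippet]
}
```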
## v1.27.1

* Kubernetes [v1.27.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#v1271)
* Update etcd from v3.5.7 to [v3.5.8](https://github.com/etcd-io/etcd/releases/tag/v3.5.8)
* Update Cilium from v1.13.1 to [v1.13.2](https://github.com/cilium/cilium/releases/tag/v1.13.2)
* Update Calico from v3.25.0 to [v3.25.1](https://github.com/projectcalico/calico/releases/tag/v3.25.1)

## v1.26.3

* Kubernetes [v1.26.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1263)
* Update Cilium from v1.12.6 to [v1.13.1](https://github.com/cilium/cilium/releases/tag/v1.13.1)

### Bare-Metal

* Add `oem_type` variable for Flatcar Linux ([#1302](https://github.com/poseidon/typhoon/pull/1302))

## v1.26.2

* Kubernetes [v1.26.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1262)
* Update Cilium from v1.12.5 to [v1.12.6](https://github.com/cilium/cilium/releases/tag/v1.12.6)
* Update flannel from v0.20.2 to [v0.21.2](https://github.com/flannel-io/flannel/releases/tag/v0.21.2)

### Bare-Metal

* Add a `worker` module to allow customizing individual worker nodes ([#1295](https://github.com/poseidon/typhoon/pull/1295))

### Known Issues

* Fedora CoreOS [issue](https://github.com/coreos/fedora-coreos-tracker/issues/1423) fix is progressing through channels

## v1.26.1

* Kubernetes [v1.26.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1261)
* Update etcd from v3.5.6 to [v3.5.7](https://github.com/etcd-io/etcd/releases/tag/v3.5.7)
* Update Cilium from v1.12.4 to [v1.12.5](https://github.com/cilium/cilium/releases/tag/v1.12.5)
* Update Calico from v3.24.5 to [v3.25.0](https://github.com/projectcalico/calico/releases/tag/v3.25.0)
* Update CoreDNS from v1.9.3 to [v1.9.4](https://github.com/poseidon/terraform-render-bootstrap/pull/341)

## v1.26.0

* Kubernetes [v1.26.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#v1260)
* Update etcd from v3.5.5 to [v3.5.6](https://github.com/etcd-io/etcd/releases/tag/v3.5.6)
* Update Cilium from v1.12.3 to [v1.12.4](https://github.com/cilium/cilium/releases/tag/v1.12.4)
* Update flannel from v0.15.1 to [v0.20.2](https://github.com/flannel-io/flannel/releases/tag/v0.20.2)
* Reminder: Modules are no longer published to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) ([#1282](https://github.com/poseidon/typhoon/pull/1282))
  * See [#1282](https://github.com/poseidon/typhoon/pull/1282) and [v1.25.4](https://github.com/poseidon/typhoon/releases/tag/v1.25.4) for details

### AWS

* Migrate AWS launch configurations to launch templates ([#1275](https://github.com/poseidon/typhoon/pull/1275))
  * Starting Dec 31, 2022, AWS won't add new instance types/families to launch configurations

### Addons

* Update ingress-nginx from v1.3.1 to [v1.5.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.5.1)
* Update Prometheus from v2.40.1 to [v2.40.5](https://github.com/prometheus/prometheus/releases/tag/v2.40.5)
* Update node-exporter from v1.3.1 to [v1.5.0](https://github.com/prometheus/node_exporter/releases/tag/v1.5.0)
* Update kube-state-metrics from v2.6.0 to [v2.7.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.7.0)
* Update Grafana from v9.2.4 to [v9.3.1](https://github.com/grafana/grafana/releases/tag/v9.3.1)

## v1.25.4

* Kubernetes [v1.25.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1254)
* Update Calico from v3.24.1 to [v3.24.5](https://github.com/projectcalico/calico/releases/tag/v3.24.5)
* Allow Kubelet kubeconfig to drain nodes, if desired ([#330](https://github.com/poseidon/terraform-render-bootstrap/pull/330))
* Re-enable Kubelet Graceful Node Shutdown ([#1261](https://github.com/poseidon/typhoon/pull/1261))
* Introduce companion project [poseidon/scuttle](https://github.com/poseidon/scuttle)
* Link to new Mastodon account for release announcements
  * [@typhoon@fosstodon.org](https://fosstodon.org/@typhoon)
  * [@poseidon@fosstodon.org](https://fosstodon.org/@poseidon)
* Deprecate publishing to the [Terraform Module Registry](https://registry.terraform.io/search/modules?q=poseidon) (the two `source` forms are compared below)
  * Typhoon docs have always shown using Git-based module sources, not the Terraform Module Registry
  * Module usage should be `source = "git::https://github.com/poseidon/typhoon/...` not `source = poseidon/kubernetes/...`
  * Terraform's Module Registry requires subtree-mirroring Typhoon into special terraform-platform-kubernetes repos, supports only release versions (no commit SHAs or forks), and for historical reasons only ever contained Flatcar Linux modules (not Fedora CoreOS)
  * Note: this does not affect Terraform providers like `poseidon/matchbox` or `poseidon/ct`; the registry works well for providers
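The two `source` forms side by side — a sketch using the Google Cloud module from the README; the registry form's platform segment is left as a placeholder, matching the `...` above:

```tf
# Supported: Git-based module source, pinned to a release tag (or commit SHA)
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.4"
  # ...
}

# Deprecated: Terraform Module Registry form (release versions only; the
# platform segment is a placeholder here)
# module "yavin" {
#   source  = "poseidon/kubernetes/<platform>"
#   version = "1.25.4"
# }
```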
### Fedora CoreOS

* Remove unused `Wants=network.target` from `etcd-member.service` ([#1254](https://github.com/poseidon/typhoon/pull/1254))

### Cloud

* Remove defunct `delete-node.service` from worker node configurations ([#1256](https://github.com/poseidon/typhoon/pull/1256))

### Addons

* Update Prometheus from v2.39.1 to [v2.40.1](https://github.com/prometheus/prometheus/releases/tag/v2.40.1)
* Update Grafana from v9.1.7 to [v9.2.4](https://github.com/grafana/grafana/releases/tag/v9.2.4)

## v1.25.3

* Kubernetes [v1.25.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#v1253)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` for addons ([#1246](https://github.com/poseidon/typhoon/pull/1246))
* Update Cilium from v1.12.2 to [v1.12.3](https://github.com/cilium/cilium/releases/tag/v1.12.3) ([#1253](https://github.com/poseidon/typhoon/pull/1253))

### Azure

@@ -22,6 +230,11 @@

* Switch from Azure Hypervisor gen1 to gen2 (**action required**) ([#1248](https://github.com/poseidon/typhoon/pull/1248))
  * Run `az vm image terms accept --publish kinvolk --offer flatcar-container-linux-free --plan stable-gen2`

### Docs

* Remove old docs note about not supporting ARM64 with Calico
  * Typhoon supports ARM64 with `cilium`, `calico`, and `flannel`

### Addons

* Update Prometheus from v2.38.0 to [v2.39.1](https://github.com/prometheus/prometheus/releases/tag/v2.39.1)
`README.md` (24 changed lines)

````diff
@@ -1,4 +1,9 @@
-# Typhoon [](https://github.com/poseidon/typhoon/releases) [](https://github.com/poseidon/typhoon/stargazers) [](https://github.com/sponsors/poseidon) [](https://twitter.com/typhoon8s)
+# Typhoon
+
+[](https://github.com/poseidon/typhoon/releases)
+[](https://github.com/poseidon/typhoon/stargazers)
+[](https://github.com/sponsors/poseidon)
+[](https://fosstodon.org/@typhoon)
 
 <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
 
@@ -13,7 +18,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.25.3 (upstream)
+* Kubernetes v1.30.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -65,7 +70,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.0"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -104,9 +109,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                      ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal <none>  Ready   6m   v1.25.3
-yavin-worker-jrbf.c.example-com.internal  <none>  Ready   5m   v1.25.3
-yavin-worker-mzdm.c.example-com.internal  <none>  Ready   5m   v1.25.3
+yavin-controller-0.c.example-com.internal <none>  Ready   6m   v1.30.0
+yavin-worker-jrbf.c.example-com.internal  <none>  Ready   5m   v1.30.0
+yavin-worker-mzdm.c.example-com.internal  <none>  Ready   5m   v1.30.0
 ```
 
 List the pods.
@@ -159,5 +164,12 @@ Poseidon's Github [Sponsors](https://github.com/sponsors/poseidon) support the i
   <img src="https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/assets/SVG/DO_Logo_horizontal_blue.svg" width="201px">
 </a>
 <br>
 <br>
 
+<a href="https://deploy.equinix.com/">
+  <img src="https://storage.googleapis.com/poseidon/equinix.png" width="201px">
+</a>
+<br>
+<br>
 
 If you'd like your company here, please contact dghubble at psdn.io.
````
```diff
@@ -24,7 +24,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:9.1.7
+          image: docker.io/grafana/grafana:9.3.1
           env:
             - name: GF_PATHS_CONFIG
               value: "/etc/grafana/custom.ini"
```

```diff
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: registry.k8s.io/ingress-nginx/controller:v1.3.1
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/public
```

```diff
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: registry.k8s.io/ingress-nginx/controller:v1.3.1
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/public
```

```diff
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: registry.k8s.io/ingress-nginx/controller:v1.3.1
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/public
```

```diff
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: registry.k8s.io/ingress-nginx/controller:v1.3.1
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/public
```

```diff
@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: registry.k8s.io/ingress-nginx/controller:v1.3.1
+          image: registry.k8s.io/ingress-nginx/controller:v1.5.1
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/public
```

```diff
@@ -59,4 +59,11 @@ rules:
       - get
       - list
       - watch
+  - apiGroups:
+      - discovery.k8s.io
+    resources:
+      - "endpointslices"
+    verbs:
+      - get
+      - list
+      - watch
```
```diff
@@ -21,7 +21,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.39.1
+          image: quay.io/prometheus/prometheus:v2.40.5
           args:
             - --web.listen-address=0.0.0.0:9090
             - --config.file=/etc/prometheus/prometheus.yaml
```

```diff
@@ -25,7 +25,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
         - name: kube-state-metrics
-          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
+          image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
          ports:
            - name: metrics
              containerPort: 8080
```

```diff
@@ -30,7 +30,7 @@ spec:
       hostPID: true
       containers:
         - name: node-exporter
-          image: quay.io/prometheus/node-exporter:v1.3.1
+          image: quay.io/prometheus/node-exporter:v1.5.0
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.25.3 (upstream)
+* Kubernetes v1.30.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
```

```diff
@@ -1,11 +1,11 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
   etcd_servers = aws_route53_record.etcds.*.fqdn
-  networking   = var.networking
+  networking   = var.install_container_networking ? var.networking : "none"
   network_mtu  = var.network_mtu
   pod_cidr     = var.pod_cidr
   service_cidr = var.service_cidr
```

```diff
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: etcd-member.service
@@ -9,10 +9,10 @@ systemd:
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/etcd-io/etcd
-        Wants=network-online.target network.target
+        Wants=network-online.target
        After=network-online.target
        [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
@@ -57,7 +57,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -116,7 +116,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.25.3
+          quay.io/poseidon/kubelet:v1.30.0
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -177,8 +177,8 @@ storage:
          mv static-manifests/* /etc/kubernetes/manifests/
          mkdir -p /opt/bootstrap/assets
          mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
+          rm -rf assets auth static-manifests tls manifests manifests-networking
          chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
        - path: /opt/bootstrap/apply
          mode: 0544
```

```diff
@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.disk_size
     iops        = var.disk_iops
     encrypted   = true
+    tags        = {}
   }
 
   # network
```

```diff
@@ -107,6 +107,12 @@ variable "networking" {
   default     = "cilium"
 }
 
+variable "install_container_networking" {
+  type        = bool
+  description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
+  default     = true
+}
+
 variable "network_mtu" {
   type        = number
   description = "CNI interface MTU (applies to calico only). Use 8981 if using instance types with Jumbo frames."
```

```diff
@@ -3,11 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws  = ">= 2.23, <= 5.0"
+    aws  = ">= 2.23, <= 6.0"
     null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
```
```diff
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: containerd.service
@@ -29,7 +29,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -77,19 +77,6 @@ systemd:
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
@@ -120,6 +107,8 @@ storage:
        clusterDomain: ${cluster_domain_suffix}
        healthzPort: 0
        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
        staticPodPath: /etc/kubernetes/manifests
        readOnlyPort: 0
        resolvConf: /run/systemd/resolve/resolv.conf
```

```diff
@@ -3,10 +3,10 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws = ">= 2.23, <= 5.0"
+    aws = ">= 2.23, <= 6.0"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
```

```diff
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
   vpc_zone_identifier = var.subnet_ids
 
   # template
-  launch_configuration = aws_launch_configuration.worker.name
+  launch_template {
+    id      = aws_launch_template.worker.id
+    version = aws_launch_template.worker.latest_version
+  }
 
   # target groups to which instances should be added
   target_group_arns = flatten([
@@ -49,25 +52,47 @@ resource "aws_autoscaling_group" "workers" {
 }
 
 # Worker template
-resource "aws_launch_configuration" "worker" {
+resource "aws_launch_template" "worker" {
   name_prefix   = "${var.name}-worker"
   image_id      = local.ami_id
   instance_type = var.instance_type
-  spot_price        = var.spot_price > 0 ? var.spot_price : null
-  enable_monitoring = false
+  monitoring {
+    enabled = false
+  }
 
-  user_data = data.ct_config.worker.rendered
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
-  root_block_device {
+  ebs_optimized = true
+  block_device_mappings {
+    device_name = "/dev/xvda"
+    ebs {
       volume_type = var.disk_type
       volume_size = var.disk_size
       iops        = var.disk_iops
       encrypted   = true
+      delete_on_termination = true
+    }
   }
 
   # network
-  security_groups = var.security_groups
+  vpc_security_group_ids = var.security_groups
+
+  # metadata
+  metadata_options {
+    http_tokens = "optional"
+  }
+
+  # spot
+  dynamic "instance_market_options" {
+    for_each = var.spot_price > 0 ? [1] : []
+    content {
+      market_type = "spot"
+      spot_options {
+        max_price = var.spot_price
+      }
+    }
+  }
 
   lifecycle {
     // Override the default destroy and replace update behavior
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.25.3 (upstream)
+* Kubernetes v1.30.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
```

```diff
@@ -1,11 +1,11 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
   etcd_servers = aws_route53_record.etcds.*.fqdn
-  networking   = var.networking
+  networking   = var.install_container_networking ? var.networking : "none"
   network_mtu  = var.network_mtu
   pod_cidr     = var.pod_cidr
   service_cidr = var.service_cidr
```

```diff
@@ -11,7 +11,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -58,7 +58,7 @@ systemd:
        After=coreos-metadata.service
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        EnvironmentFile=/run/metadata/coreos
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -109,7 +109,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -177,8 +177,8 @@ storage:
          mv static-manifests/* /etc/kubernetes/manifests/
          mkdir -p /opt/bootstrap/assets
          mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
+          rm -rf assets auth static-manifests tls manifests manifests-networking
        - path: /opt/bootstrap/apply
          mode: 0544
          contents:
```

```diff
@@ -32,6 +32,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.disk_size
     iops        = var.disk_iops
     encrypted   = true
+    tags        = {}
   }
 
   # network
```

```diff
@@ -107,6 +107,12 @@ variable "networking" {
   default     = "cilium"
 }
 
+variable "install_container_networking" {
+  type        = bool
+  description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
+  default     = true
+}
+
 variable "network_mtu" {
   type        = number
   description = "CNI interface MTU (applies to calico only). Use 8981 if using instance types with Jumbo frames."
```

```diff
@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws  = ">= 2.23, <= 5.0"
+    aws  = ">= 2.23, <= 6.0"
     null = ">= 2.1"
     ct = {
       source = "poseidon/ct"
```
```diff
@@ -30,7 +30,7 @@ systemd:
        After=coreos-metadata.service
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        EnvironmentFile=/run/metadata/coreos
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -78,19 +78,6 @@ systemd:
        RestartSec=5
        [Install]
        WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   files:
     - path: /etc/kubernetes/kubeconfig
@@ -119,6 +106,8 @@ storage:
        clusterDomain: ${cluster_domain_suffix}
        healthzPort: 0
        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
        staticPodPath: /etc/kubernetes/manifests
        readOnlyPort: 0
        resolvConf: /run/systemd/resolve/resolv.conf
```

```diff
@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    aws = ">= 2.23, <= 5.0"
+    aws = ">= 2.23, <= 6.0"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.11"
```

```diff
@@ -13,7 +13,10 @@ resource "aws_autoscaling_group" "workers" {
   vpc_zone_identifier = var.subnet_ids
 
   # template
-  launch_configuration = aws_launch_configuration.worker.name
+  launch_template {
+    id      = aws_launch_template.worker.id
+    version = aws_launch_template.worker.latest_version
+  }
 
   # target groups to which instances should be added
   target_group_arns = flatten([
@@ -49,25 +52,47 @@ resource "aws_autoscaling_group" "workers" {
 }
 
 # Worker template
-resource "aws_launch_configuration" "worker" {
+resource "aws_launch_template" "worker" {
   name_prefix   = "${var.name}-worker"
   image_id      = local.ami_id
   instance_type = var.instance_type
-  spot_price        = var.spot_price > 0 ? var.spot_price : null
-  enable_monitoring = false
+  monitoring {
+    enabled = false
+  }
 
-  user_data = data.ct_config.worker.rendered
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
-  root_block_device {
+  ebs_optimized = true
+  block_device_mappings {
+    device_name = "/dev/xvda"
+    ebs {
       volume_type = var.disk_type
      volume_size = var.disk_size
      iops        = var.disk_iops
      encrypted   = true
+      delete_on_termination = true
+    }
   }
 
   # network
-  security_groups = var.security_groups
+  vpc_security_group_ids = var.security_groups
+
+  # metadata
+  metadata_options {
+    http_tokens = "optional"
+  }
+
+  # spot
+  dynamic "instance_market_options" {
+    for_each = var.spot_price > 0 ? [1] : []
+    content {
+      market_type = "spot"
+      spot_options {
+        max_price = var.spot_price
+      }
+    }
+  }
 
   lifecycle {
     // Override the default destroy and replace update behavior
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.25.3 (upstream)
+* Kubernetes v1.30.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
```

```diff
@@ -1,13 +1,12 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
   etcd_servers = formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)
 
-  networking = var.networking
-
+  networking = var.install_container_networking ? var.networking : "none"
   # only effective with Calico networking
   # we should be able to use 1450 MTU, but in practice, 1410 was needed
   network_encapsulation = "vxlan"
```

```diff
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: etcd-member.service
@@ -9,10 +9,10 @@ systemd:
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/etcd-io/etcd
-        Wants=network-online.target network.target
+        Wants=network-online.target
        After=network-online.target
        [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
@@ -54,7 +54,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -111,7 +111,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.25.3
+          quay.io/poseidon/kubelet:v1.30.0
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -172,8 +172,8 @@ storage:
          mv static-manifests/* /etc/kubernetes/manifests/
          mkdir -p /opt/bootstrap/assets
          mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
+          rm -rf assets auth static-manifests tls manifests-networking manifests
          chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
        - path: /opt/bootstrap/apply
          mode: 0544
```

```diff
@@ -1,3 +1,11 @@
+locals {
+  # Typhoon ssh_authorized_key supports RSA or newer formats (e.g. ed25519).
+  # However, Azure requires an older RSA key to pass validations. To use a
+  # newer key format, pass a dummy RSA key as the azure_authorized_key and
+  # delete the associated private key so it's never used.
+  azure_authorized_key = var.azure_authorized_key == "" ? var.ssh_authorized_key : var.azure_authorized_key
+}
+
 # Discrete DNS records for each controller's private IPv4 for etcd usage
 resource "azurerm_dns_a_record" "etcds" {
   count = var.controller_count
@@ -55,7 +63,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   admin_username = "core"
   admin_ssh_key {
     username   = "core"
-    public_key = var.ssh_authorized_key
+    public_key = local.azure_authorized_key
   }
 
   lifecycle {
```
```diff
@@ -82,12 +82,24 @@ variable "ssh_authorized_key" {
   description = "SSH public key for user 'core'"
 }
 
+variable "azure_authorized_key" {
+  type        = string
+  description = "Optionally, pass a dummy RSA key to satisfy Azure validations (then use an ed25519 key set above)"
+  default     = ""
+}
+
 variable "networking" {
   type        = string
   description = "Choice of networking provider (flannel, calico, or cilium)"
   default     = "cilium"
 }
 
+variable "install_container_networking" {
+  type        = bool
+  description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
+  default     = true
+}
+
 variable "host_cidr" {
   type        = string
   description = "CIDR IPv4 range to assign to instances"
```

```diff
@@ -7,7 +7,7 @@ terraform {
     null = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
```

```diff
@@ -17,6 +17,7 @@ module "workers" {
   # configuration
   kubeconfig            = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key    = var.ssh_authorized_key
+  azure_authorized_key  = var.azure_authorized_key
   service_cidr          = var.service_cidr
   cluster_domain_suffix = var.cluster_domain_suffix
   snippets              = var.worker_snippets
```

```diff
@@ -1,6 +1,6 @@
 ---
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: containerd.service
@@ -26,7 +26,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -72,19 +72,6 @@ systemd:
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
@@ -115,6 +102,8 @@ storage:
        clusterDomain: ${cluster_domain_suffix}
        healthzPort: 0
        rotateCertificates: true
+        shutdownGracePeriod: 45s
+        shutdownGracePeriodCriticalPods: 30s
        staticPodPath: /etc/kubernetes/manifests
        readOnlyPort: 0
        resolvConf: /run/systemd/resolve/resolv.conf
```

```diff
@@ -73,6 +73,12 @@ variable "ssh_authorized_key" {
   description = "SSH public key for user 'core'"
 }
 
+variable "azure_authorized_key" {
+  type        = string
+  description = "Optionally, pass a dummy RSA key to satisfy Azure validations (then use an ed25519 key set above)"
+  default     = ""
+}
+
 variable "service_cidr" {
   type        = string
   description = <<EOD
```

```diff
@@ -6,7 +6,7 @@ terraform {
     azurerm = ">= 2.8, < 4.0"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.13"
     }
   }
 }
```

```diff
@@ -1,3 +1,7 @@
+locals {
+  azure_authorized_key = var.azure_authorized_key == "" ? var.ssh_authorized_key : var.azure_authorized_key
+}
+
 # Workers scale set
 resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   resource_group_name = var.resource_group_name
@@ -22,7 +26,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   admin_username = "core"
   admin_ssh_key {
     username   = "core"
-    public_key = var.ssh_authorized_key
+    public_key = var.azure_authorized_key
   }
 
   # network
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.25.3 (upstream)
+* Kubernetes v1.30.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
```

```diff
@@ -1,13 +1,12 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
   etcd_servers = formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)
 
-  networking = var.networking
-
+  networking = var.install_container_networking ? var.networking : "none"
   # only effective with Calico networking
   # we should be able to use 1450 MTU, but in practice, 1410 was needed
   network_encapsulation = "vxlan"
```

```diff
@@ -11,7 +11,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -56,7 +56,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -105,7 +105,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -173,8 +173,8 @@ storage:
          mv static-manifests/* /etc/kubernetes/manifests/
          mkdir -p /opt/bootstrap/assets
          mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
+          rm -rf assets auth static-manifests tls manifests-networking manifests
        - path: /opt/bootstrap/apply
          mode: 0544
          contents:
```

```diff
@@ -20,6 +20,12 @@ locals {
   channel      = split("-", var.os_image)[1]
   offer_suffix = var.arch == "arm64" ? "corevm" : "free"
   urn          = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"
+
+  # Typhoon ssh_authorized_key supports RSA or newer formats (e.g. ed25519).
+  # However, Azure requires an older RSA key to pass validations. To use a
+  # newer key format, pass a dummy RSA key as the azure_authorized_key and
+  # delete the associated private key so it's never used.
+  azure_authorized_key = var.azure_authorized_key == "" ? var.ssh_authorized_key : var.azure_authorized_key
 }
 
 # Controller availability set to spread controllers
@@ -44,6 +50,9 @@ resource "azurerm_linux_virtual_machine" "controllers" {
 
   size        = var.controller_type
   custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])
+  boot_diagnostics {
+    # defaults to a managed storage account
+  }
 
   # storage
   os_disk {
@@ -72,14 +81,14 @@ resource "azurerm_linux_virtual_machine" "controllers" {
 
   # network
   network_interface_ids = [
-    azurerm_network_interface.controllers.*.id[count.index]
+    azurerm_network_interface.controllers[count.index].id
   ]
 
   # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
   admin_username = "core"
   admin_ssh_key {
     username   = "core"
-    public_key = var.ssh_authorized_key
+    public_key = local.azure_authorized_key
   }
 
   lifecycle {
```
@ -88,12 +88,24 @@ variable "ssh_authorized_key" {
|
||||
description = "SSH public key for user 'core'"
|
||||
}
|
||||
|
||||
variable "azure_authorized_key" {
|
||||
type = string
|
||||
description = "Optionally, pass a dummy RSA key to satisfy Azure validations (then use an ed25519 key set above)"
|
||||
default = ""
|
||||
}
|
||||
|
||||
variable "networking" {
|
||||
type = string
|
||||
description = "Choice of networking provider (flannel, calico, or cilium)"
|
||||
default = "cilium"
|
||||
}
|
||||
|
||||
variable "install_container_networking" {
|
||||
type = bool
|
||||
description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
|
||||
default = true
|
||||
}
|
||||
|
||||
variable "host_cidr" {
|
||||
type = string
|
||||
description = "CIDR IPv4 range to assign to instances"
|
||||
|
@@ -17,6 +17,7 @@ module "workers" {
# configuration
kubeconfig = module.bootstrap.kubeconfig-kubelet
ssh_authorized_key = var.ssh_authorized_key
azure_authorized_key = var.azure_authorized_key
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
snippets = var.worker_snippets
@@ -28,7 +28,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -74,19 +74,6 @@ systemd:
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
@@ -115,6 +102,8 @@ storage:
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
@@ -79,6 +79,12 @@ variable "ssh_authorized_key" {
description = "SSH public key for user 'core'"
}

variable "azure_authorized_key" {
type = string
description = "Optionally, pass a dummy RSA key to satisfy Azure validations (then use an ed25519 key set above)"
default = ""
}

variable "service_cidr" {
type = string
description = <<EOD
@@ -3,6 +3,8 @@ locals {
channel = split("-", var.os_image)[1]
offer_suffix = var.arch == "arm64" ? "corevm" : "free"
urn = var.arch == "arm64" ? local.channel : "${local.channel}-gen2"

azure_authorized_key = var.azure_authorized_key == "" ? var.ssh_authorized_key : var.azure_authorized_key
}

# Workers scale set
@@ -17,6 +19,9 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
computer_name_prefix = "${var.name}-worker"
single_placement_group = false
custom_data = base64encode(data.ct_config.worker.rendered)
boot_diagnostics {
# defaults to a managed storage account
}

# storage
os_disk {
@@ -45,7 +50,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
admin_username = "core"
admin_ssh_key {
username = "core"
public_key = var.ssh_authorized_key
public_key = local.azure_authorized_key
}

# network
@@ -69,6 +74,9 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
# eviction policy may only be set when priority is Spot
priority = var.priority
eviction_policy = var.priority == "Spot" ? "Delete" : null
termination_notification {
enabled = true
}
}

# Scale up or down to maintain desired number, tolerating deallocations.
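The Spot handling above only sets an eviction policy when priority is Spot. A minimal sketch of a Spot-priced worker pool, assuming the workers module exposes the resource's var.priority (placeholder values):

module "spot-workers" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=VERSION" # placeholder path/ref

  # "Spot" bids on spare capacity; the resource above then renders
  # eviction_policy = "Delete" so evicted VMs are removed and replaced
  priority = "Spot"
  # ...other required arguments elided
}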
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.25.3 (upstream)
* Kubernetes v1.30.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,11 +1,11 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
etcd_servers = var.controllers.*.domain
networking = var.networking
networking = var.install_container_networking ? var.networking : "none"
network_mtu = var.network_mtu
network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr
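A hedged sketch of the new install_container_networking toggle (hypothetical values; module path is a placeholder): when false, the bootstrap module is handed networking "none", so no CNI manifests are applied and the provider can be self-managed.

module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=VERSION" # placeholder path/ref

  # keep the networking choice, but skip installing it during bootstrap;
  # apply Cilium (or another CNI) out-of-band once the cluster comes up
  networking                   = "cilium"
  install_container_networking = false
  # ...other required arguments elided
}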
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: etcd-member.service
@@ -9,10 +9,10 @@ systemd:
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/etcd-io/etcd
Wants=network-online.target network.target
Wants=network-online.target
After=network-online.target
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
Type=exec
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
@@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -113,7 +113,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=-/usr/bin/podman rm bootstrap
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
@@ -182,8 +182,8 @@ storage:
mv static-manifests/* /etc/kubernetes/manifests/
mkdir -p /opt/bootstrap/assets
mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
rm -rf assets auth static-manifests tls manifests-networking manifests
chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
- path: /opt/bootstrap/apply
mode: 0544
@@ -1,22 +0,0 @@
# Match each controller or worker to a profile

resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]

selector = {
mac = var.controllers.*.mac[count.index]
}
}

resource "matchbox_group" "worker" {
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers.*.name[count.index])
profile = matchbox_profile.workers.*.name[count.index]

selector = {
mac = var.workers.*.mac[count.index]
}
}
@@ -3,6 +3,13 @@ output "kubeconfig-admin" {
sensitive = true
}

# Outputs for workers

output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
sensitive = true
}

# Outputs for debug

output "assets_dist" {
@@ -28,6 +28,16 @@ locals {
args = var.cached_install ? local.cached_args : local.remote_args
}

# Match a controller to a profile by MAC
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers.*.name[count.index])
profile = matchbox_profile.controllers.*.name[count.index]

selector = {
mac = var.controllers.*.mac[count.index]
}
}

// Fedora CoreOS controller profile
resource "matchbox_profile" "controllers" {
@@ -55,30 +65,3 @@ data "ct_config" "controllers" {
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Fedora CoreOS worker profile
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])

kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

# Fedora CoreOS workers
data "ct_config" "workers" {
count = length(var.workers)
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}
@@ -15,7 +15,6 @@ resource "null_resource" "copy-controller-secrets" {
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]

@@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
}
}

# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.workers)

# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.controller,
matchbox_group.worker,
]

connection {
type = "ssh"
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}

provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "/home/core/kubeconfig"
}

provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
"sudo touch /etc/kubernetes",
]
}
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
@@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
# while no Kubelets are running.
depends_on = [
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]

connection {
@@ -53,6 +53,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
default = []
}

variable "snippets" {
@@ -91,6 +92,12 @@ variable "networking" {
default = "cilium"
}

variable "install_container_networking" {
type = bool
description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
default = true
}

variable "network_mtu" {
type = number
description = "CNI interface MTU (applies to calico only)"
@@ -6,7 +6,7 @@ terraform {
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
version = "~> 0.13"
}
matchbox = {
source = "poseidon/matchbox"
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: containerd.service
@@ -25,7 +25,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -111,6 +111,8 @@ storage:
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
bare-metal/fedora-coreos/kubernetes/worker/matchbox.tf (new file, 63 lines)
@@ -0,0 +1,63 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = [
"--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]

remote_args = [
"initrd=main",
"coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
]

cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = [
"/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
]

cached_args = [
"initrd=main",
"coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
"coreos.inst.install_dev=${var.install_disk}",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
]

kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
args = var.cached_install ? local.cached_args : local.remote_args
}

// Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
name = format("%s-%s", var.cluster_name, var.name)
profile = matchbox_profile.worker.name
selector = {
mac = var.mac
}
}

// Fedora CoreOS worker profile
resource "matchbox_profile" "worker" {
name = format("%s-worker-%s", var.cluster_name, var.name)
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.worker.rendered
}

# Fedora CoreOS workers
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
})
strict = true
snippets = var.snippets
}
bare-metal/fedora-coreos/kubernetes/worker/ssh.tf (new file, 27 lines)
@@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.worker,
]

connection {
type = "ssh"
host = var.domain
user = "core"
timeout = "60m"
}

provisioner "file" {
content = var.kubeconfig
destination = "/home/core/kubeconfig"
}

provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
"sudo touch /etc/kubernetes",
]
}
}
bare-metal/fedora-coreos/kubernetes/worker/variables.tf (new file, 111 lines)
@@ -0,0 +1,111 @@
variable "cluster_name" {
|
||||
type = string
|
||||
description = "Must be set to the `cluster_name` of cluster"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
|
||||
variable "matchbox_http_endpoint" {
|
||||
type = string
|
||||
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
|
||||
}
|
||||
|
||||
variable "os_stream" {
|
||||
type = string
|
||||
description = "Fedora CoreOS release stream (e.g. stable, testing, next)"
|
||||
default = "stable"
|
||||
|
||||
validation {
|
||||
condition = contains(["stable", "testing", "next"], var.os_stream)
|
||||
error_message = "The os_stream must be stable, testing, or next."
|
||||
}
|
||||
}
|
||||
|
||||
variable "os_version" {
|
||||
type = string
|
||||
description = "Fedora CoreOS version to PXE and install (e.g. 31.20200310.3.0)"
|
||||
}
|
||||
|
||||
# machine
|
||||
|
||||
variable "name" {
|
||||
type = string
|
||||
description = "Unique name for the machine (e.g. node1)"
|
||||
}
|
||||
|
||||
variable "mac" {
|
||||
type = string
|
||||
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
|
||||
}
|
||||
|
||||
variable "domain" {
|
||||
type = string
|
||||
description = "Fully qualified domain name (e.g. node1.example.com)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "kubeconfig" {
|
||||
type = string
|
||||
description = "Must be set to `kubeconfig` output by cluster"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = string
|
||||
description = "SSH public key for user 'core'"
|
||||
}
|
||||
|
||||
variable "snippets" {
|
||||
type = list(string)
|
||||
description = "List of Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_labels" {
|
||||
type = list(string)
|
||||
description = "List of initial node labels"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_taints" {
|
||||
type = list(string)
|
||||
description = "List of initial node taints"
|
||||
default = []
|
||||
}
|
||||
|
||||
# optional
|
||||
|
||||
variable "cached_install" {
|
||||
type = bool
|
||||
description = "Whether Fedora CoreOS should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
|
||||
default = false
|
||||
}
|
||||
|
||||
variable "install_disk" {
|
||||
type = string
|
||||
description = "Disk device to install Fedora CoreOS (e.g. sda)"
|
||||
default = "sda"
|
||||
}
|
||||
|
||||
variable "kernel_args" {
|
||||
type = list(string)
|
||||
description = "Additional kernel arguments to provide at PXE boot."
|
||||
default = []
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "service_cidr" {
|
||||
type = string
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
|
||||
EOD
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
type = string
|
||||
default = "cluster.local"
|
||||
}
|
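The worker template above derives the cluster DNS address with cidrhost(var.service_cidr, 10), matching this variable's note that the 1st IP is reserved for kube-apiserver and the 10th for CoreDNS. A quick sketch of the arithmetic:

locals {
  service_cidr = "10.3.0.0/16" # the default above

  apiserver_ip   = cidrhost(local.service_cidr, 1)  # "10.3.0.1"
  cluster_dns_ip = cidrhost(local.service_cidr, 10) # "10.3.0.10"
}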
bare-metal/fedora-coreos/kubernetes/worker/versions.tf (new file, 17 lines)
@@ -0,0 +1,17 @@
# Terraform version and plugin versions

terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.13"
}
matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
}
}
}
bare-metal/fedora-coreos/kubernetes/workers.tf (new file, 30 lines)
@@ -0,0 +1,30 @@
module "workers" {
|
||||
count = length(var.workers)
|
||||
source = "./worker"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
|
||||
# metal
|
||||
matchbox_http_endpoint = var.matchbox_http_endpoint
|
||||
os_stream = var.os_stream
|
||||
os_version = var.os_version
|
||||
|
||||
# machine
|
||||
name = var.workers[count.index].name
|
||||
mac = var.workers[count.index].mac
|
||||
domain = var.workers[count.index].domain
|
||||
|
||||
# configuration
|
||||
kubeconfig = module.bootstrap.kubeconfig-kubelet
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
service_cidr = var.service_cidr
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
|
||||
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
|
||||
snippets = lookup(var.snippets, var.workers[count.index].name, [])
|
||||
|
||||
# optional
|
||||
cached_install = var.cached_install
|
||||
install_disk = var.install_disk
|
||||
kernel_args = var.kernel_args
|
||||
}
|
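Besides the workers wrapper above, the new ./worker module can plausibly be instantiated directly to attach one extra machine. A hedged sketch with placeholder values (the registry path, ref, version, MAC, and key are assumptions, not from this diff):

module "extra-worker" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=VERSION" # placeholder path/ref

  cluster_name = "mercury"

  # metal
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  os_stream              = "stable"
  os_version             = "38.20230322.3.0" # placeholder build

  # machine
  name   = "node4"
  mac    = "52:54:00:b9:ea:9c" # placeholder MAC
  domain = "node4.example.com"

  # configuration: wire in the cluster's new `kubeconfig` output
  kubeconfig         = module.mercury.kubeconfig
  ssh_authorized_key = "ssh-ed25519 AAAA..." # placeholder key
}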
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.25.3 (upstream)
* Kubernetes v1.30.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,11 +1,11 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]
etcd_servers = var.controllers.*.domain
networking = var.networking
networking = var.install_container_networking ? var.networking : "none"
network_mtu = var.network_mtu
network_ip_autodetection_method = var.network_ip_autodetection_method
pod_cidr = var.pod_cidr
@@ -11,7 +11,7 @@ systemd:
Requires=docker.service
After=docker.service
[Service]
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
ExecStartPre=/usr/bin/docker run -d \
--name etcd \
--network host \
@@ -64,7 +64,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -114,7 +114,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@@ -184,8 +184,8 @@ storage:
mv static-manifests/* /etc/kubernetes/manifests/
mkdir -p /opt/bootstrap/assets
mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
rm -rf assets auth static-manifests tls manifests-networking manifests
- path: /opt/bootstrap/apply
mode: 0544
contents:
@@ -35,6 +35,7 @@ storage:
-d ${install_disk} \
-C ${os_channel} \
-V ${os_version} \
${oem_flag} \
${baseurl_flag} \
-i ignition.json
udevadm settle
@@ -1,35 +0,0 @@
resource "matchbox_group" "install" {
count = length(var.controllers) + length(var.workers)

name = format("install-%s", concat(var.controllers.*.name, var.workers.*.name)[count.index])

# pick Matchbox profile (Flatcar upstream or Matchbox image cache)
profile = var.cached_install ? matchbox_profile.cached-flatcar-install.*.name[count.index] : matchbox_profile.flatcar-install.*.name[count.index]

selector = {
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
}
}

resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
profile = matchbox_profile.controllers.*.name[count.index]

selector = {
mac = var.controllers[count.index].mac
os = "installed"
}
}

resource "matchbox_group" "worker" {
count = length(var.workers)
name = format("%s-%s", var.cluster_name, var.workers[count.index].name)
profile = matchbox_profile.workers.*.name[count.index]

selector = {
mac = var.workers[count.index].mac
os = "installed"
}
}
@@ -3,6 +3,13 @@ output "kubeconfig-admin" {
sensitive = true
}

# Outputs for workers

output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
sensitive = true
}

# Outputs for debug

output "assets_dist" {
@@ -1,54 +1,53 @@
locals {
# flatcar-stable -> stable channel
channel = split("-", var.os_channel)[1]
}

// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
count = length(var.controllers) + length(var.workers)
name = format("%s-flatcar-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])

kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"

initrd = [
remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
remote_initrd = [
"${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]

args = flatten([
args = [
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])
]

raw_ignition = data.ct_config.install.*.rendered[count.index]
}

// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-install" {
count = length(var.controllers) + length(var.workers)
name = format("%s-cached-flatcar-linux-install-%s", var.cluster_name, concat(var.controllers.*.name, var.workers.*.name)[count.index])

kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"

initrd = [
cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
cached_initrd = [
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]

args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])
kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}

raw_ignition = data.ct_config.cached-install.*.rendered[count.index]
# Match controllers to install profiles by MAC
resource "matchbox_group" "install" {
count = length(var.controllers)

name = format("install-%s", var.controllers[count.index].name)
profile = matchbox_profile.install[count.index].name
selector = {
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
}
}

// Flatcar Linux install
resource "matchbox_profile" "install" {
count = length(var.controllers)

name = format("%s-install-%s", var.cluster_name, var.controllers.*.name[count.index])
kernel = local.kernel
initrd = local.initrd
args = concat(local.args, var.kernel_args)

raw_ignition = data.ct_config.install[count.index].rendered
}

# Flatcar Linux install
data "ct_config" "install" {
count = length(var.controllers) + length(var.workers)
count = length(var.controllers)

content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
@@ -56,26 +55,22 @@ data "ct_config" "install" {
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
oem_flag = var.oem_type != "" ? "-o ${var.oem_type}" : ""
# only cached profile adds -b baseurl
baseurl_flag = ""
baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
})
strict = true
}

# Flatcar Linux cached install
data "ct_config" "cached-install" {
count = length(var.controllers) + length(var.workers)
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
})
strict = true
# Match each controller by MAC
resource "matchbox_group" "controller" {
count = length(var.controllers)
name = format("%s-%s", var.cluster_name, var.controllers[count.index].name)
profile = matchbox_profile.controllers[count.index].name
selector = {
mac = var.controllers[count.index].mac
os = "installed"
}
}

// Kubernetes Controller profiles
@@ -99,25 +94,3 @@ data "ct_config" "controllers" {
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = length(var.workers)
name = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
raw_ignition = data.ct_config.workers.*.rendered[count.index]
}

# Flatcar Linux workers
data "ct_config" "workers" {
count = length(var.workers)
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.workers.*.domain[count.index]
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
node_labels = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
node_taints = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
})
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}
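The collapsed install path hinges on the baseurl_flag conditional above; a small sketch of how it renders in each mode:

locals {
  matchbox_http_endpoint = "http://matchbox.example.com:8080" # placeholder
  cached_install         = true

  # true  -> "-b http://matchbox.example.com:8080/assets/flatcar" (install from cache)
  # false -> "" (flatcar-install downloads from release.flatcar-linux.net)
  baseurl_flag = local.cached_install ? "-b ${local.matchbox_http_endpoint}/assets/flatcar" : ""
}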
@@ -16,7 +16,6 @@ resource "null_resource" "copy-controller-secrets" {
depends_on = [
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]

@@ -45,37 +44,6 @@ resource "null_resource" "copy-controller-secrets" {
}
}

# Secure copy kubeconfig to all workers. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
count = length(var.workers)

# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
]

connection {
type = "ssh"
host = var.workers.*.domain[count.index]
user = "core"
timeout = "60m"
}

provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "/home/core/kubeconfig"
}

provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
@@ -83,7 +51,6 @@ resource "null_resource" "bootstrap" {
# while no Kubelets are running.
depends_on = [
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]

connection {
@@ -99,4 +66,3 @@ resource "null_resource" "bootstrap" {
]
}
}
@@ -52,6 +52,7 @@ List of worker machine details (unique name, identifying MAC address, FQDN)
{ name = "node3", mac = "52:54:00:c3:61:77", domain = "node3.example.com"}
]
EOD
default = []
}

variable "snippets" {
@@ -90,6 +91,12 @@ variable "networking" {
default = "cilium"
}

variable "install_container_networking" {
type = bool
description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
default = true
}

variable "network_mtu" {
type = number
description = "CNI interface MTU (applies to calico only)"
@@ -155,6 +162,17 @@ variable "enable_aggregation" {
default = true
}

variable "oem_type" {
type = string
description = <<EOD
An OEM type to install with flatcar-install. Find available types by looking for Flatcar image files
ending in `image.bin.bz2`. The OEM identifier is contained in the filename.
E.g., `flatcar_production_vmware_raw_image.bin.bz2` leads to `vmware_raw`.
See: https://www.flatcar.org/docs/latest/installing/bare-metal/installing-to-disk/#choose-a-channel
EOD
default = ""
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
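Following the oem_type description above, a sketch of the flag it produces for a VMware image (flatcar_production_vmware_raw_image.bin.bz2 leads to vmware_raw):

locals {
  oem_type = "vmware_raw" # from the filename example in the description
  oem_flag = local.oem_type != "" ? "-o ${local.oem_type}" : "" # renders "-o vmware_raw"
}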
@@ -0,0 +1,47 @@
variant: flatcar
version: 1.0.0
systemd:
units:
- name: installer.service
enabled: true
contents: |
[Unit]
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStart=/opt/installer
[Install]
WantedBy=multi-user.target
# Avoid using the standard SSH port so terraform apply cannot SSH until
# post-install. But admins may SSH to debug disk install problems.
# After install, sshd will use port 22 and users/terraform can connect.
- name: sshd.socket
dropins:
- name: 10-sshd-port.conf
contents: |
[Socket]
ListenStream=
ListenStream=2222
storage:
files:
- path: /opt/installer
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
flatcar-install \
-d ${install_disk} \
-C ${os_channel} \
-V ${os_version} \
${oem_flag} \
${baseurl_flag} \
-i ignition.json
udevadm settle
systemctl reboot
passwd:
users:
- name: core
ssh_authorized_keys:
- "${ssh_authorized_key}"
@@ -36,7 +36,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -116,6 +116,8 @@ storage:
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
bare-metal/flatcar-linux/kubernetes/worker/matchbox.tf (new file, 88 lines)
@@ -0,0 +1,88 @@
locals {
# flatcar-stable -> stable channel
channel = split("-", var.os_channel)[1]

remote_kernel = "${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
remote_initrd = [
"${var.download_protocol}://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = flatten([
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
var.kernel_args,
])

cached_kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
cached_initrd = [
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]

kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
initrd = var.cached_install ? local.cached_initrd : local.remote_initrd
}

# Match machine to an install profile by MAC
resource "matchbox_group" "install" {
name = format("install-%s", var.name)
profile = matchbox_profile.install.name
selector = {
mac = var.mac
}
}

// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "install" {
name = format("%s-install-%s", var.cluster_name, var.name)
kernel = local.kernel
initrd = local.initrd
args = local.args

raw_ignition = data.ct_config.install.rendered
}

# Flatcar Linux install
data "ct_config" "install" {
content = templatefile("${path.module}/butane/install.yaml", {
os_channel = local.channel
os_version = var.os_version
ignition_endpoint = format("%s/ignition", var.matchbox_http_endpoint)
mac = var.mac
install_disk = var.install_disk
ssh_authorized_key = var.ssh_authorized_key
oem_flag = var.oem_type != "" ? "-o ${var.oem_type}" : ""
# only cached profile adds -b baseurl
baseurl_flag = var.cached_install ? "-b ${var.matchbox_http_endpoint}/assets/flatcar" : ""
})
strict = true
}

# Match a worker to a profile by MAC
resource "matchbox_group" "worker" {
name = format("%s-%s", var.cluster_name, var.name)
profile = matchbox_profile.worker.name
selector = {
mac = var.mac
os = "installed"
}
}

// Flatcar Linux Worker profile
resource "matchbox_profile" "worker" {
name = format("%s-worker-%s", var.cluster_name, var.name)
raw_ignition = data.ct_config.worker.rendered
}

# Flatcar Linux workers
data "ct_config" "worker" {
content = templatefile("${path.module}/butane/worker.yaml", {
domain_name = var.domain
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints)
})
strict = true
snippets = var.snippets
}
bare-metal/flatcar-linux/kubernetes/worker/ssh.tf (new file, 27 lines)
@@ -0,0 +1,27 @@
# Secure copy kubeconfig to worker. Activates kubelet.service
resource "null_resource" "copy-worker-secrets" {
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
matchbox_group.install,
matchbox_group.worker,
]

connection {
type = "ssh"
host = var.domain
user = "core"
timeout = "60m"
}

provisioner "file" {
content = var.kubeconfig
destination = "/home/core/kubeconfig"
}

provisioner "remote-exec" {
inline = [
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}
bare-metal/flatcar-linux/kubernetes/worker/variables.tf (new file, 126 lines)
@@ -0,0 +1,126 @@
variable "cluster_name" {
|
||||
type = string
|
||||
description = "Must be set to the `cluster_name` of cluster"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
|
||||
variable "matchbox_http_endpoint" {
|
||||
type = string
|
||||
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
|
||||
}
|
||||
|
||||
variable "os_channel" {
|
||||
type = string
|
||||
description = "Channel for a Flatcar Linux (flatcar-stable, flatcar-beta, flatcar-alpha)"
|
||||
|
||||
validation {
|
||||
condition = contains(["flatcar-stable", "flatcar-beta", "flatcar-alpha"], var.os_channel)
|
||||
error_message = "The os_channel must be flatcar-stable, flatcar-beta, or flatcar-alpha."
|
||||
}
|
||||
}
|
||||
|
||||
variable "os_version" {
|
||||
type = string
|
||||
description = "Version of Flatcar Linux to PXE and install (e.g. 2079.5.1)"
|
||||
}
|
||||
|
||||
# machine
|
||||
|
||||
variable "name" {
|
||||
type = string
|
||||
description = "Unique name for the machine (e.g. node1)"
|
||||
}
|
||||
|
||||
variable "mac" {
|
||||
type = string
|
||||
description = "MAC address (e.g. 52:54:00:a1:9c:ae)"
|
||||
}
|
||||
|
||||
variable "domain" {
|
||||
type = string
|
||||
description = "Fully qualified domain name (e.g. node1.example.com)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "kubeconfig" {
|
||||
type = string
|
||||
description = "Must be set to `kubeconfig` output by cluster"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = string
|
||||
description = "SSH public key for user 'core'"
|
||||
}
|
||||
|
||||
variable "snippets" {
|
||||
type = list(string)
|
||||
description = "List of Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_labels" {
|
||||
type = list(string)
|
||||
description = "List of initial node labels"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "node_taints" {
|
||||
type = list(string)
|
||||
description = "List of initial node taints"
|
||||
default = []
|
||||
}
|
||||
|
||||
# optional
|
||||
|
||||
variable "download_protocol" {
|
||||
type = string
|
||||
description = "Protocol iPXE should use to download the kernel and initrd. Defaults to https, which requires iPXE compiled with crypto support. Unused if cached_install is true."
|
||||
default = "https"
|
||||
}
|
||||
|
||||
variable "cached_install" {
|
||||
type = bool
|
||||
description = "Whether Flatcar Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
|
||||
default = false
|
||||
}
|
||||
|
||||
variable "install_disk" {
|
||||
type = string
|
||||
default = "/dev/sda"
|
||||
description = "Disk device to which the install profiles should install Flatcar Linux (e.g. /dev/sda)"
|
||||
}
|
||||
|
||||
variable "kernel_args" {
|
||||
type = list(string)
|
||||
description = "Additional kernel arguments to provide at PXE boot."
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "oem_type" {
|
||||
type = string
|
||||
default = ""
|
||||
description = "An OEM type to install with flatcar-install."
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "service_cidr" {
|
||||
type = string
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
|
||||
EOD
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
type = string
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
|
bare-metal/flatcar-linux/kubernetes/worker/versions.tf (new file, 16 lines)
@@ -0,0 +1,16 @@
# Terraform version and plugin versions

terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
null = ">= 2.1"
ct = {
source = "poseidon/ct"
version = "~> 0.9"
}
matchbox = {
source = "poseidon/matchbox"
version = "~> 0.5.0"
}
}
}
bare-metal/flatcar-linux/kubernetes/workers.tf (new file, 33 lines)
@@ -0,0 +1,33 @@
module "workers" {
|
||||
count = length(var.workers)
|
||||
source = "./worker"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
|
||||
# metal
|
||||
matchbox_http_endpoint = var.matchbox_http_endpoint
|
||||
os_channel = var.os_channel
|
||||
os_version = var.os_version
|
||||
|
||||
# machine
|
||||
name = var.workers[count.index].name
|
||||
mac = var.workers[count.index].mac
|
||||
domain = var.workers[count.index].domain
|
||||
|
||||
# configuration
|
||||
kubeconfig = module.bootstrap.kubeconfig-kubelet
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
service_cidr = var.service_cidr
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
|
||||
node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
|
||||
snippets = lookup(var.snippets, var.workers[count.index].name, [])
|
||||
|
||||
# optional
|
||||
download_protocol = var.download_protocol
|
||||
cached_install = var.cached_install
|
||||
install_disk = var.install_disk
|
||||
kernel_args = var.kernel_args
|
||||
oem_type = var.oem_type
|
||||
}
|
||||
|
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.25.3 (upstream)
* Kubernetes v1.30.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,13 +1,12 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
etcd_servers = digitalocean_record.etcds.*.fqdn

networking = var.networking

networking = var.install_container_networking ? var.networking : "none"
# only effective with Calico networking
network_encapsulation = "vxlan"
network_mtu = "1450"
|
||||
---
|
||||
variant: fcos
|
||||
version: 1.4.0
|
||||
version: 1.5.0
|
||||
systemd:
|
||||
units:
|
||||
- name: etcd-member.service
|
||||
@ -9,10 +9,10 @@ systemd:
|
||||
[Unit]
|
||||
Description=etcd (System Container)
|
||||
Documentation=https://github.com/etcd-io/etcd
|
||||
Wants=network-online.target network.target
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
Type=exec
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/etcd
|
||||
ExecStartPre=-/usr/bin/podman rm etcd
|
||||
@ -55,7 +55,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -123,7 +123,7 @@ systemd:
|
||||
--volume /opt/bootstrap/assets:/assets:ro,Z \
|
||||
--volume /opt/bootstrap/apply:/apply:ro,Z \
|
||||
--entrypoint=/apply \
|
||||
quay.io/poseidon/kubelet:v1.25.3
|
||||
quay.io/poseidon/kubelet:v1.30.0
|
||||
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
|
||||
ExecStartPost=-/usr/bin/podman stop bootstrap
|
||||
storage:
|
||||
@ -179,8 +179,8 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
|
||||
rm -rf assets auth static-manifests tls manifests-networking manifests
|
||||
chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
|
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.4.0
version: 1.5.0
systemd:
units:
- name: containerd.service
@@ -28,7 +28,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -82,19 +82,6 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enabled: true
contents: |
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /var/lib/kubelet:/var/lib/kubelet:ro,z --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
directories:
- path: /etc/kubernetes
@@ -120,6 +107,8 @@ storage:
clusterDomain: ${cluster_domain_suffix}
healthzPort: 0
rotateCertificates: true
shutdownGracePeriod: 45s
shutdownGracePeriodCriticalPods: 30s
staticPodPath: /etc/kubernetes/manifests
readOnlyPort: 0
resolvConf: /run/systemd/resolve/resolv.conf
@@ -71,6 +71,12 @@ variable "networking" {
default = "cilium"
}

variable "install_container_networking" {
type = bool
description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
default = true
}

variable "pod_cidr" {
type = string
description = "CIDR IPv4 range to assign Kubernetes pods"
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.25.3 (upstream)
* Kubernetes v1.30.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,13 +1,12 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=946d81be0909e470a0508912eb0a2d4315075310"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=d233e90754e68d258a60abf1087e11377bdc1e4b"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
etcd_servers = digitalocean_record.etcds.*.fqdn

networking = var.networking

networking = var.install_container_networking ? var.networking : "none"
# only effective with Calico networking
network_encapsulation = "vxlan"
network_mtu = "1450"
@@ -11,7 +11,7 @@ systemd:
         Requires=docker.service
         After=docker.service
         [Service]
-        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.5
+        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
         ExecStartPre=/usr/bin/docker run -d \
           --name etcd \
           --network host \
@@ -66,7 +66,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -117,7 +117,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
         ExecStart=/usr/bin/docker run \
           -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
           -v /opt/bootstrap/assets:/assets:ro \
@@ -182,8 +182,8 @@ storage:
           mv static-manifests/* /etc/kubernetes/manifests/
           mkdir -p /opt/bootstrap/assets
           mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/ 2>/dev/null || true
+          rm -rf assets auth static-manifests tls manifests-networking manifests
     - path: /opt/bootstrap/apply
       mode: 0544
       contents:

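The `2>/dev/null || true` guard added above lets the bootstrap layout script tolerate an absent `manifests-networking` directory, which is the expected state when no networking provider is rendered (for example, with `install_container_networking = false`), instead of aborting the oneshot bootstrap unit.
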
@@ -38,7 +38,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -80,19 +80,6 @@ systemd:
         RestartSec=5
         [Install]
         WantedBy=multi-user.target
-    - name: delete-node.service
-      enabled: true
-      contents: |
-        [Unit]
-        Description=Delete Kubernetes node on shutdown
-        [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.25.3
-        Type=oneshot
-        RemainAfterExit=true
-        ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/docker run -v /var/lib/kubelet:/var/lib/kubelet:ro --entrypoint /usr/local/bin/kubectl $${KUBELET_IMAGE} --kubeconfig=/var/lib/kubelet/kubeconfig delete node $HOSTNAME'
-        [Install]
-        WantedBy=multi-user.target
 storage:
   directories:
     - path: /etc/kubernetes
@@ -119,6 +106,8 @@ storage:
           clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
+          shutdownGracePeriod: 45s
+          shutdownGracePeriodCriticalPods: 30s
           staticPodPath: /etc/kubernetes/manifests
           readOnlyPort: 0
           resolvConf: /run/systemd/resolve/resolv.conf

@@ -71,6 +71,12 @@ variable "networking" {
   default     = "cilium"
 }

+variable "install_container_networking" {
+  type        = bool
+  description = "Install the chosen networking provider during cluster bootstrap (use false to self-manage)"
+  default     = true
+}
+
 variable "pod_cidr" {
   type        = string
   description = "CIDR IPv4 range to assign Kubernetes pods"

@@ -6,7 +6,7 @@ Declare a Zincati `fleet_lock` strategy when provisioning Fedora CoreOS nodes vi

 ```yaml
 variant: fcos
-version: 1.1.0
+version: 1.5.0
 storage:
   files:
     - path: /etc/zincati/config.d/55-update-strategy.toml

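For reference, a `fleet_lock` strategy file of the kind referenced above is a small TOML fragment like the following sketch (the `base_url` is an assumption; point it at your own fleetlock-style coordinator):

```toml
# /etc/zincati/config.d/55-update-strategy.toml (sketch)
[updates]
strategy = "fleet_lock"

[updates.fleet_lock]
# assumed address of an in-cluster update coordinator
base_url = "http://10.3.0.15/"
```
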
@@ -15,7 +15,7 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo

 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.0"

   # AWS
   cluster_name = "gravitas"
@@ -40,7 +40,7 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo

 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.0"

   # AWS
   cluster_name = "gravitas"
@@ -66,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
 ```
 $ kubectl get nodes -o wide
 NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
-ip-10-0-21-119   Ready    <none>   77s   v1.25.3   10.0.21.119   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
-ip-10-0-32-166   Ready    <none>   80s   v1.25.3   10.0.32.166   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
-ip-10-0-5-79     Ready    <none>   77s   v1.25.3   10.0.5.79     <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
+ip-10-0-21-119   Ready    <none>   77s   v1.30.0   10.0.21.119   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
+ip-10-0-32-166   Ready    <none>   80s   v1.30.0   10.0.32.166   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
+ip-10-0-5-79     Ready    <none>   77s   v1.30.0   10.0.5.79     <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
 ```

 ## Hybrid

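Workloads on all-arm64 or mixed-arch clusters can be pinned to an architecture via the well-known `kubernetes.io/arch` node label. A minimal sketch (the Deployment name and image are illustrative):

```yaml
# arch-pinning sketch; name and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64  # or amd64 on a hybrid cluster
      containers:
        - name: hello
          image: docker.io/nginx:latest  # assumed multi-arch image
```
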
@@ -79,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.0"

   # AWS
   cluster_name = "gravitas"
@@ -102,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.0"

   # AWS
   cluster_name = "gravitas"
@@ -125,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

 ```tf
 module "gravitas-arm64" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.0"

   # AWS
   vpc_id = module.gravitas.vpc_id
@@ -149,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

 ```tf
 module "gravitas-arm64" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.0"

   # AWS
   vpc_id = module.gravitas.vpc_id
@@ -174,10 +174,10 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
 ```
 $ kubectl get nodes -o wide
 NAME               STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                             KERNEL-VERSION           CONTAINER-RUNTIME
-ip-10-0-1-73       Ready    <none>   111m   v1.25.3   10.0.1.73     <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
-ip-10-0-22-79...   Ready    <none>   111m   v1.25.3   10.0.22.79    <none>        Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)   5.10.84-flatcar          containerd://1.5.8
-ip-10-0-24-130     Ready    <none>   111m   v1.25.3   10.0.24.130   <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
-ip-10-0-39-19      Ready    <none>   111m   v1.25.3   10.0.39.19    <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
+ip-10-0-1-73       Ready    <none>   111m   v1.30.0   10.0.1.73     <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
+ip-10-0-22-79...   Ready    <none>   111m   v1.30.0   10.0.22.79    <none>        Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)   5.10.84-flatcar          containerd://1.5.8
+ip-10-0-24-130     Ready    <none>   111m   v1.30.0   10.0.24.130   <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
+ip-10-0-39-19      Ready    <none>   111m   v1.30.0   10.0.39.19    <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
 ```

 ## Azure
@@ -186,7 +186,7 @@ Create a cluster on Azure with ARM64 controller and worker nodes. Container work

 ```tf
 module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.0"

   # Azure
   cluster_name = "ramius"

@@ -38,7 +38,7 @@ For example, ensure an `/opt/hello` file is created with permissions 0644 before
 ```yaml
 # custom-files.yaml
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 storage:
   files:
     - path: /opt/hello
@@ -70,7 +70,7 @@ Or ensure a systemd unit `hello.service` is created.
 ```yaml
 # custom-units.yaml
 variant: fcos
-version: 1.4.0
+version: 1.5.0
 systemd:
   units:
     - name: hello.service
@@ -164,7 +164,7 @@ To set an alternative etcd image or Kubelet image, use a snippet to set a system
 ```yaml
 # kubelet-image-override.yaml
 variant: fcos      <- remove for Flatcar Linux
-version: 1.4.0     <- remove for Flatcar Linux
+version: 1.5.0     <- remove for Flatcar Linux
 systemd:
   units:
     - name: kubelet.service
@@ -180,7 +180,7 @@ To set an alternative etcd image or Kubelet image, use a snippet to set a system
 ```yaml
 # etcd-image-override.yaml
 variant: fcos      <- remove for Flatcar Linux
-version: 1.4.0     <- remove for Flatcar Linux
+version: 1.5.0     <- remove for Flatcar Linux
 systemd:
   units:
     - name: etcd-member.service

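The image-override hunks above are truncated; a snippet of this kind typically continues with a systemd drop-in that overrides the unit's image variable. A sketch under that assumption (the drop-in file name is hypothetical):

```yaml
# kubelet-image-override.yaml (sketch of the full snippet)
variant: fcos      <- remove for Flatcar Linux
version: 1.5.0     <- remove for Flatcar Linux
systemd:
  units:
    - name: kubelet.service
      dropins:
        - name: 10-image-override.conf  # hypothetical drop-in name
          contents: |
            [Service]
            Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.0
```
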
@@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.0"

   # Google Cloud
   cluster_name = "yavin"
@@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

 ```tf
 module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.0"

   # Google Cloud
   cluster_name = "yavin"
@@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.0"

   # Google Cloud
   cluster_name = "yavin"
@@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

 ```tf
 module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.25.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.0"

   # Google Cloud
   cluster_name = "yavin"

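After provisioning, labels and taints applied this way can be verified with standard kubectl (a usage sketch):

```
$ kubectl get nodes --show-labels
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```
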
Some files were not shown because too many files have changed in this diff.