mirror of
https://github.com/puppetmaster/typhoon.git
synced 2025-08-02 19:01:34 +02:00
Compare commits
95 Commits
Author | SHA1 | Date | |
---|---|---|---|
f614c538cf | |||
3da8c1575c | |||
dedd17d085 | |||
e274a451ff | |||
b2e36947ab | |||
5af0a5c5b9 | |||
2265ab5375 | |||
08ea9776f3 | |||
2e8bc99164 | |||
b18b0a9f3d | |||
beb9f1477a | |||
f544a9c71f | |||
415b7fa19a | |||
d0c29099ba | |||
30e4070474 | |||
43f6a19060 | |||
50215e373b | |||
a9f9c59b91 | |||
6ed048eb65 | |||
ce7b2fa21f | |||
9e3807798f | |||
ef9c6aa423 | |||
bb5e5811ec | |||
16aa997604 | |||
fb6650b06b | |||
43c6558aaf | |||
125008fbb3 | |||
136107b448 | |||
e97c1cc9e5 | |||
39da5b53f5 | |||
41f739891b | |||
861021ee98 | |||
9d583ab377 | |||
c1d28e6f61 | |||
a8fd21d250 | |||
9c626c9dbd | |||
85252dec6e | |||
298ea65d3e | |||
c0ab15ba22 | |||
5d7b6f611e | |||
93594292eb | |||
0546608e77 | |||
94b2793e40 | |||
4fd43b39ad | |||
65083aca7d | |||
07db4c1143 | |||
e5d0ce5fd7 | |||
b934a13605 | |||
cd005a0b27 | |||
dd4a5a4e7e | |||
af835f976f | |||
9e4a369f76 | |||
831d897533 | |||
17dce49982 | |||
5744e10329 | |||
20748536df | |||
f2e6256dd9 | |||
443bd5a26b | |||
f8162b9be3 | |||
20ffbba4bf | |||
15117fb95b | |||
10af8b4120 | |||
e51b2903c1 | |||
cb72b261c7 | |||
209efd2f5b | |||
388b1238bc | |||
5a1e455220 | |||
69f37c8b17 | |||
b30de949b8 | |||
4973178750 | |||
bb7f31822e | |||
c6923b9ef3 | |||
dae79d5916 | |||
f4d5ac0ca7 | |||
7e1b2cdba1 | |||
3bb20ce083 | |||
eb29fb639b | |||
fcbdb50d93 | |||
efac611e9c | |||
87ff431b80 | |||
0d8ceae1d9 | |||
c5cf803634 | |||
61ee01f462 | |||
cbef202eec | |||
0c99b909a9 | |||
739db3b35f | |||
c68b035a63 | |||
1a5949824c | |||
9bac641511 | |||
37ff3c28eb | |||
f03045f0dc | |||
b603bbde3d | |||
810236f6df | |||
3c3d3a2473 | |||
1af9fd8094 |
175
CHANGES.md
175
CHANGES.md
@ -4,6 +4,175 @@ Notable changes between versions.
|
||||
|
||||
## Latest
|
||||
|
||||
* Kubernetes [v1.23.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1232)
|
||||
* Remove Kubelet flag `--network-plugin`. Unused since `docker-shim` isn't used ([#1106](https://github.com/poseidon/typhoon/pull/1106))
|
||||
|
||||
### Fedora CoreOS
|
||||
|
||||
* Switch Kubernetes Container Runtime from `docker` to `containerd` ([#1101](https://github.com/poseidon/typhoon/pull/1101))
|
||||
* Mask `docker.service` to prevent it from being socket activated ([#1105](https://github.com/poseidon/typhoon/pull/1105))
|
||||
|
||||
### Flatcar Linux
|
||||
|
||||
#### AWS
|
||||
|
||||
* Add experimental Flatcar Linux ARM64 support ([docs](https://typhoon.psdn.io/advanced/arm64/), [#1102](https://github.com/poseidon/typhoon/pull/1102))
|
||||
* Add `arch` variable to AWS `kubernetes` and `workers` modules
|
||||
* Allow arm64 full-cluster or mixed/hybrid cluster with arm64 workers
|
||||
* Requires `flannel` or `cilium` CNI provider
|
||||
|
||||
### DigitalOcean
|
||||
|
||||
* Upgrade DigitalOcean Terraform provider to [v2.x](https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs) ([#1109](https://github.com/poseidon/typhoon/pull/1109))
|
||||
|
||||
### Addons
|
||||
|
||||
* Update nginx-ingress from v1.1.0 to [v1.1.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.1.1)
|
||||
* Update Grafana from v8.3.3 to [v8.3.4](https://github.com/grafana/grafana/releases/tag/v8.3.4)
|
||||
|
||||
## v1.23.1
|
||||
|
||||
* Kubernetes [v1.23.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1231)
|
||||
* Workaround Terraform v1.1 regression in `file` provisioner ([#1093](https://github.com/poseidon/typhoon/pull/1093))
|
||||
|
||||
### Flatcar Linux
|
||||
|
||||
* Switch Kubernetes Container Runtime from `docker` to `containerd` ([#1087](https://github.com/poseidon/typhoon/pull/1087))
|
||||
|
||||
### Addons
|
||||
|
||||
* Configure Prometheus to allow a custom scrape query parameter ([#1095](https://github.com/poseidon/typhoon/pull/1095))
|
||||
* Configure Prometheus to probe Kubernetes Ingress via `blackbox-exporter` ([#1096](https://github.com/poseidon/typhoon/pull/1096))
|
||||
* Fix Prometheus Service probes to use `blackbox-exporter`, not `blackbox` ([#1096](https://github.com/poseidon/typhoon/pull/1096))
|
||||
|
||||
## v1.23.0
|
||||
|
||||
* Kubernetes [v1.23.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1230)
|
||||
* Normalize CA cert mounts in static Pods and kube-proxy ([#1078](https://github.com/poseidon/typhoon/pull/1078))
|
||||
* Set Kubelet resolver config to `/run/systemd/resolve/resolv.conf` ([#1082](https://github.com/poseidon/typhoon/pull/1082))
|
||||
* Update Cilium from v1.10.5 to [v1.11.0](https://github.com/cilium/cilium/releases/tag/v1.11.0) ([#1083](https://github.com/poseidon/typhoon/pull/1083))
|
||||
* With Calico, add missing `caliconodestatuses` CRD ([#289](https://github.com/poseidon/terraform-render-bootstrap/pull/289))
|
||||
* Change `enable_aggregation` default to true ([#279](https://github.com/poseidon/terraform-render-bootstrap/pull/279))
|
||||
* Remove deprecated `--port` from `kube-scheduler` ([#1078](https://github.com/poseidon/typhoon/pull/1078))
|
||||
|
||||
### AWS
|
||||
|
||||
* Change controller node default `disk_iops` to 3000 ([#1073](https://github.com/poseidon/typhoon/pull/1073))
|
||||
|
||||
### Azure
|
||||
|
||||
* Fix warning about deprecated `backend_address_pool_id` ([#1086](https://github.com/poseidon/typhoon/pull/1086))
|
||||
|
||||
### Fedora CoreOS
|
||||
|
||||
* Fix Fedora ARM64 workers to official Fedora CoreOS AMIs ([#1072](https://github.com/poseidon/typhoon/pull/1072))
|
||||
* Should have been changed alongside controller AMIs in ([#1038](https://github.com/poseidon/typhoon/pull/1038))
|
||||
* Old Poseidon built ARM64 AMIs have been deleted
|
||||
|
||||
### Addons
|
||||
|
||||
* Update nginx-ingress from v1.0.5 to [v1.1.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.1.0)
|
||||
* Update Prometheus from v2.31.1 to [v2.32.0](https://github.com/prometheus/prometheus/releases/tag/v2.32.0)
|
||||
* Update kube-state-metrics from v2.2.4 to [v2.3.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.3.0)
|
||||
* Update node-exporter from v1.3.0 to [v1.3.1](https://github.com/prometheus/node_exporter/releases/tag/v1.3.1)
|
||||
* Update Grafana from v8.2.4 to [v8.3.3](https://github.com/grafana/grafana/releases/tag/v8.3.3)
|
||||
|
||||
### Known Issues
|
||||
|
||||
* Calico does not yet support Kubernetes v1.23.0, use `flannel` or `cilium` ([calico#5011](https://github.com/projectcalico/calico/issues/5011))
|
||||
|
||||
## v1.22.4
|
||||
|
||||
* Kubernetes [v1.22.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1224)
|
||||
* Update CoreDNS from v1.8.4 to [v1.8.6](https://github.com/poseidon/terraform-render-bootstrap/pull/284)
|
||||
* Update Calico from v3.20.2 to [v3.21.0](https://github.com/projectcalico/calico/releases/tag/v3.21.0)
|
||||
* Update flannel from v0.14.0 to [v0.15.1](https://github.com/flannel-io/flannel/releases/tag/v0.15.1)
|
||||
|
||||
### Google
|
||||
|
||||
* Allow use of Terraform provider `google` [v4.0+](https://github.com/hashicorp/terraform-provider-google/releases/tag/v4.0.0)
|
||||
|
||||
### Flatcar Linux
|
||||
|
||||
* Change Kubelet mounts for cgroups v2 ([#1064](https://github.com/poseidon/typhoon/pull/1064))
|
||||
* Update cgroup driver from cgroupfs to systemd (Flatcar Linux changed default) ([#1064](https://github.com/poseidon/typhoon/pull/1064))
|
||||
|
||||
### Addons
|
||||
|
||||
* Update Prometheus from v2.30.3 to [v2.31.1](https://github.com/prometheus/prometheus/releases/tag/v2.31.1)
|
||||
* Update node-exporter from v1.2.2 to [v1.3.0](https://github.com/prometheus/node_exporter/releases/tag/v1.3.0)
|
||||
* Update kube-state-metrics from v2.2.3 to [v2.2.4](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.2.4)
|
||||
* Update Grafana from v8.2.1 to [v8.2.4](https://github.com/grafana/grafana/releases/tag/v8.2.4)
|
||||
* Update nginx-ingress from v1.0.4 to [v1.0.5](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.5)
|
||||
|
||||
## v1.23.3
|
||||
|
||||
* Kubernetes [v1.22.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1223)
|
||||
* Update etcd from v3.5.0 to [v3.5.1](https://github.com/etcd-io/etcd/releases/tag/v3.5.1)
|
||||
* Update Cilium from v1.10.4 to [v1.10.5](https://github.com/cilium/cilium/releases/tag/v1.10.5)
|
||||
* Update Calico from v3.20.1 to [v3.20.2](https://github.com/projectcalico/calico/releases/tag/v3.20.2)
|
||||
* Use Calico's iptables legacy vs nft auto-detection
|
||||
* Update flannel from v0.13.0 to v0.14.0
|
||||
|
||||
### Bare-Metal
|
||||
|
||||
* Require Terraform provider `poseidon/matchbox` v0.5+ ([#1048](https://github.com/poseidon/typhoon/pull/1048))
|
||||
|
||||
### Addons
|
||||
|
||||
* Update nginx-ingress from v1.0.0 to [v1.0.4](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.4)
|
||||
* Update Prometheus from v2.29.2 to [v2.30.3](https://github.com/prometheus/prometheus/releases/tag/v2.30.3)
|
||||
* Update kube-state-metrics from v2.2.0 to [v2.2.3](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.2.3)
|
||||
* Update Grafana from v8.1.2 to [v8.2.1](https://github.com/grafana/grafana/releases/tag/v8.2.1)
|
||||
|
||||
## v1.22.2
|
||||
|
||||
* Kubernetes [v1.22.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1222)
|
||||
* Update Cilium from v1.10.3 to [v1.10.4](https://github.com/cilium/cilium/releases/tag/v1.10.4)
|
||||
* Update Calico from v3.20.0 to [v3.20.1](https://github.com/projectcalico/calico/releases/tag/v3.20.1)
|
||||
* Fix access to ClusterIP services with Cilium ([#276](https://github.com/poseidon/terraform-render-bootstrap/pull/276))
|
||||
|
||||
### Fedora CoreOS
|
||||
|
||||
* Use Fedora CoreOS ARM64 AMIs ([#1038](https://github.com/poseidon/typhoon/pull/1038))
|
||||
|
||||
### Addons
|
||||
|
||||
* Update Prometheus from v2.29.1 to [v2.29.2](https://github.com/prometheus/prometheus/releases/tag/v2.29.2)
|
||||
* Update kube-state-metrics from v2.1.1 to [v2.2.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.2.0)
|
||||
|
||||
## v1.22.1
|
||||
|
||||
* Kubernetes [v1.22.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1221)
|
||||
* Update Calico from v3.19.1 to [v3.20.0](https://github.com/projectcalico/calico/releases/tag/v3.20.0)
|
||||
|
||||
### Addons
|
||||
|
||||
* Update nginx-ingress from v1.0.0-beta.1 to [v1.0.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.0)
|
||||
* Update Prometheus from v2.28.1 to [v2.29.1](https://github.com/prometheus/prometheus/releases/tag/v2.29.1)
|
||||
* Update Grafana from v8.1.1 to [v8.1.2](https://github.com/grafana/grafana/releases/tag/v8.1.2)
|
||||
|
||||
## v1.22.0
|
||||
|
||||
* Kubernetes [v1.22.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#v1220)
|
||||
* Update etcd from v3.4.16 to [v3.5.0](https://github.com/etcd-io/etcd/releases/tag/v3.5.0)
|
||||
* Switch `kube-controller-manager` and `kube-scheduler` to use secure port only
|
||||
* Update Prometheus config to discover endpoints and use a bearer token to scrape
|
||||
|
||||
### Fedora CoreOS
|
||||
|
||||
* Add Cilium cgroups v2 support on Fedora CoreOS
|
||||
* Update Butane Config version from v1.2.0 to v1.4.0
|
||||
* Rename Fedora CoreOS Config to Butane Config
|
||||
* Require any [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customizations to update to v1.4.0
|
||||
|
||||
### Addons
|
||||
|
||||
* Update nginx-ingress from v0.47.0 to [v1.0.0-beta.1](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.0-beta.1)
|
||||
* Update node-exporter from v1.2.0 to [v1.2.2](https://github.com/prometheus/node_exporter/releases/tag/v1.2.2)
|
||||
* Update kube-state-metrics from v2.1.0 to [v2.1.1](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.1.1)
|
||||
* Update Grafana from v8.0.6 to [v8.1.1](https://github.com/grafana/grafana/releases/tag/v8.1.1)
|
||||
|
||||
## v1.21.3
|
||||
|
||||
* Kubernetes [v1.21.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1213)
|
||||
@ -20,6 +189,10 @@ Notable changes between versions.
|
||||
* Update node-exporter from v1.1.2 to [v1.2.0](https://github.com/prometheus/node_exporter/releases/tag/v1.2.0)
|
||||
* Update Grafana from v8.0.3 to [v8.0.6](https://github.com/grafana/grafana/releases/tag/v8.0.6)
|
||||
|
||||
### Known Issues
|
||||
|
||||
* Cilium with recent Fedora CoreOS will have networking issues ([fedora-coreos#881](https://github.com/coreos/fedora-coreos-tracker/issues/881)) (fixed in v1.21.4)
|
||||
|
||||
## v1.21.2
|
||||
|
||||
* Kubernetes [v1.21.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#v1212)
|
||||
@ -53,7 +226,7 @@ Notable changes between versions.
|
||||
|
||||
### Known Issues
|
||||
|
||||
* Cilium with recent Fedora CoreOS will have networking issues ([fedora-coreos#881](https://github.com/coreos/fedora-coreos-tracker/issues/881))
|
||||
* Cilium with recent Fedora CoreOS will have networking issues ([fedora-coreos#881](https://github.com/coreos/fedora-coreos-tracker/issues/881)) (fixed in v1.21.4)
|
||||
|
||||
## v1.21.1
|
||||
|
||||
|
16
README.md
16
README.md
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.21.3 (upstream)
|
||||
* Kubernetes v1.23.2 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
@ -45,6 +45,10 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
|
||||
| DigitalOcean | Flatcar Linux | [digital-ocean/flatcar-linux/kubernetes](digital-ocean/flatcar-linux/kubernetes) | beta |
|
||||
| Google Cloud | Flatcar Linux | [google-cloud/flatcar-linux/kubernetes](google-cloud/flatcar-linux/kubernetes) | beta |
|
||||
|
||||
| Platform | Operating System | Terraform Module | Status |
|
||||
|---------------|------------------|------------------|--------|
|
||||
| AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
|
||||
|
||||
## Documentation
|
||||
|
||||
* [Docs](https://typhoon.psdn.io)
|
||||
@ -58,7 +62,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
|
||||
|
||||
```tf
|
||||
module "yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.21.3"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.2"
|
||||
|
||||
# Google Cloud
|
||||
cluster_name = "yavin"
|
||||
@ -67,7 +71,7 @@ module "yavin" {
|
||||
dns_zone_name = "example-zone"
|
||||
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
|
||||
|
||||
# optional
|
||||
worker_count = 2
|
||||
@ -97,9 +101,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
|
||||
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
|
||||
$ kubectl get nodes
|
||||
NAME ROLES STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.21.3
|
||||
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.21.3
|
||||
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.21.3
|
||||
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.23.2
|
||||
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.23.2
|
||||
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.23.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
@ -24,7 +24,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: grafana
|
||||
image: docker.io/grafana/grafana:8.0.6
|
||||
image: docker.io/grafana/grafana:8.3.4
|
||||
env:
|
||||
- name: GF_PATHS_CONFIG
|
||||
value: "/etc/grafana/custom.ini"
|
||||
|
@ -23,7 +23,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: nginx-ingress-controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --ingress-class=public
|
||||
|
@ -23,7 +23,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: nginx-ingress-controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --ingress-class=public
|
||||
|
@ -23,7 +23,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: nginx-ingress-controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --ingress-class=public
|
||||
|
@ -23,7 +23,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: nginx-ingress-controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --ingress-class=public
|
||||
|
@ -23,7 +23,7 @@ spec:
|
||||
type: RuntimeDefault
|
||||
containers:
|
||||
- name: nginx-ingress-controller
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
|
||||
image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --ingress-class=public
|
||||
|
@ -72,6 +72,48 @@ data:
|
||||
regex: apiserver_request_duration_seconds_count;.+
|
||||
action: drop
|
||||
|
||||
# Scrape config for kube-controller-manager endpoints.
|
||||
#
|
||||
# kube-controller-manager service endpoints can be discovered by using the
|
||||
# `endpoints` role and relabelling to only keep only endpoints associated with
|
||||
# kube-system/kube-controller-manager and the `https` port.
|
||||
- job_name: 'kube-controller-manager'
|
||||
kubernetes_sd_configs:
|
||||
- role: endpoints
|
||||
scheme: https
|
||||
tls_config:
|
||||
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
|
||||
insecure_skip_verify: true
|
||||
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
|
||||
relabel_configs:
|
||||
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
|
||||
action: keep
|
||||
regex: kube-system;kube-controller-manager;metrics
|
||||
- replacement: kube-controller-manager
|
||||
action: replace
|
||||
target_label: job
|
||||
|
||||
# Scrape config for kube-scheduler endpoints.
|
||||
#
|
||||
# kube-scheduler service endpoints can be discovered by using the `endpoints`
|
||||
# role and relabelling to only keep only endpoints associated with
|
||||
# kube-system/kube-scheduler and the `https` port.
|
||||
- job_name: 'kube-scheduler'
|
||||
kubernetes_sd_configs:
|
||||
- role: endpoints
|
||||
scheme: https
|
||||
tls_config:
|
||||
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
|
||||
insecure_skip_verify: true
|
||||
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
|
||||
relabel_configs:
|
||||
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
|
||||
action: keep
|
||||
regex: kube-system;kube-scheduler;metrics
|
||||
- replacement: kube-scheduler
|
||||
action: replace
|
||||
target_label: job
|
||||
|
||||
# Scrape config for node (i.e. kubelet) /metrics (e.g. 'kubelet_'). Explore
|
||||
# metrics from a node by scraping kubelet (127.0.0.1:10250/metrics).
|
||||
- job_name: 'kubelet'
|
||||
@ -133,6 +175,7 @@ data:
|
||||
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
|
||||
# * `prometheus.io/port`: If the metrics are exposed on a different port to the
|
||||
# service then set this appropriately.
|
||||
# * `prometheus.io/param`: Custom metrics query parameter, like "format=prometheus".
|
||||
- job_name: 'kubernetes-service-endpoints'
|
||||
kubernetes_sd_configs:
|
||||
- role: endpoints
|
||||
@ -155,6 +198,11 @@ data:
|
||||
target_label: __address__
|
||||
regex: ([^:]+)(?::\d+)?;(\d+)
|
||||
replacement: $1:$2
|
||||
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_param]
|
||||
action: replace
|
||||
target_label: __param_$1
|
||||
regex: ([^=]+)=(.*)
|
||||
replacement: $2
|
||||
- action: labelmap
|
||||
regex: __meta_kubernetes_service_label_(.+)
|
||||
- source_labels: [__meta_kubernetes_namespace]
|
||||
@ -172,38 +220,6 @@ data:
|
||||
action: drop
|
||||
regex: etcd_(debugging|disk|request|server).*
|
||||
|
||||
# Example scrape config for probing services via the Blackbox Exporter.
|
||||
#
|
||||
# The relabeling allows the actual service scrape endpoint to be configured
|
||||
# via the following annotations:
|
||||
#
|
||||
# * `prometheus.io/probe`: Only probe services that have a value of `true`
|
||||
- job_name: 'kubernetes-services'
|
||||
|
||||
metrics_path: /probe
|
||||
params:
|
||||
module: [http_2xx]
|
||||
|
||||
kubernetes_sd_configs:
|
||||
- role: service
|
||||
|
||||
relabel_configs:
|
||||
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
|
||||
action: keep
|
||||
regex: true
|
||||
- source_labels: [__address__]
|
||||
target_label: __param_target
|
||||
- target_label: __address__
|
||||
replacement: blackbox
|
||||
- source_labels: [__param_target]
|
||||
target_label: instance
|
||||
- action: labelmap
|
||||
regex: __meta_kubernetes_service_label_(.+)
|
||||
- source_labels: [__meta_kubernetes_namespace]
|
||||
target_label: namespace
|
||||
- source_labels: [__meta_kubernetes_service_name]
|
||||
target_label: job
|
||||
|
||||
# Example scrape config for pods
|
||||
#
|
||||
# The relabeling allows the actual pod scrape endpoint to be configured via the
|
||||
@ -240,6 +256,67 @@ data:
|
||||
action: replace
|
||||
target_label: kubernetes_pod_name
|
||||
|
||||
# Example scrape config for probing Services via the Blackbox Exporter.
|
||||
#
|
||||
# Relabeling allows service scraping to be configured via annotations:
|
||||
# * `prometheus.io/probe`: Only probe services that have a value of `true`
|
||||
- job_name: 'kubernetes-services'
|
||||
|
||||
metrics_path: /probe
|
||||
params:
|
||||
module: [http_2xx]
|
||||
|
||||
kubernetes_sd_configs:
|
||||
- role: service
|
||||
|
||||
relabel_configs:
|
||||
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
|
||||
action: keep
|
||||
regex: true
|
||||
- source_labels: [__address__]
|
||||
target_label: __param_target
|
||||
- target_label: __address__
|
||||
replacement: blackbox-exporter:8080
|
||||
- source_labels: [__param_target]
|
||||
target_label: instance
|
||||
- action: labelmap
|
||||
regex: __meta_kubernetes_service_label_(.+)
|
||||
- source_labels: [__meta_kubernetes_namespace]
|
||||
target_label: namespace
|
||||
- source_labels: [__meta_kubernetes_service_name]
|
||||
target_label: job
|
||||
|
||||
# Example scrape config for probing Ingresses via a Blackbox Exporter.
|
||||
#
|
||||
# Relabeling allows service scraping to be configured via annotations:
|
||||
# * `prometheus.io/probe`: Only probe ingresses that have a value of `true`
|
||||
- job_name: 'kubernetes-ingresses'
|
||||
metrics_path: /probe
|
||||
params:
|
||||
module: [http_2xx]
|
||||
|
||||
kubernetes_sd_configs:
|
||||
- role: ingress
|
||||
|
||||
relabel_configs:
|
||||
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
|
||||
action: keep
|
||||
regex: true
|
||||
- source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
|
||||
regex: (.+);(.+);(.+)
|
||||
replacement: ${1}://${2}${3}
|
||||
target_label: __param_target
|
||||
- target_label: __address__
|
||||
replacement: blackbox-exporter:8080
|
||||
- source_labels: [__param_target]
|
||||
target_label: instance
|
||||
- action: labelmap
|
||||
regex: __meta_kubernetes_ingress_label_(.+)
|
||||
- source_labels: [__meta_kubernetes_namespace]
|
||||
target_label: namespace
|
||||
- source_labels: [__meta_kubernetes_service_name]
|
||||
target_label: job
|
||||
|
||||
# Rule files
|
||||
rule_files:
|
||||
- "/etc/prometheus/rules/*.rules"
|
||||
|
@ -21,7 +21,7 @@ spec:
|
||||
serviceAccountName: prometheus
|
||||
containers:
|
||||
- name: prometheus
|
||||
image: quay.io/prometheus/prometheus:v2.28.1
|
||||
image: quay.io/prometheus/prometheus:v2.32.1
|
||||
args:
|
||||
- --web.listen-address=0.0.0.0:9090
|
||||
- --config.file=/etc/prometheus/prometheus.yaml
|
||||
|
@ -1,11 +1,9 @@
|
||||
# Allow Prometheus to scrape service endpoints
|
||||
# Allow Prometheus to discover service endpoints
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: kube-controller-manager
|
||||
namespace: kube-system
|
||||
annotations:
|
||||
prometheus.io/scrape: 'true'
|
||||
spec:
|
||||
type: ClusterIP
|
||||
clusterIP: None
|
||||
@ -14,5 +12,5 @@ spec:
|
||||
ports:
|
||||
- name: metrics
|
||||
protocol: TCP
|
||||
port: 10252
|
||||
targetPort: 10252
|
||||
port: 10257
|
||||
targetPort: 10257
|
||||
|
@ -1,11 +1,9 @@
|
||||
# Allow Prometheus to scrape service endpoints
|
||||
# Allow Prometheus to discover service endpoints
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: kube-scheduler
|
||||
namespace: kube-system
|
||||
annotations:
|
||||
prometheus.io/scrape: 'true'
|
||||
spec:
|
||||
type: ClusterIP
|
||||
clusterIP: None
|
||||
@ -14,5 +12,5 @@ spec:
|
||||
ports:
|
||||
- name: metrics
|
||||
protocol: TCP
|
||||
port: 10251
|
||||
targetPort: 10251
|
||||
port: 10259
|
||||
targetPort: 10259
|
||||
|
@ -25,7 +25,7 @@ spec:
|
||||
serviceAccountName: kube-state-metrics
|
||||
containers:
|
||||
- name: kube-state-metrics
|
||||
image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.0
|
||||
image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.3.0
|
||||
ports:
|
||||
- name: metrics
|
||||
containerPort: 8080
|
||||
|
@ -28,7 +28,7 @@ spec:
|
||||
hostPID: true
|
||||
containers:
|
||||
- name: node-exporter
|
||||
image: quay.io/prometheus/node-exporter:v1.2.0
|
||||
image: quay.io/prometheus/node-exporter:v1.3.1
|
||||
args:
|
||||
- --path.procfs=/host/proc
|
||||
- --path.sysfs=/host/sys
|
||||
|
@ -10,6 +10,17 @@ rules:
|
||||
- services
|
||||
- endpoints
|
||||
- pods
|
||||
verbs: ["get", "list", "watch"]
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- nonResourceURLs: ["/metrics"]
|
||||
verbs: ["get"]
|
||||
- apiGroups:
|
||||
- networking.k8s.io
|
||||
resources:
|
||||
- ingresses
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.21.3 (upstream)
|
||||
* Kubernetes v1.23.2 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,4 +1,3 @@
|
||||
|
||||
data "aws_ami" "fedora-coreos" {
|
||||
most_recent = true
|
||||
owners = ["125523088429"]
|
||||
@ -19,14 +18,11 @@ data "aws_ami" "fedora-coreos" {
|
||||
}
|
||||
}
|
||||
|
||||
# Experimental Fedora CoreOS arm64 / aarch64 AMIs from Poseidon
|
||||
# WARNING: These AMIs will be removed when Fedora CoreOS publishes arm64 AMIs
|
||||
# and may be removed for any reason before then as well. Do not use.
|
||||
data "aws_ami" "fedora-coreos-arm" {
|
||||
count = var.arch == "arm64" ? 1 : 0
|
||||
|
||||
most_recent = true
|
||||
owners = ["099663496933"]
|
||||
owners = ["125523088429"]
|
||||
|
||||
filter {
|
||||
name = "architecture"
|
||||
@ -39,8 +35,7 @@ data "aws_ami" "fedora-coreos-arm" {
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "name"
|
||||
values = ["fedora-coreos-*"]
|
||||
name = "description"
|
||||
values = ["Fedora CoreOS ${var.os_stream} *"]
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
@ -13,7 +13,5 @@ module "bootstrap" {
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
daemonset_tolerations = var.daemonset_tolerations
|
||||
|
||||
trusted_certs_dir = "/etc/pki/tls/certs"
|
||||
}
|
||||
|
||||
|
@ -62,7 +62,6 @@ data "template_file" "controller-configs" {
|
||||
|
||||
vars = {
|
||||
# Cannot use cyclic dependencies on controllers or their DNS records
|
||||
etcd_arch = var.arch == "arm64" ? "-arm64" : ""
|
||||
etcd_name = "etcd${count.index}"
|
||||
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
variant: fcos
|
||||
version: 1.2.0
|
||||
version: 1.4.0
|
||||
systemd:
|
||||
units:
|
||||
- name: etcd-member.service
|
||||
@ -12,7 +12,7 @@ systemd:
|
||||
Wants=network-online.target network.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16${etcd_arch}
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
|
||||
Type=exec
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/etcd
|
||||
ExecStartPre=-/usr/bin/podman rm etcd
|
||||
@ -29,8 +29,10 @@ systemd:
|
||||
LimitNOFILE=40000
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- name: docker.service
|
||||
- name: containerd.service
|
||||
enabled: true
|
||||
- name: docker.service
|
||||
mask: true
|
||||
- name: wait-for-dns.service
|
||||
enabled: true
|
||||
contents: |
|
||||
@ -54,7 +56,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -74,7 +76,7 @@ systemd:
|
||||
--volume /run:/run \
|
||||
--volume /sys/fs/cgroup:/sys/fs/cgroup \
|
||||
--volume /var/lib/calico:/var/lib/calico:ro \
|
||||
--volume /var/lib/docker:/var/lib/docker \
|
||||
--volume /var/lib/containerd:/var/lib/containerd \
|
||||
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
|
||||
--volume /var/log:/var/log \
|
||||
--volume /var/run/lock:/var/run/lock:z \
|
||||
@ -86,17 +88,19 @@ systemd:
|
||||
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--cgroup-driver=systemd \
|
||||
--cgroups-per-qos=true \
|
||||
--container-runtime=remote \
|
||||
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--enforce-node-allocatable=pods \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${cluster_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--healthz-port=0 \
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node.kubernetes.io/controller="true" \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
|
||||
--read-only-port=0 \
|
||||
--resolv-conf=/run/systemd/resolve/resolv.conf \
|
||||
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
|
||||
--rotate-certificates \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
|
||||
@ -122,7 +126,7 @@ systemd:
|
||||
--volume /opt/bootstrap/assets:/assets:ro,Z \
|
||||
--volume /opt/bootstrap/apply:/apply:ro,Z \
|
||||
--entrypoint=/apply \
|
||||
quay.io/poseidon/kubelet:v1.21.3
|
||||
quay.io/poseidon/kubelet:v1.23.2
|
||||
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
|
||||
ExecStartPost=-/usr/bin/podman stop bootstrap
|
||||
storage:
|
||||
@ -217,7 +221,26 @@ storage:
|
||||
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
|
||||
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
|
||||
ETCD_PEER_CLIENT_CERT_AUTH=true
|
||||
ETCD_UNSUPPORTED_ARCH=arm64
|
||||
- path: /etc/fedora-coreos/iptables-legacy.stamp
|
||||
- path: /etc/containerd/config.toml
|
||||
overwrite: true
|
||||
contents:
|
||||
inline: |
|
||||
version = 2
|
||||
root = "/var/lib/containerd"
|
||||
state = "/run/containerd"
|
||||
subreaper = true
|
||||
oom_score = -999
|
||||
[grpc]
|
||||
address = "/run/containerd/containerd.sock"
|
||||
uid = 0
|
||||
gid = 0
|
||||
[plugins."io.containerd.grpc.v1.cri"]
|
||||
enable_selinux = true
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
|
||||
runtime_type = "io.containerd.runc.v2"
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
|
||||
SystemdCgroup = true
|
||||
passwd:
|
||||
users:
|
||||
- name: core
|
||||
|
@ -201,8 +201,8 @@ resource "aws_security_group_rule" "controller-scheduler-metrics" {
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 10251
|
||||
to_port = 10251
|
||||
from_port = 10259
|
||||
to_port = 10259
|
||||
source_security_group_id = aws_security_group.worker.id
|
||||
}
|
||||
|
||||
@ -212,8 +212,8 @@ resource "aws_security_group_rule" "controller-manager-metrics" {
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 10252
|
||||
to_port = 10252
|
||||
from_port = 10257
|
||||
to_port = 10257
|
||||
source_security_group_id = aws_security_group.worker.id
|
||||
}
|
||||
|
||||
|
@ -24,7 +24,7 @@ resource "null_resource" "copy-controller-secrets" {
|
||||
|
||||
provisioner "file" {
|
||||
content = join("\n", local.assets_bundle)
|
||||
destination = "$HOME/assets"
|
||||
destination = "/home/core/assets"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
|
@ -66,8 +66,8 @@ variable "disk_type" {
|
||||
|
||||
variable "disk_iops" {
|
||||
type = number
|
||||
description = "IOPS of the EBS volume (e.g. 100)"
|
||||
default = 0
|
||||
description = "IOPS of the EBS volume (e.g. 3000)"
|
||||
default = 3000
|
||||
}
|
||||
|
||||
variable "worker_price" {
|
||||
@ -84,13 +84,13 @@ variable "worker_target_groups" {
|
||||
|
||||
variable "controller_snippets" {
|
||||
type = list(string)
|
||||
description = "Controller Fedora CoreOS Config snippets"
|
||||
description = "Controller Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "worker_snippets" {
|
||||
type = list(string)
|
||||
description = "Worker Fedora CoreOS Config snippets"
|
||||
description = "Worker Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
@ -142,8 +142,8 @@ variable "enable_reporting" {
|
||||
|
||||
variable "enable_aggregation" {
|
||||
type = bool
|
||||
description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
|
||||
default = false
|
||||
description = "Enable the Kubernetes Aggregation Layer"
|
||||
default = true
|
||||
}
|
||||
|
||||
variable "worker_node_labels" {
|
||||
|
@ -4,8 +4,8 @@ terraform {
|
||||
required_version = ">= 0.13.0, < 2.0.0"
|
||||
required_providers {
|
||||
aws = ">= 2.23, <= 4.0"
|
||||
template = "~> 2.1"
|
||||
null = "~> 2.1"
|
||||
template = "~> 2.2"
|
||||
null = ">= 2.1"
|
||||
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
|
@ -1,4 +1,3 @@
|
||||
|
||||
data "aws_ami" "fedora-coreos" {
|
||||
most_recent = true
|
||||
owners = ["125523088429"]
|
||||
@ -19,14 +18,11 @@ data "aws_ami" "fedora-coreos" {
|
||||
}
|
||||
}
|
||||
|
||||
# Experimental Fedora CoreOS arm64 / aarch64 AMIs from Poseidon
|
||||
# WARNING: These AMIs will be removed when Fedora CoreOS publishes arm64 AMIs
|
||||
# and may be removed for any reason before then as well. Do not use.
|
||||
data "aws_ami" "fedora-coreos-arm" {
|
||||
count = var.arch == "arm64" ? 1 : 0
|
||||
|
||||
most_recent = true
|
||||
owners = ["099663496933"]
|
||||
owners = ["125523088429"]
|
||||
|
||||
filter {
|
||||
name = "architecture"
|
||||
@ -39,8 +35,7 @@ data "aws_ami" "fedora-coreos-arm" {
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "name"
|
||||
values = ["fedora-coreos-*"]
|
||||
name = "description"
|
||||
values = ["Fedora CoreOS ${var.os_stream} *"]
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1,10 +1,12 @@
|
||||
---
|
||||
variant: fcos
|
||||
version: 1.2.0
|
||||
version: 1.4.0
|
||||
systemd:
|
||||
units:
|
||||
- name: docker.service
|
||||
- name: containerd.service
|
||||
enabled: true
|
||||
- name: docker.service
|
||||
mask: true
|
||||
- name: wait-for-dns.service
|
||||
enabled: true
|
||||
contents: |
|
||||
@ -27,7 +29,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -47,7 +49,7 @@ systemd:
|
||||
--volume /run:/run \
|
||||
--volume /sys/fs/cgroup:/sys/fs/cgroup \
|
||||
--volume /var/lib/calico:/var/lib/calico:ro \
|
||||
--volume /var/lib/docker:/var/lib/docker \
|
||||
--volume /var/lib/containerd:/var/lib/containerd \
|
||||
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
|
||||
--volume /var/log:/var/log \
|
||||
--volume /var/run/lock:/var/run/lock:z \
|
||||
@ -59,13 +61,14 @@ systemd:
|
||||
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--cgroup-driver=systemd \
|
||||
--cgroups-per-qos=true \
|
||||
--container-runtime=remote \
|
||||
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--enforce-node-allocatable=pods \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${cluster_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--healthz-port=0 \
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node.kubernetes.io/node \
|
||||
%{~ for label in split(",", node_labels) ~}
|
||||
--node-labels=${label} \
|
||||
@ -76,6 +79,7 @@ systemd:
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--provider-id=aws:///$${AFTERBURN_AWS_AVAILABILITY_ZONE}/$${AFTERBURN_AWS_INSTANCE_ID} \
|
||||
--read-only-port=0 \
|
||||
--resolv-conf=/run/systemd/resolve/resolv.conf \
|
||||
--rotate-certificates \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
|
||||
ExecStop=-/usr/bin/podman stop kubelet
|
||||
@ -90,7 +94,7 @@ systemd:
|
||||
[Unit]
|
||||
Description=Delete Kubernetes node on shutdown
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
ExecStart=/bin/true
|
||||
@ -129,9 +133,28 @@ storage:
|
||||
DefaultCPUAccounting=yes
|
||||
DefaultMemoryAccounting=yes
|
||||
DefaultBlockIOAccounting=yes
|
||||
- path: /etc/fedora-coreos/iptables-legacy.stamp
|
||||
- path: /etc/containerd/config.toml
|
||||
overwrite: true
|
||||
contents:
|
||||
inline: |
|
||||
version = 2
|
||||
root = "/var/lib/containerd"
|
||||
state = "/run/containerd"
|
||||
subreaper = true
|
||||
oom_score = -999
|
||||
[grpc]
|
||||
address = "/run/containerd/containerd.sock"
|
||||
uid = 0
|
||||
gid = 0
|
||||
[plugins."io.containerd.grpc.v1.cri"]
|
||||
enable_selinux = true
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
|
||||
runtime_type = "io.containerd.runc.v2"
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
|
||||
SystemdCgroup = true
|
||||
passwd:
|
||||
users:
|
||||
- name: core
|
||||
ssh_authorized_keys:
|
||||
- ${ssh_authorized_key}
|
||||
|
||||
|
@ -77,7 +77,7 @@ variable "target_groups" {
|
||||
|
||||
variable "snippets" {
|
||||
type = list(string)
|
||||
description = "Fedora CoreOS Config snippets"
|
||||
description = "Butane snippets"
|
||||
default = []
|
||||
}
|
||||
|
||||
|
@ -4,7 +4,7 @@ terraform {
|
||||
required_version = ">= 0.13.0, < 2.0.0"
|
||||
required_providers {
|
||||
aws = ">= 2.23, <= 4.0"
|
||||
template = "~> 2.1"
|
||||
template = "~> 2.2"
|
||||
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.21.3 (upstream)
|
||||
* Kubernetes v1.23.2 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,7 +1,7 @@
|
||||
locals {
|
||||
# Pick a Flatcar Linux AMI
|
||||
# flatcar-stable -> Flatcar Linux AMI
|
||||
ami_id = data.aws_ami.flatcar.image_id
|
||||
ami_id = var.arch == "arm64" ? data.aws_ami.flatcar-arm64[0].image_id : data.aws_ami.flatcar.image_id
|
||||
channel = split("-", var.os_image)[1]
|
||||
}
|
||||
|
||||
@ -25,3 +25,25 @@ data "aws_ami" "flatcar" {
|
||||
}
|
||||
}
|
||||
|
||||
data "aws_ami" "flatcar-arm64" {
|
||||
count = var.arch == "arm64" ? 1 : 0
|
||||
|
||||
most_recent = true
|
||||
owners = ["075585003325"]
|
||||
|
||||
filter {
|
||||
name = "architecture"
|
||||
values = ["arm64"]
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "virtualization-type"
|
||||
values = ["hvm"]
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "name"
|
||||
values = ["Flatcar-${local.channel}-*"]
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
|
@ -10,7 +10,7 @@ systemd:
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
|
||||
ExecStartPre=/usr/bin/docker run -d \
|
||||
--name etcd \
|
||||
--network host \
|
||||
@ -57,7 +57,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -76,10 +76,9 @@ systemd:
|
||||
-v /usr/lib/os-release:/etc/os-release:ro \
|
||||
-v /lib/modules:/lib/modules:ro \
|
||||
-v /run:/run \
|
||||
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
|
||||
-v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
|
||||
-v /sys/fs/cgroup:/sys/fs/cgroup \
|
||||
-v /var/lib/calico:/var/lib/calico:ro \
|
||||
-v /var/lib/docker:/var/lib/docker \
|
||||
-v /var/lib/containerd:/var/lib/containerd \
|
||||
-v /var/lib/kubelet:/var/lib/kubelet:rshared \
|
||||
-v /var/log:/var/log \
|
||||
-v /opt/cni/bin:/opt/cni/bin \
|
||||
@ -88,16 +87,19 @@ systemd:
|
||||
--authentication-token-webhook \
|
||||
--authorization-mode=Webhook \
|
||||
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--cgroup-driver=systemd \
|
||||
--container-runtime=remote \
|
||||
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${cluster_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--healthz-port=0 \
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node.kubernetes.io/controller="true" \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
|
||||
--read-only-port=0 \
|
||||
--resolv-conf=/run/systemd/resolve/resolv.conf \
|
||||
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
|
||||
--rotate-certificates \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
|
||||
@ -119,7 +121,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
ExecStart=/usr/bin/docker run \
|
||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||
-v /opt/bootstrap/assets:/assets:ro \
|
||||
|
@ -201,8 +201,8 @@ resource "aws_security_group_rule" "controller-scheduler-metrics" {
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 10251
|
||||
to_port = 10251
|
||||
from_port = 10259
|
||||
to_port = 10259
|
||||
source_security_group_id = aws_security_group.worker.id
|
||||
}
|
||||
|
||||
@ -212,8 +212,8 @@ resource "aws_security_group_rule" "controller-manager-metrics" {
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 10252
|
||||
to_port = 10252
|
||||
from_port = 10257
|
||||
to_port = 10257
|
||||
source_security_group_id = aws_security_group.worker.id
|
||||
}
|
||||
|
||||
|
@ -24,7 +24,7 @@ resource "null_resource" "copy-controller-secrets" {
|
||||
|
||||
provisioner "file" {
|
||||
content = join("\n", local.assets_bundle)
|
||||
destination = "$HOME/assets"
|
||||
destination = "/home/core/assets"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
|
@ -66,8 +66,8 @@ variable "disk_type" {
|
||||
|
||||
variable "disk_iops" {
|
||||
type = number
|
||||
description = "IOPS of the EBS volume (e.g. 100)"
|
||||
default = 0
|
||||
description = "IOPS of the EBS volume (e.g. 3000)"
|
||||
default = 3000
|
||||
}
|
||||
|
||||
variable "worker_price" {
|
||||
@ -142,8 +142,8 @@ variable "enable_reporting" {
|
||||
|
||||
variable "enable_aggregation" {
|
||||
type = bool
|
||||
description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
|
||||
default = false
|
||||
description = "Enable the Kubernetes Aggregation Layer"
|
||||
default = true
|
||||
}
|
||||
|
||||
variable "worker_node_labels" {
|
||||
@ -160,6 +160,17 @@ variable "cluster_domain_suffix" {
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "arch" {
|
||||
type = string
|
||||
description = "Container architecture (amd64 or arm64)"
|
||||
default = "amd64"
|
||||
|
||||
validation {
|
||||
condition = var.arch == "amd64" || var.arch == "arm64"
|
||||
error_message = "The arch must be amd64 or arm64."
|
||||
}
|
||||
}
|
||||
|
||||
variable "daemonset_tolerations" {
|
||||
type = list(string)
|
||||
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
|
||||
|
@ -4,8 +4,8 @@ terraform {
|
||||
required_version = ">= 0.13.0, < 2.0.0"
|
||||
required_providers {
|
||||
aws = ">= 2.23, <= 4.0"
|
||||
template = "~> 2.1"
|
||||
null = "~> 2.1"
|
||||
template = "~> 2.2"
|
||||
null = ">= 2.1"
|
||||
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
|
@ -9,6 +9,7 @@ module "workers" {
|
||||
worker_count = var.worker_count
|
||||
instance_type = var.worker_type
|
||||
os_image = var.os_image
|
||||
arch = var.arch
|
||||
disk_size = var.disk_size
|
||||
spot_price = var.worker_price
|
||||
target_groups = var.worker_target_groups
|
||||
|
@ -1,7 +1,7 @@
|
||||
locals {
|
||||
# Pick a Flatcar Linux AMI
|
||||
# flatcar-stable -> Flatcar Linux AMI
|
||||
ami_id = data.aws_ami.flatcar.image_id
|
||||
ami_id = var.arch == "arm64" ? data.aws_ami.flatcar-arm64[0].image_id : data.aws_ami.flatcar.image_id
|
||||
channel = split("-", var.os_image)[1]
|
||||
}
|
||||
|
||||
@ -25,3 +25,24 @@ data "aws_ami" "flatcar" {
|
||||
}
|
||||
}
|
||||
|
||||
data "aws_ami" "flatcar-arm64" {
|
||||
count = var.arch == "arm64" ? 1 : 0
|
||||
|
||||
most_recent = true
|
||||
owners = ["075585003325"]
|
||||
|
||||
filter {
|
||||
name = "architecture"
|
||||
values = ["arm64"]
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "virtualization-type"
|
||||
values = ["hvm"]
|
||||
}
|
||||
|
||||
filter {
|
||||
name = "name"
|
||||
values = ["Flatcar-${local.channel}-*"]
|
||||
}
|
||||
}
|
||||
|
@ -29,7 +29,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -51,10 +51,9 @@ systemd:
|
||||
-v /usr/lib/os-release:/etc/os-release:ro \
|
||||
-v /lib/modules:/lib/modules:ro \
|
||||
-v /run:/run \
|
||||
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
|
||||
-v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
|
||||
-v /sys/fs/cgroup:/sys/fs/cgroup \
|
||||
-v /var/lib/calico:/var/lib/calico:ro \
|
||||
-v /var/lib/docker:/var/lib/docker \
|
||||
-v /var/lib/containerd:/var/lib/containerd \
|
||||
-v /var/lib/kubelet:/var/lib/kubelet:rshared \
|
||||
-v /var/log:/var/log \
|
||||
-v /opt/cni/bin:/opt/cni/bin \
|
||||
@ -63,12 +62,14 @@ systemd:
|
||||
--authentication-token-webhook \
|
||||
--authorization-mode=Webhook \
|
||||
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--cgroup-driver=systemd \
|
||||
--container-runtime=remote \
|
||||
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${cluster_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--healthz-port=0 \
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node.kubernetes.io/node \
|
||||
%{~ for label in split(",", node_labels) ~}
|
||||
--node-labels=${label} \
|
||||
@ -79,6 +80,7 @@ systemd:
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--provider-id=aws:///$${COREOS_EC2_AVAILABILITY_ZONE}/$${COREOS_EC2_INSTANCE_ID} \
|
||||
--read-only-port=0 \
|
||||
--resolv-conf=/run/systemd/resolve/resolv.conf \
|
||||
--rotate-certificates \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
|
||||
ExecStart=docker logs -f kubelet
|
||||
@ -94,7 +96,7 @@ systemd:
|
||||
[Unit]
|
||||
Description=Delete Kubernetes node on shutdown
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
ExecStart=/bin/true
|
||||
|
@ -119,3 +119,16 @@ variable "node_taints" {
|
||||
description = "List of initial node taints"
|
||||
default = []
|
||||
}
|
||||
|
||||
# unofficial, undocumented, unsupported
|
||||
|
||||
variable "arch" {
|
||||
type = string
|
||||
description = "Container architecture (amd64 or arm64)"
|
||||
default = "amd64"
|
||||
|
||||
validation {
|
||||
condition = var.arch == "amd64" || var.arch == "arm64"
|
||||
error_message = "The arch must be amd64 or arm64."
|
||||
}
|
||||
}
|
||||
|
@ -4,7 +4,7 @@ terraform {
|
||||
required_version = ">= 0.13.0, < 2.0.0"
|
||||
required_providers {
|
||||
aws = ">= 2.23, <= 4.0"
|
||||
template = "~> 2.1"
|
||||
template = "~> 2.2"
|
||||
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.21.3 (upstream)
|
||||
* Kubernetes v1.23.2 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
@ -19,8 +19,5 @@ module "bootstrap" {
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
daemonset_tolerations = var.daemonset_tolerations
|
||||
|
||||
# Fedora CoreOS
|
||||
trusted_certs_dir = "/etc/pki/tls/certs"
|
||||
}
|
||||
|
||||
|
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: etcd-member.service
@@ -12,7 +12,7 @@ systemd:
        Wants=network-online.target network.target
        After=network-online.target
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
@@ -29,8 +29,10 @@ systemd:
        LimitNOFILE=40000
        [Install]
        WantedBy=multi-user.target
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -51,7 +53,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -70,7 +72,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -82,16 +84,18 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -117,7 +121,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.21.3
          quay.io/poseidon/kubelet:v1.23.2
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -212,6 +216,26 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
passwd:
  users:
    - name: core
@@ -59,11 +59,11 @@ resource "azurerm_lb_rule" "apiserver" {
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "apiserver"

  protocol                = "Tcp"
  frontend_port           = 6443
  backend_port            = 6443
  backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
  probe_id                = azurerm_lb_probe.apiserver.id
  protocol                 = "Tcp"
  frontend_port            = 6443
  backend_port             = 6443
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.controller.id]
  probe_id                 = azurerm_lb_probe.apiserver.id
}

resource "azurerm_lb_rule" "ingress-http" {
@@ -74,11 +74,11 @@ resource "azurerm_lb_rule" "ingress-http" {
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 80
  backend_port            = 80
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
  protocol                 = "Tcp"
  frontend_port            = 80
  backend_port             = 80
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
  probe_id                 = azurerm_lb_probe.ingress.id
}

resource "azurerm_lb_rule" "ingress-https" {
@@ -89,11 +89,11 @@ resource "azurerm_lb_rule" "ingress-https" {
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 443
  backend_port            = 443
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
  protocol                 = "Tcp"
  frontend_port            = 443
  backend_port             = 443
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
  probe_id                 = azurerm_lb_probe.ingress.id
}

# Worker outbound TCP/UDP SNAT

@@ -95,7 +95,7 @@ resource "azurerm_network_security_rule" "controller-kube-metrics" {
  direction                  = "Inbound"
  protocol                   = "Tcp"
  source_port_range          = "*"
  destination_port_range     = "10251-10252"
  destination_port_range     = "10257-10259"
  source_address_prefix      = azurerm_subnet.worker.address_prefix
  destination_address_prefix = azurerm_subnet.controller.address_prefix
}

@@ -25,7 +25,7 @@ resource "null_resource" "copy-controller-secrets" {

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
    destination = "/home/core/assets"
  }

  provisioner "remote-exec" {

@@ -65,13 +65,13 @@ variable "worker_priority" {

variable "controller_snippets" {
  type        = list(string)
  description = "Controller Fedora CoreOS Config snippets"
  description = "Controller Butane snippets"
  default     = []
}

variable "worker_snippets" {
  type        = list(string)
  description = "Worker Fedora CoreOS Config snippets"
  description = "Worker Butane snippets"
  default     = []
}

@@ -117,8 +117,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true
}

variable "worker_node_labels" {

@@ -4,8 +4,8 @@ terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    azurerm  = "~> 2.8"
    template = "~> 2.1"
    null     = "~> 2.1"
    template = "~> 2.2"
    null     = ">= 2.1"

    ct = {
      source = "poseidon/ct"
@@ -1,10 +1,12 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -24,7 +26,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -43,7 +45,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -55,13 +57,14 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
          --node-labels=${label} \
@@ -71,6 +74,7 @@ systemd:
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
@@ -85,7 +89,7 @@ systemd:
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
@@ -124,10 +128,29 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ${ssh_authorized_key}

@@ -57,7 +57,7 @@ variable "priority" {

variable "snippets" {
  type        = list(string)
  description = "Fedora CoreOS Config snippets"
  description = "Butane snippets"
  default     = []
}
@@ -4,7 +4,7 @@ terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    azurerm  = "~> 2.8"
    template = "~> 2.1"
    template = "~> 2.2"

    ct = {
      source = "poseidon/ct"

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.21.3 (upstream)
* Kubernetes v1.23.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -10,7 +10,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -55,7 +55,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -73,10 +73,9 @@ systemd:
          -v /usr/lib/os-release:/etc/os-release:ro \
          -v /lib/modules:/lib/modules:ro \
          -v /run:/run \
          -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
          -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          -v /sys/fs/cgroup:/sys/fs/cgroup \
          -v /var/lib/calico:/var/lib/calico:ro \
          -v /var/lib/docker:/var/lib/docker \
          -v /var/lib/containerd:/var/lib/containerd \
          -v /var/lib/kubelet:/var/lib/kubelet:rshared \
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
@@ -85,15 +84,18 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -115,7 +117,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -59,11 +59,11 @@ resource "azurerm_lb_rule" "apiserver" {
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "apiserver"

  protocol                = "Tcp"
  frontend_port           = 6443
  backend_port            = 6443
  backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
  probe_id                = azurerm_lb_probe.apiserver.id
  protocol                 = "Tcp"
  frontend_port            = 6443
  backend_port             = 6443
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.controller.id]
  probe_id                 = azurerm_lb_probe.apiserver.id
}

resource "azurerm_lb_rule" "ingress-http" {
@@ -74,11 +74,11 @@ resource "azurerm_lb_rule" "ingress-http" {
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 80
  backend_port            = 80
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
  protocol                 = "Tcp"
  frontend_port            = 80
  backend_port             = 80
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
  probe_id                 = azurerm_lb_probe.ingress.id
}

resource "azurerm_lb_rule" "ingress-https" {
@@ -89,11 +89,11 @@ resource "azurerm_lb_rule" "ingress-https" {
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 443
  backend_port            = 443
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
  protocol                 = "Tcp"
  frontend_port            = 443
  backend_port             = 443
  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
  probe_id                 = azurerm_lb_probe.ingress.id
}

# Worker outbound TCP/UDP SNAT

@@ -95,7 +95,7 @@ resource "azurerm_network_security_rule" "controller-kube-metrics" {
  direction                  = "Inbound"
  protocol                   = "Tcp"
  source_port_range          = "*"
  destination_port_range     = "10251-10252"
  destination_port_range     = "10257-10259"
  source_address_prefix      = azurerm_subnet.worker.address_prefix
  destination_address_prefix = azurerm_subnet.controller.address_prefix
}

@@ -25,7 +25,7 @@ resource "null_resource" "copy-controller-secrets" {

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
    destination = "/home/core/assets"
  }

  provisioner "remote-exec" {

@@ -123,8 +123,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true
}

variable "worker_node_labels" {

@@ -4,8 +4,8 @@ terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    azurerm  = "~> 2.8"
    template = "~> 2.1"
    null     = "~> 2.1"
    template = "~> 2.2"
    null     = ">= 2.1"

    ct = {
      source = "poseidon/ct"
@@ -27,7 +27,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -48,10 +48,9 @@ systemd:
          -v /usr/lib/os-release:/etc/os-release:ro \
          -v /lib/modules:/lib/modules:ro \
          -v /run:/run \
          -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
          -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          -v /sys/fs/cgroup:/sys/fs/cgroup \
          -v /var/lib/calico:/var/lib/calico:ro \
          -v /var/lib/docker:/var/lib/docker \
          -v /var/lib/containerd:/var/lib/containerd \
          -v /var/lib/kubelet:/var/lib/kubelet:rshared \
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
@@ -60,12 +59,14 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
          --node-labels=${label} \
@@ -75,6 +76,7 @@ systemd:
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStart=docker logs -f kubelet
@@ -90,7 +92,7 @@ systemd:
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true

@@ -4,7 +4,7 @@ terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    azurerm  = "~> 2.8"
    template = "~> 2.1"
    template = "~> 2.2"

    ct = {
      source = "poseidon/ct"

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.21.3 (upstream)
* Kubernetes v1.23.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"

  cluster_name = var.cluster_name
  api_servers  = [var.k8s_domain_name]
@@ -13,8 +13,6 @@ module "bootstrap" {
  cluster_domain_suffix = var.cluster_domain_suffix
  enable_reporting      = var.enable_reporting
  enable_aggregation    = var.enable_aggregation

  trusted_certs_dir = "/etc/pki/tls/certs"
}
@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: etcd-member.service
@@ -12,7 +12,7 @@ systemd:
        Wants=network-online.target network.target
        After=network-online.target
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
@@ -29,8 +29,10 @@ systemd:
        LimitNOFILE=40000
        [Install]
        WantedBy=multi-user.target
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -50,7 +52,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -69,7 +71,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -81,6 +83,8 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
@@ -88,10 +92,10 @@ systemd:
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -119,7 +123,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=-/usr/bin/podman rm bootstrap
        ExecStart=/usr/bin/podman run --name bootstrap \
          --network host \
@@ -222,6 +226,26 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
passwd:
  users:
    - name: core
@@ -1,10 +1,12 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -23,7 +25,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -42,7 +44,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -54,6 +56,8 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
@@ -61,7 +65,6 @@ systemd:
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in compact(split(",", node_labels)) ~}
          --node-labels=${label} \
@@ -71,6 +74,7 @@ systemd:
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
@@ -120,6 +124,26 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
passwd:
  users:
    - name: core
@@ -44,7 +44,7 @@ resource "matchbox_profile" "controllers" {

  kernel = local.kernel
  initrd = local.initrd
  args   = concat(local.args, var.kernel_args)
  args   = concat(local.args, var.kernel_args)

  raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
}
@@ -78,7 +78,7 @@ resource "matchbox_profile" "workers" {

  kernel = local.kernel
  initrd = local.initrd
  args   = concat(local.args, var.kernel_args)
  args   = concat(local.args, var.kernel_args)

  raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
}

@@ -28,17 +28,17 @@ resource "null_resource" "copy-controller-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
    destination = "/home/core/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
      "sudo /opt/bootstrap/layout",
    ]
@@ -65,12 +65,12 @@ resource "null_resource" "copy-worker-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
    ]
  }

@@ -57,7 +57,7 @@ EOD

variable "snippets" {
  type        = map(list(string))
  description = "Map from machine names to lists of Fedora CoreOS Config snippets"
  description = "Map from machine names to lists of Butane snippets"
  default     = {}
}

@@ -146,8 +146,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true
}

# unofficial, undocumented, unsupported

@@ -3,8 +3,8 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    template = "~> 2.1"
    null     = "~> 2.1"
    template = "~> 2.2"
    null     = ">= 2.1"

    ct = {
      source = "poseidon/ct"
@@ -13,7 +13,7 @@ terraform {

    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.4.1"
      version = "~> 0.5.0"
    }
  }
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.21.3 (upstream)
* Kubernetes v1.23.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"

  cluster_name = var.cluster_name
  api_servers  = [var.k8s_domain_name]
@@ -10,7 +10,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -63,7 +63,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -81,10 +81,9 @@ systemd:
          -v /usr/lib/os-release:/etc/os-release:ro \
          -v /lib/modules:/lib/modules:ro \
          -v /run:/run \
          -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
          -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          -v /sys/fs/cgroup:/sys/fs/cgroup \
          -v /var/lib/calico:/var/lib/calico:ro \
          -v /var/lib/docker:/var/lib/docker \
          -v /var/lib/containerd:/var/lib/containerd \
          -v /var/lib/kubelet:/var/lib/kubelet:rshared \
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
@@ -93,16 +92,19 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -124,7 +126,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -35,7 +35,7 @@ systemd:
        After=docker.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -56,10 +56,9 @@ systemd:
          -v /usr/lib/os-release:/etc/os-release:ro \
          -v /lib/modules:/lib/modules:ro \
          -v /run:/run \
          -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
          -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          -v /sys/fs/cgroup:/sys/fs/cgroup \
          -v /var/lib/calico:/var/lib/calico:ro \
          -v /var/lib/docker:/var/lib/docker \
          -v /var/lib/containerd:/var/lib/containerd \
          -v /var/lib/kubelet:/var/lib/kubelet:rshared \
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
@@ -68,13 +67,15 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=${domain_name} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in compact(split(",", node_labels)) ~}
          --node-labels=${label} \
@@ -84,6 +85,7 @@ systemd:
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStart=docker logs -f kubelet

@@ -29,17 +29,17 @@ resource "null_resource" "copy-controller-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
    destination = "/home/core/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo /opt/bootstrap/layout",
    ]
  }
@@ -66,12 +66,12 @@ resource "null_resource" "copy-worker-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
    ]
  }
}
@@ -151,8 +151,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true
}

# unofficial, undocumented, unsupported

@@ -3,8 +3,8 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    template = "~> 2.1"
    null     = "~> 2.1"
    template = "~> 2.2"
    null     = ">= 2.1"

    ct = {
      source = "poseidon/ct"
@@ -13,7 +13,7 @@ terraform {

    matchbox = {
      source  = "poseidon/matchbox"
      version = "~> 0.4.1"
      version = "~> 0.5.0"
    }
  }
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.21.3 (upstream)
* Kubernetes v1.23.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -17,8 +17,5 @@ module "bootstrap" {
  cluster_domain_suffix = var.cluster_domain_suffix
  enable_reporting      = var.enable_reporting
  enable_aggregation    = var.enable_aggregation

  # Fedora CoreOS
  trusted_certs_dir = "/etc/pki/tls/certs"
}
@@ -41,7 +41,6 @@ resource "digitalocean_droplet" "controllers" {
  size = var.controller_type

  # network
  private_networking = true
  vpc_uuid = digitalocean_vpc.network.id
  # TODO: Only official DigitalOcean images support IPv6
  ipv6 = false

@@ -1,6 +1,6 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: etcd-member.service
@@ -12,7 +12,7 @@ systemd:
        Wants=network-online.target network.target
        After=network-online.target
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        Type=exec
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
@@ -29,8 +29,10 @@ systemd:
        LimitNOFILE=40000
        [Install]
        WantedBy=multi-user.target
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -52,7 +54,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -72,7 +74,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -84,6 +86,8 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
@@ -91,10 +95,10 @@ systemd:
          --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -129,7 +133,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.21.3
          quay.io/poseidon/kubelet:v1.23.2
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -219,3 +223,24 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
@@ -1,10 +1,12 @@
---
variant: fcos
version: 1.2.0
version: 1.4.0
systemd:
  units:
    - name: docker.service
    - name: containerd.service
      enabled: true
    - name: docker.service
      mask: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
@@ -26,7 +28,7 @@ systemd:
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -46,7 +48,7 @@ systemd:
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/containerd:/var/lib/containerd \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
@@ -58,6 +60,8 @@ systemd:
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
@@ -65,10 +69,10 @@ systemd:
          --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
@@ -92,7 +96,7 @@ systemd:
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
@@ -126,3 +130,23 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/fedora-coreos/iptables-legacy.stamp
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
        inline: |
          version = 2
          root = "/var/lib/containerd"
          state = "/run/containerd"
          subreaper = true
          oom_score = -999
          [grpc]
          address = "/run/containerd/containerd.sock"
          uid = 0
          gid = 0
          [plugins."io.containerd.grpc.v1.cri"]
            enable_selinux = true
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
@@ -116,7 +116,7 @@ resource "digitalocean_firewall" "controllers" {
  # kube-scheduler metrics, kube-controller-manager metrics
  inbound_rule {
    protocol    = "tcp"
    port_range  = "10251-10252"
    port_range  = "10257-10259"
    source_tags = [digitalocean_tag.workers.name]
  }
}

@@ -25,17 +25,17 @@ resource "null_resource" "copy-controller-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
    destination = "/home/core/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
      "sudo /opt/bootstrap/layout",
    ]
@@ -55,12 +55,12 @@ resource "null_resource" "copy-worker-secrets" {

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
    destination = "/home/core/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo touch /etc/kubernetes",
    ]
  }

@@ -48,13 +48,13 @@ variable "os_image" {

variable "controller_snippets" {
  type        = list(string)
  description = "Controller Fedora CoreOS Config snippets"
  description = "Controller Butane snippets"
  default     = []
}

variable "worker_snippets" {
  type        = list(string)
  description = "Worker Fedora CoreOS Config snippets"
  description = "Worker Butane snippets"
  default     = []
}

@@ -94,8 +94,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
  description = "Enable the Kubernetes Aggregation Layer"
  default     = true
}

# unofficial, undocumented, unsupported

@@ -3,8 +3,8 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    template = "~> 2.1"
    null     = "~> 2.1"
    template = "~> 2.2"
    null     = ">= 2.1"

    ct = {
      source = "poseidon/ct"
@@ -13,7 +13,7 @@ terraform {

    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 1.20"
      version = ">= 2.12, < 3.0"
    }
  }
}

@@ -37,7 +37,6 @@ resource "digitalocean_droplet" "workers" {
  size = var.worker_type

  # network
  private_networking = true
  vpc_uuid = digitalocean_vpc.network.id
  # TODO: Only official DigitalOcean images support IPv6
  ipv6 = false
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.21.3 (upstream)
* Kubernetes v1.23.2 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=5746f9c221fb779def042c81ea827fed1b844f1d"
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f45deec67e2fea4f06b5a3edad628b0fe0e9ec60"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -10,7 +10,7 @@ systemd:
        Requires=docker.service
        After=docker.service
        [Service]
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.4.16
        Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.1
        ExecStartPre=/usr/bin/docker run -d \
          --name etcd \
          --network host \
@@ -65,7 +65,7 @@ systemd:
        After=coreos-metadata.service
        Wants=rpc-statd.service
        [Service]
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        EnvironmentFile=/run/metadata/coreos
        ExecStartPre=/bin/mkdir -p /etc/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -84,10 +84,9 @@ systemd:
          -v /usr/lib/os-release:/etc/os-release:ro \
          -v /lib/modules:/lib/modules:ro \
          -v /run:/run \
          -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
          -v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          -v /sys/fs/cgroup:/sys/fs/cgroup \
          -v /var/lib/calico:/var/lib/calico:ro \
          -v /var/lib/docker:/var/lib/docker \
          -v /var/lib/containerd:/var/lib/containerd \
          -v /var/lib/kubelet:/var/lib/kubelet:rshared \
          -v /var/log:/var/log \
          -v /opt/cni/bin:/opt/cni/bin \
@@ -96,16 +95,19 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
          --cgroup-driver=systemd \
          --container-runtime=remote \
          --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --healthz-port=0 \
          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --resolv-conf=/run/systemd/resolve/resolv.conf \
          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -127,7 +129,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
        ExecStart=/usr/bin/docker run \
          -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
          -v /opt/bootstrap/assets:/assets:ro \
@@ -37,7 +37,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -59,10 +59,9 @@ systemd:
-v /usr/lib/os-release:/etc/os-release:ro \
-v /lib/modules:/lib/modules:ro \
-v /run:/run \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
-v /sys/fs/cgroup:/sys/fs/cgroup \
-v /var/lib/calico:/var/lib/calico:ro \
-v /var/lib/docker:/var/lib/docker \
-v /var/lib/containerd:/var/lib/containerd \
-v /var/lib/kubelet:/var/lib/kubelet:rshared \
-v /var/log:/var/log \
-v /opt/cni/bin:/opt/cni/bin \

@@ -71,16 +70,19 @@ systemd:
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=systemd \
--container-runtime=remote \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStart=docker logs -f kubelet

@@ -96,7 +98,7 @@ systemd:
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.21.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.2
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
@@ -46,7 +46,6 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

@@ -116,7 +116,7 @@ resource "digitalocean_firewall" "controllers" {
# kube-scheduler metrics, kube-controller-manager metrics
inbound_rule {
protocol = "tcp"
port_range = "10251-10252"
port_range = "10257-10259"
source_tags = [digitalocean_tag.workers.name]
}
}
@@ -25,17 +25,17 @@ resource "null_resource" "copy-controller-secrets" {

provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
destination = "/home/core/kubeconfig"
}

provisioner "file" {
content = join("\n", local.assets_bundle)
destination = "$HOME/assets"
destination = "/home/core/assets"
}

provisioner "remote-exec" {
inline = [
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
"sudo /opt/bootstrap/layout",
]
}

@@ -54,12 +54,12 @@ resource "null_resource" "copy-worker-secrets" {

provisioner "file" {
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
destination = "/home/core/kubeconfig"
}

provisioner "remote-exec" {
inline = [
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}
@@ -94,8 +94,8 @@ variable "enable_reporting" {

variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
default = false
description = "Enable the Kubernetes Aggregation Layer"
default = true
}

# unofficial, undocumented, unsupported
@@ -3,8 +3,8 @@
terraform {
required_version = ">= 0.13.0, < 2.0.0"
required_providers {
template = "~> 2.1"
null = "~> 2.1"
template = "~> 2.2"
null = ">= 2.1"

ct = {
source = "poseidon/ct"

@@ -13,7 +13,7 @@ terraform {

digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 1.20"
version = ">= 2.12, < 3.0"
}
}
}
@@ -35,7 +35,6 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image
@@ -1,66 +1,19 @@
# ARM64

!!! warning
    ARM64 support is experimental

Typhoon has experimental support for ARM64 with Fedora CoreOS on AWS. Full clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.
Typhoon has experimental support for ARM64 on AWS, with Fedora CoreOS or Flatcar Linux. Clusters can be created with ARM64 controller and worker nodes. Or worker pools of ARM64 nodes can be attached to an AMD64 cluster to create a hybrid/mixed architecture cluster.

!!! note
    Currently, CNI networking must be set to flannel or Cilium.

## AMIs

In lieu of official Fedora CoreOS ARM64 AMIs, Poseidon publishes experimental ARM64 AMIs to a few regions (us-east-1, us-east-2, us-west-1). These AMIs may be **removed** at any time and will be replaced when Fedora CoreOS publishes equivalents.

!!! note
    AMIs are only published to a few regions, and AWS availability of ARM instance types varies.
    Currently, CNI networking must be set to `flannel` or `cilium`.

## Cluster

Create a cluster with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` container images.

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.21.3"

# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"

# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

# optional
arch = "arm64"
networking = "cilium"
worker_count = 2
worker_price = "0.0168"

controller_type = "t4g.small"
worker_type = "t4g.small"
}
```

Verify the cluster has only arm64 (`aarch64`) nodes.

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-12-178 Ready <none> 101s v1.21.3 10.0.12.178 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
ip-10-0-18-93 Ready <none> 102s v1.21.3 10.0.18.93 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
ip-10-0-90-10 Ready <none> 104s v1.21.3 10.0.90.10 <none> Fedora CoreOS 32.20201104.dev.0 5.8.17-200.fc32.aarch64 docker://19.3.11
```

## Hybrid

Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [worker pool](worker-pools.md#aws) with ARM64 workers. Optional taints are added to aid in scheduling.

=== "Cluster (amd64)"
=== "Fedora CoreOS Cluster (arm64)"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.2"

# AWS
cluster_name = "gravitas"
@@ -68,7 +21,71 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
dns_zone_id = "Z3PAABBCFAKEC0"

# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

# optional
arch = "arm64"
networking = "cilium"
worker_count = 2
worker_price = "0.0168"

controller_type = "t4g.small"
worker_type = "t4g.small"
}
```

=== "Flatcar Linux Cluster (arm64)"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.2"

# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"

# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

# optional
arch = "arm64"
networking = "cilium"
worker_count = 2
worker_price = "0.0168"

controller_type = "t4g.small"
worker_type = "t4g.small"
}
```

Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe nodes.

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.23.2 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.23.2 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.23.2 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
```

## Hybrid

Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [worker pool](worker-pools.md#aws) with ARM64 workers. Optional taints are added to aid in scheduling.

=== "FCOS Cluster"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.2"

# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"

# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

# optional
networking = "cilium"
@@ -79,11 +96,58 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
}
```

=== "Worker Pool (arm64)"
=== "Flatcar Cluster"

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.2"

# AWS
cluster_name = "gravitas"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"

# configuration
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

# optional
networking = "cilium"
worker_count = 2
worker_price = "0.021"

daemonset_tolerations = ["arch"] # important
}
```

=== "FCOS ARM64 Workers"

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.2"

# AWS
vpc_id = module.gravitas.vpc_id
subnet_ids = module.gravitas.subnet_ids
security_groups = module.gravitas.worker_security_groups

# configuration
name = "gravitas-arm64"
kubeconfig = module.gravitas.kubeconfig
ssh_authorized_key = var.ssh_authorized_key

# optional
arch = "arm64"
instance_type = "t4g.small"
spot_price = "0.0168"
node_taints = ["arch=arm64:NoSchedule"]
}
```

=== "Flatcar ARM64 Workers"

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.2"

# AWS
vpc_id = module.gravitas.vpc_id
@@ -107,10 +171,10 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.

```
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-81 Ready <none> 4m28s v1.21.3 10.0.1.81 <none> Fedora CoreOS 34.20210427.3.0 5.11.15-300.fc34.x86_64 docker://20.10.6
ip-10-0-17-86 Ready <none> 4m28s v1.21.3 10.0.17.86 <none> Fedora CoreOS 33.20210413.dev.0 5.10.19-200.fc33.aarch64 docker://19.3.13
ip-10-0-21-45 Ready <none> 4m28s v1.21.3 10.0.21.45 <none> Fedora CoreOS 34.20210427.3.0 5.11.15-300.fc34.x86_64 docker://20.10.6
ip-10-0-40-36 Ready <none> 4m22s v1.21.3 10.0.40.36 <none> Fedora CoreOS 34.20210427.3.0 5.11.15-300.fc34.x86_64 docker://20.10.6
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.23.2 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.23.2 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.23.2 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.23.2 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
```
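The ARM64 worker pool above taints its nodes via `node_taints = ["arch=arm64:NoSchedule"]`, and `daemonset_tolerations = ["arch"]` keeps cluster DaemonSets schedulable on those nodes. As a minimal sketch (an illustration, not part of the diff), a workload intended for the ARM64 pool could pair a nodeSelector on the well-known `kubernetes.io/arch` label with a toleration matching that taint; the Deployment name and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-arm64          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-arm64
  template:
    metadata:
      labels:
        app: hello-arm64
    spec:
      # Pin pods to arm64 nodes using the well-known architecture label.
      nodeSelector:
        kubernetes.io/arch: arm64
      # Tolerate the taint applied via node_taints = ["arch=arm64:NoSchedule"].
      tolerations:
        - key: arch
          operator: Equal
          value: arm64
          effect: NoSchedule
      containers:
        - name: hello
          image: docker.io/library/nginx:latest   # placeholder; must be arm64-compatible
```

Any image referenced this way must be built for arm64 (or be multi-arch), in line with the note that container workloads must use `arm64` container images.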
@@ -12,9 +12,9 @@ Clusters are kept to a minimal Kubernetes control plane by offering components l

## Hosts

Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Fedora CoreOS uses a [Fedora CoreOS Config](https://docs.fedoraproject.org/en-US/fedora-coreos/fcct-config/) (FCC) and Flatcar Linux uses a [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md) (CLC). These define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, and users.
Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Fedora CoreOS uses a [Butane Config](https://coreos.github.io/butane/specs/) and Flatcar Linux uses a [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md) (CLC). These define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, and users.

Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept Fedora CoreOS Configs or Container Linux Configs to validate and additively merge into instance declarations. This allows advanced host customization and experimentation.
Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept Butane or Container Linux Configs to validate and additively merge into instance declarations. This allows advanced host customization and experimentation.

!!! note
    Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only.
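For orientation, the snippets referred to here are small Butane (or Container Linux Config) documents that get merged additively into the generated instance declaration. A minimal, hypothetical Butane snippet, assuming the `fcos` variant and version `1.4.0` used in the examples below, might define an extra systemd unit; the unit name and command are placeholders:

```yaml
# example-snippet.yaml (hypothetical filename)
variant: fcos
version: 1.4.0
systemd:
  units:
    # A hypothetical one-shot unit, merged additively into the instance declaration.
    - name: hello-snippet.service
      enabled: true
      contents: |
        [Unit]
        Description=Example unit added via a Typhoon snippet
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "hello from a snippet"
        [Install]
        WantedBy=multi-user.target
```

For Flatcar Linux, the equivalent snippet would be a Container Linux Config without the `variant`/`version` keys, as the override examples later in this section note.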
@@ -30,14 +30,14 @@ Controller and worker instances form a minimal and secure Kubernetes cluster on
!!! note
    Fedora CoreOS snippets require `terraform-provider-ct` v0.5+

Define a Fedora CoreOS Config (FCC) ([docs](https://docs.fedoraproject.org/en-US/fedora-coreos/fcct-config/), [config](https://github.com/coreos/fcct/blob/master/docs/configuration-v1_0.md), [examples](https://github.com/coreos/fcct/blob/master/docs/examples.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.
Define a Butane Config ([docs](https://coreos.github.io/butane/specs/), [config](https://github.com/coreos/butane/blob/main/docs/config-fcos-v1_4.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.

For example, ensure an `/opt/hello` file is created with permissions 0644.

```yaml
# custom-files
variant: fcos
version: 1.2.0
version: 1.4.0
storage:
files:
- path: /opt/hello

@@ -185,7 +185,7 @@ To set an alternative etcd image or Kubelet image, use a snippet to set a system
```yaml
# kubelet-image-override.yaml
variant: fcos <- remove for Flatcar Linux
version: 1.2.0 <- remove for Flatcar Linux
version: 1.4.0 <- remove for Flatcar Linux
systemd:
units:
- name: kubelet.service

@@ -201,7 +201,7 @@ To set an alternative etcd image or Kubelet image, use a snippet to set a system
```yaml
# etcd-image-override.yaml
variant: fcos <- remove for Flatcar Linux
version: 1.2.0 <- remove for Flatcar Linux
version: 1.4.0 <- remove for Flatcar Linux
systemd:
units:
- name: etcd-member.service
@@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.2"

# Google Cloud
cluster_name = "yavin"

@@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.2"

# Google Cloud
cluster_name = "yavin"

@@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.2"

# Google Cloud
cluster_name = "yavin"

@@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.21.3"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.2"

# Google Cloud
cluster_name = "yavin"
Some files were not shown because too many files have changed in this diff.