Compare commits

...

32 Commits

Author SHA1 Message Date
bf06412dfd Update Prometheus and Grafana addons 2022-08-21 08:56:00 -07:00
505818b7d5 Update docs showing the terraform plan resources count
* Although I don't plan to keep these in sync, some users are
confused when the docs don't match the actual resource count
2022-08-21 08:52:35 -07:00
0d27811265 Update recommended Terraform provider versions 2022-08-18 09:08:55 -07:00
c13d060b38 Add docs for GCP MIG update and AWS instance refresh
* Document that worker instances are rolling replaced when
changes to their configuration are applied
2022-08-18 09:02:38 -07:00
e87d5aabc3 Adjust Google Cloud worker health checks to use kube-proxy healthz
* Change the worker managed instance group to health check nodes
via an HTTP probe of kube-proxy's /healthz endpoint on port 10256
* Advantages: kube-proxy is a lower-value target than Kubelet (in
case there were bugs in firewalls), it's more representative than
health checking Kubelet alone (Kubelet must run AND the kube-proxy
DaemonSet must be healthy), and it's already used by kube-proxy
liveness probes (better discoverability via kubectl or alerts on
crashlooping pods)
* Another motivator is that GKE clusters also use kube-proxy port
10256 checks to assess node health
2022-08-17 20:50:52 -07:00
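A minimal Terraform sketch of such a health check (the resource name, timeout, and healthy threshold here are assumptions; the 30s interval and 6-failure threshold match the changelog entry below):

```tf
# Hypothetical names; probes kube-proxy's /healthz on each worker node
resource "google_compute_health_check" "worker" {
  name = "example-worker-health"

  check_interval_sec  = 30 # probe every 30s
  timeout_sec         = 5  # assumed timeout
  healthy_threshold   = 1
  unhealthy_threshold = 6 # ~3 minutes of failures before replacement

  http_health_check {
    port         = 10256
    request_path = "/healthz"
  }
}
```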
760b4cd5ee Update Kubernetes from v1.24.3 to v1.24.4
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244
2022-08-17 20:09:30 -07:00
fcd8ff2b17 Update Cilium from v1.12.0 to v1.12.1
* https://github.com/cilium/cilium/releases/tag/v1.12.1
2022-08-17 08:53:56 -07:00
ef2d2af0c7 Bump mkdocs-material from 8.3.9 to 8.4.0
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.3.9 to 8.4.0.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.3.9...8.4.0)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-16 08:29:51 -07:00
8e2027ed2d Bump pygments from 2.12.0 to 2.13.0
Bumps [pygments](https://github.com/pygments/pygments) from 2.12.0 to 2.13.0.
- [Release notes](https://github.com/pygments/pygments/releases)
- [Changelog](https://github.com/pygments/pygments/blob/master/CHANGES)
- [Commits](https://github.com/pygments/pygments/compare/2.12.0...2.13.0)

---
updated-dependencies:
- dependency-name: pygments
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-08-16 08:26:45 -07:00
52427a4271 Refresh instances in autoscaling group when launch configuration changes
* Changes to worker launch configurations start an autoscaling group instance
refresh to replace instances
* Instance refresh creates surge instances, waits for a warm-up period, then
deletes old instances
* Changing worker_type, disk_*, worker_price, worker_target_groups, or Butane
worker_snippets on existing worker nodes will replace instances
* New AMIs or changing `os_stream` will be ignored, to allow Fedora CoreOS or
Flatcar Linux to keep themselves updated
* Previously, new launch configurations were made in the same way, but not
applied to instances unless manually replaced
2022-08-14 21:43:49 -07:00
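A sketch of the corresponding `instance_refresh` block, mirroring the module diff shown later in this compare (surrounding arguments elided):

```tf
resource "aws_autoscaling_group" "workers" {
  # count, subnets, and launch configuration arguments elided

  # Changes to the launch configuration trigger a rolling replacement
  instance_refresh {
    strategy = "Rolling"
    preferences {
      instance_warmup        = 120 # seconds before a new instance counts as healthy
      min_healthy_percentage = 90
    }
  }
}
```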
20b76d6e00 Roll instance template changes to worker managed instance groups
* When a worker managed instance group's (MIG) instance template
changes (including machine type, disk size, or Butane snippets
but excluding new AMIs), use Google Cloud's rolling update features
to ensure instances match declared state
* Ignore new AMIs since Fedora CoreOS and Flatcar Linux nodes
already auto-update and reboot themselves
* Rolling updates will create surge instances, wait for health
checks, then delete old instances (0 unavailable instances)
* Instances are replaced to ensure new Ignition/Butane snippets
are respected
* Add managed instance group autohealing (i.e. health checks) to
ensure new instances' Kubelet is running

Renames

* Name apiserver and kubelet health checks consistently
* Rename MIG from `${var.name}-worker-group` to `${var.name}-worker`

Rel: https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
2022-08-14 13:06:53 -07:00
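A Terraform sketch of the rolling update and autohealing settings described above (resource names, region, target size, and the 300s autohealing delay are assumptions, not the module's exact values):

```tf
resource "google_compute_region_instance_group_manager" "workers" {
  name               = "example-worker" # renamed from "example-worker-group"
  base_instance_name = "example-worker"
  region             = "us-central1"
  target_size        = 2

  version {
    instance_template = google_compute_instance_template.worker.self_link
  }

  # Roll out instance template changes: surge instances, 0 unavailable
  update_policy {
    type                  = "PROACTIVE"
    minimal_action        = "REPLACE"
    max_surge_fixed       = 3
    max_unavailable_fixed = 0
  }

  # Autohealing: recreate instances that fail health checks
  auto_healing_policies {
    health_check      = google_compute_health_check.worker.self_link
    initial_delay_sec = 300 # assumed warm-up before health checks apply
  }
}
```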
6facfca4ed Switch Kubernetes image registry from k8s.gcr.io to registry.k8s.io
* Announce: https://groups.google.com/g/kubernetes-sig-testing/c/U7b_im9vRrM

Rel: https://github.com/poseidon/terraform-render-bootstrap/pull/319
2022-08-13 16:16:21 -07:00
ed8c6a5aeb Upgrade CoreDNS from v1.8.5 to v1.9.3
Rel: https://github.com/poseidon/terraform-render-bootstrap/pull/318
2022-08-13 15:43:03 -07:00
003af72cc8 Rename google-cloud/fedora-coreos/kubernetes/workers fcc to butane
* Should have been part of https://github.com/poseidon/typhoon/pull/1203
2022-08-13 15:40:16 -07:00
b321b90a4f Update Grafana from v9.0.6 to v9.0.7 2022-08-13 15:39:44 -07:00
e5d0e2d48b Rename Fedora CoreOS fcc directory to butane
* Align Fedora CoreOS and Flatcar Linux on keeping Butane
Configs in a directory called butane
2022-08-10 09:10:18 -07:00
679f8b878f Update Grafana from v9.0.5 to v9.0.6 2022-08-10 08:23:04 -07:00
87a8278c9d Improve AWS autoscaling group and launch config names
* Rename the launch configuration to use a name_prefix based on the
cluster and worker name to improve identifiability
* Shorten the AWS autoscaling group name to exclude the launch config
id. Years ago, this was needed to update the ASG, but the AWS
provider now detects changes to the launch configuration just fine
2022-08-08 20:46:08 -07:00
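In Terraform terms (a sketch; `var.name` is the worker pool name in the workers module):

```tf
resource "aws_autoscaling_group" "workers" {
  # shortened: no longer embeds the launch configuration id
  name = "${var.name}-worker"
  # ...
}

resource "aws_launch_configuration" "worker" {
  # identifiable prefix; name_prefix appends a unique suffix, so
  # create_before_destroy replacements don't collide
  name_prefix = "${var.name}-worker"
  # ...
}
```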
93b7f2554e Remove ineffective iptables-legacy.stamp
* Typhoon Fedora CoreOS already uses iptables nf_tables since
Fedora CoreOS 36. The file that pins legacy iptables was renamed to
/etc/coreos/iptables-legacy.stamp, so the old path has no effect
2022-08-08 20:27:21 -07:00
62d47ad3f0 Update Cilium from v1.11.7 to v1.12.0
* https://github.com/cilium/cilium/releases/tag/v1.12.0
2022-08-08 19:59:03 -07:00
6eb7861f96 Update Grafana liveness and readiness probes
* Use the liveness and readiness probes that Grafana recommends
* Update Grafana from v9.0.3 to v9.0.5
2022-08-08 09:22:44 -07:00
ffbacbccf7 Update node-exporter DaemonSet to fix permission denied
* Add toleration to run node-exporter on controller nodes
* Add HostToContainer mount propagation and security context group
settings from upstream
* Fix SELinux denials accessing /host/proc/1/mounts. The mounts file
has an SELinux type attribute init_t, which won't allow access by
the node-exporter binary, so we have to use spc_t. This should be more
targeted at just the SELinux issue than making the Pod privileged
* Remove excluded mount points and filesystem types; the defaults are
https://github.com/prometheus/node_exporter/blob/v1.3.1/collector/filesystem_linux.go#L35

```
caller=collector.go:169 level=error msg="collector failed" name=filesystem duration_seconds=0.000666766 err="open /host/proc/1/mounts: permission denied"
```

```
[ 3664.880899] audit: type=1400 audit(1659639161.568:4400): avc:  denied  { search } for  pid=28325 comm="node_exporter" name="1" dev="proc" ino=22542 scontext=system_u:system_r:container_t:s0 tcontext=system_u:system_r:init_t:s0 tclass=dir permissive=0
```
2022-08-08 09:19:46 -07:00
16c2785878 Update docs on using Butane snippets for customization
* Typhoon now consistently uses Butane Configs for snippets
(variant `fcos` or `flatcar`). Previously, snippets were either
Butane Configs (on FCOS) or Container Linux Configs (on Flatcar)
* Update docs on uploading Flatcar Linux DigitalOcean images
* Update docs on uploading Fedora CoreOS Azure images
2022-08-03 20:28:53 -07:00
4a469513dd Migrate Flatcar Linux from Ignition spec v2.3.0 to v3.3.0
* Requires poseidon/ct v0.11+ and Flatcar Linux 3185.0.0+ (action required)
* Previously, Flatcar Linux configs were parsed as Container
Linux Configs into Ignition v2.2.0 specs by poseidon/ct
* Flatcar Linux starting in 3185.0.0 now supports Ignition v3.x specs
(which are rendered from Butane Configs, like Fedora CoreOS)
* poseidon/ct v0.11.0 adds support for the flatcar Butane Config
variant so that Flatcar Linux can use Ignition v3.x

Rel:

* [Flatcar Support](https://flatcar-linux.org/docs/latest/provisioning/ignition/specification/#ignition-v3)
* [poseidon/ct support](https://github.com/poseidon/terraform-provider-ct/pull/131)
2022-08-03 08:32:52 -07:00
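The provider requirement for this migration, as pinned in the Flatcar Linux modules' versions.tf later in this compare:

```tf
terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.11" # adds the flatcar Butane Config variant
    }
  }
}
```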
47d8431fe0 Fix bug provisioning multi-controller clusters on Google Cloud
* Google Cloud Terraform provider resource google_dns_record_set's
name field provides the full domain name with a trailing ".". This
isn't new behavior; Google has behaved this way for as long as I can
remember
* etcd domain names are passed to the bootstrap module to generate
TLS certificates. What seems to be new(ish?) is that etcd peers
see example.foo and example.foo. as different domains during TLS
SANs validation. As a result, clusters with multiple controller
nodes fail to run etcd-member, which manifests as cluster provisioning
hanging. Single controller/master clusters (default) are unaffected
* Fix etcd-member.service error in multi-controller clusters:

```
"error":"x509: certificate is valid for conformance-etcd0.redacted.,
conform-etcd1.redacted., conform-etcd2.redacted., not conform-etcd1.redacted"}
```
2022-08-02 20:21:02 -07:00
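One way to normalize the records (a hypothetical sketch, not necessarily the exact fix in this commit): strip the trailing dot with Terraform's trimsuffix before passing etcd domains to the bootstrap module.

```tf
# Hypothetical: assumes a counted google_dns_record_set.etcds resource whose
# name is reported as "cluster-etcd0.example.com." with a trailing dot
locals {
  etcd_domains = [
    for record in google_dns_record_set.etcds : trimsuffix(record.name, ".")
  ]
}
```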
256b87812e Remove Terraform template provider dependency
* Use Terraform builtin templatefile functionality
* Remove dependency on deprecated Terraform template provider

Rel:

* https://registry.terraform.io/providers/hashicorp/template/2.2.0
* https://github.com/poseidon/terraform-render-bootstrap/pull/293
2022-08-02 18:15:03 -07:00
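A before/after sketch of the pattern (variable names are illustrative; the module diffs later in this compare show the real call sites):

```tf
# Before: the deprecated hashicorp/template provider
# data "template_file" "worker" {
#   template = file("${path.module}/butane/worker.yaml")
#   vars     = { ssh_authorized_key = var.ssh_authorized_key }
# }

# After: Terraform's builtin templatefile function, no extra provider
data "ct_config" "worker" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    ssh_authorized_key = var.ssh_authorized_key
  })
  strict = true
}
```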
ca6eef365f Add badges to README 2022-07-31 18:03:09 -07:00
c6794f1007 Update Calico from v3.23.1 to v3.23.3
* https://github.com/projectcalico/calico/releases/tag/v3.23.3
2022-07-30 18:15:33 -07:00
de6f27e119 Update FCOS iPXE initrd and kernel arg settings
* Add initrd=main kernel argument for UEFI
* Switch to using the coreos.live.rootfs_url kernel argument
instead of passing the rootfs as an appended initrd
* Remove coreos.inst.image_url kernel argument since coreos-installer
now defaults to installing from the embedded live system
* Remove rd.neednet=1 and ip=dhcp kernel args that aren't needed
* Remove serial console kernel args by default (these can be
added via var.kernel_args if needed)

Rel:
* https://github.com/poseidon/matchbox/pull/972 (thank you @bgilbert)
* https://github.com/poseidon/matchbox/pull/978
2022-07-30 16:27:08 -07:00
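A matchbox profile sketch reflecting these argument changes (URLs are placeholders and the exact profile layout is an assumption; see the matchbox PRs above for authoritative examples):

```tf
variable "kernel_args" {
  type    = list(string)
  default = [] # e.g. ["console=ttyS0,115200n8"] to restore a serial console
}

resource "matchbox_profile" "worker" {
  name   = "worker"
  kernel = "https://example.com/fedora-coreos/fedora-coreos-live-kernel-x86_64"
  initrd = [
    # named initrd so the UEFI initrd=main argument can reference it
    "--name main https://example.com/fedora-coreos/fedora-coreos-live-initramfs.x86_64.img",
  ]
  args = concat([
    "initrd=main",
    "coreos.live.rootfs_url=https://example.com/fedora-coreos/fedora-coreos-live-rootfs.x86_64.img",
    "coreos.inst.install_dev=/dev/sda",
    "coreos.inst.ignition_url=https://example.com/ignition?uuid=$${uuid}",
    # no coreos.inst.image_url, rd.neednet=1, ip=dhcp, or serial console args
  ], var.kernel_args)
}
```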
6a9c32d3a9 Migrate from internal hosting to GitHub pages
* Add Twitter card customizations that have been kept in
an internal fork
* Add CNAME needed for GitHub pages
2022-07-27 21:56:42 -07:00
a7e9e423f5 Bump mkdocs from 1.3.0 to 1.3.1
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.3.0...1.3.1)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-21 09:07:21 -07:00
83236eab57 Add table of details about static Pods
* Also remove outdated mentions of rkt-fly
2022-07-21 09:03:27 -07:00
109 changed files with 912 additions and 938 deletions

View File

@ -4,6 +4,69 @@ Notable changes between versions.
## Latest ## Latest
## v1.24.4
* Kubernetes [v1.24.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1244)
* Update CoreDNS from v1.8.6 to [v1.9.3](https://github.com/poseidon/terraform-render-bootstrap/pull/318)
* Update Cilium from v1.11.7 to [v1.12.1](https://github.com/cilium/cilium/releases/tag/v1.12.1)
* Update Calico from v3.23.1 to [v3.23.3](https://github.com/projectcalico/calico/releases/tag/v3.23.3)
* Switch Kubernetes registry from `k8s.gcr.io` to `registry.k8s.io` ([#1206](https://github.com/poseidon/typhoon/pull/1206))
* Remove use of deprecated Terraform [template](https://registry.terraform.io/providers/hashicorp/template) provider ([#1194](https://github.com/poseidon/typhoon/pull/1194))
### Fedora CoreOS
* Remove ineffective `/etc/fedora-coreos/iptables-legacy.stamp` ([#1201](https://github.com/poseidon/typhoon/pull/1201))
* Typhoon already uses iptables v1.8.7 (nf_tables) since FCOS 36
* Staying on legacy iptables required a file in `/etc/coreos` instead
### Flatcar Linux
* Migrate Flatcar Linux from Ignition spec v2.3.0 to v3.3.0 ([#1196](https://github.com/poseidon/typhoon/pull/1196)) (**action required**)
* Flatcar Linux 3185.0.0+ [supports](https://flatcar-linux.org/docs/latest/provisioning/ignition/specification/#ignition-v3) Ignition v3.x specs (which are rendered from Butane Configs, like Fedora CoreOS)
* `poseidon/ct` v0.11.0 [supports](https://github.com/poseidon/terraform-provider-ct/pull/131) the `flatcar` Butane Config variant
* Require poseidon/ct v0.11+ and Flatcar Linux 3185.0.0+
* Please modify any Flatcar Linux snippets to use the [Butane Config](https://coreos.github.io/butane/config-flatcar-v1_0/) format (**action required**)
```yaml
variant: flatcar
version: 1.0.0
...
```
### AWS
* [Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) instances in autoscaling group when launch configuration changes ([#1208](https://github.com/poseidon/typhoon/pull/1208)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**)
* Worker launch configuration changes start an autoscaling group instance refresh to replace instances
* Instance refresh creates surge instances, waits for a warm-up period, then deletes old instances
* Changing `worker_type`, `disk_*`, `worker_price`, `worker_target_groups`, or Butane `worker_snippets` on existing worker nodes will replace instances
* New AMIs or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
* Previously, new launch configurations were made in the same way, but not applied to instances unless manually replaced
* Rename the worker autoscaling group to `${cluster_name}-worker` ([#1202](https://github.com/poseidon/typhoon/pull/1202))
* Rename the launch configuration to use the prefix `${cluster_name}-worker` instead of a random id
### Google
* [Roll](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) instance template changes to worker managed instance groups ([#1207](https://github.com/poseidon/typhoon/pull/1207)) ([docs](https://typhoon.psdn.io/topics/maintenance/#node-configuration-updates), **important**)
* Worker instance template changes roll out by gradually replacing instances
* Automatic rollouts create surge instances, wait for health checks, then delete old instances (0 unavailable instances)
* Changing `worker_type`, `disk_size`, `worker_preemptible`, or Butane `worker_snippets` on existing worker nodes will replace instances
* New compute images or changing `os_stream` will be ignored, to allow Fedora CoreOS or Flatcar Linux to keep themselves updated
* Previously, new instance templates were made in the same way, but not applied to instances unless manually replaced
* Add health checks to worker managed instance groups (i.e. "autohealing") ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Use health checks to probe kube-proxy every 30s
* Replace worker nodes that fail the health check 6 times (3min)
* Name `kube-apiserver` and `worker` health checks consistently ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Use name `${cluster_name}-apiserver-health` and `${cluster_name}-worker-health`
* Rename managed instance group from `${cluster_name}-worker-group` to `${cluster_name}-worker` ([#1207](https://github.com/poseidon/typhoon/pull/1207))
* Fix bug provisioning clusters with multiple controller nodes ([#1195](https://github.com/poseidon/typhoon/pull/1195))
### Addons
* Update Prometheus from v2.37.0 to [v2.38.0](https://github.com/prometheus/prometheus/releases/tag/v2.38.0)
* Update Grafana from v9.0.3 to [v9.1.0](https://github.com/grafana/grafana/releases/tag/v9.1.0)
## v1.24.3
* Kubernetes [v1.24.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243) * Kubernetes [v1.24.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#v1243)
* Update Cilium from v1.11.6 to [v1.11.7](https://github.com/cilium/cilium/releases/tag/v1.11.7) * Update Cilium from v1.11.6 to [v1.11.7](https://github.com/cilium/cilium/releases/tag/v1.11.7)

View File

@ -1,4 +1,6 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png"> # Typhoon [![Release](https://img.shields.io/github/v/release/poseidon/typhoon)](https://github.com/poseidon/typhoon/releases) [![Stars](https://img.shields.io/github/stars/poseidon/typhoon)](https://github.com/poseidon/typhoon/stargazers) [![Sponsors](https://img.shields.io/github/sponsors/poseidon?logo=github)](https://github.com/sponsors/poseidon) [![Twitter](https://img.shields.io/badge/follow-news-1da1f2?logo=twitter)](https://twitter.com/typhoon8s)
<img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
Typhoon is a minimal and free Kubernetes distribution. Typhoon is a minimal and free Kubernetes distribution.
@ -11,7 +13,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.3 (upstream) * Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@ -62,7 +64,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -101,9 +103,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.4
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.4
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -24,7 +24,7 @@ spec:
type: RuntimeDefault type: RuntimeDefault
containers: containers:
- name: grafana - name: grafana
image: docker.io/grafana/grafana:9.0.3 image: docker.io/grafana/grafana:9.1.0
env: env:
- name: GF_PATHS_CONFIG - name: GF_PATHS_CONFIG
value: "/etc/grafana/custom.ini" value: "/etc/grafana/custom.ini"
@ -32,15 +32,22 @@ spec:
- name: http - name: http
containerPort: 8080 containerPort: 8080
livenessProbe: livenessProbe:
httpGet: tcpSocket:
path: /metrics
port: 8080 port: 8080
initialDelaySeconds: 10 initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 5
successThreshold: 1
readinessProbe: readinessProbe:
httpGet: httpGet:
path: /api/health scheme: HTTP
path: /robots.txt
port: 8080 port: 8080
initialDelaySeconds: 10 initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
resources: resources:
requests: requests:
cpu: 100m cpu: 100m

View File

@ -21,7 +21,7 @@ spec:
serviceAccountName: prometheus serviceAccountName: prometheus
containers: containers:
- name: prometheus - name: prometheus
image: quay.io/prometheus/prometheus:v2.37.0 image: quay.io/prometheus/prometheus:v2.38.0
args: args:
- --web.listen-address=0.0.0.0:9090 - --web.listen-address=0.0.0.0:9090
- --config.file=/etc/prometheus/prometheus.yaml - --config.file=/etc/prometheus/prometheus.yaml

View File

@ -22,6 +22,8 @@ spec:
securityContext: securityContext:
runAsNonRoot: true runAsNonRoot: true
runAsUser: 65534 runAsUser: 65534
runAsGroup: 65534
fsGroup: 65534
seccompProfile: seccompProfile:
type: RuntimeDefault type: RuntimeDefault
hostNetwork: true hostNetwork: true
@ -33,8 +35,6 @@ spec:
- --path.procfs=/host/proc - --path.procfs=/host/proc
- --path.sysfs=/host/sys - --path.sysfs=/host/sys
- --path.rootfs=/host/root - --path.rootfs=/host/root
- --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
ports: ports:
- name: metrics - name: metrics
containerPort: 9100 containerPort: 9100
@ -46,6 +46,9 @@ spec:
limits: limits:
cpu: 200m cpu: 200m
memory: 100Mi memory: 100Mi
securityContext:
seLinuxOptions:
type: spc_t
volumeMounts: volumeMounts:
- name: proc - name: proc
mountPath: /host/proc mountPath: /host/proc
@ -55,9 +58,12 @@ spec:
readOnly: true readOnly: true
- name: root - name: root
mountPath: /host/root mountPath: /host/root
mountPropagation: HostToContainer
readOnly: true readOnly: true
tolerations: tolerations:
- key: node-role.kubernetes.io/master - key: node-role.kubernetes.io/controller
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists operator: Exists
- key: node.kubernetes.io/not-ready - key: node.kubernetes.io/not-ready
operator: Exists operator: Exists

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.3 (upstream) * Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -56,7 +56,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -129,7 +129,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.3 quay.io/poseidon/kubelet:v1.24.4
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -224,7 +224,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml - path: /etc/containerd/config.toml
overwrite: true overwrite: true
contents: contents:

View File

@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {
instance_type = var.controller_type instance_type = var.controller_type
ami = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id ami = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
user_data = data.ct_config.controller-ignitions.*.rendered[count.index] user_data = data.ct_config.controllers.*.rendered[count.index]
# storage # storage
root_block_device { root_block_device {
@ -46,41 +46,22 @@ resource "aws_instance" "controllers" {
} }
} }
# Controller Ignition configs # Fedora CoreOS controllers
data "ct_config" "controller-ignitions" { data "ct_config" "controllers" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}
# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
count = var.controller_count count = var.controller_count
content = templatefile("${path.module}/butane/controller.yaml", {
template = file("${path.module}/fcc/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records # Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}" etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}" etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,... # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered) etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet) kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix cluster_domain_suffix = var.cluster_domain_suffix
} })
strict = true
snippets = var.controller_snippets
} }
data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

View File

@ -3,10 +3,8 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
aws = ">= 2.23, <= 5.0" aws = ">= 2.23, <= 5.0"
template = "~> 2.2" null = ">= 2.1"
null = ">= 2.1"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.9"

View File

@ -1,3 +1,7 @@
locals {
ami_id = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
}
data "aws_ami" "fedora-coreos" { data "aws_ami" "fedora-coreos" {
most_recent = true most_recent = true
owners = ["125523088429"] owners = ["125523088429"]

View File

@ -29,7 +29,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -97,7 +97,7 @@ systemd:
[Unit] [Unit]
Description=Delete Kubernetes node on shutdown Description=Delete Kubernetes node on shutdown
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
@ -136,7 +136,6 @@ storage:
DefaultCPUAccounting=yes DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml - path: /etc/containerd/config.toml
overwrite: true overwrite: true
contents: contents:

View File

@ -3,9 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
aws = ">= 2.23, <= 5.0" aws = ">= 2.23, <= 5.0"
template = "~> 2.2"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.9"

View File

@ -1,6 +1,6 @@
# Workers AutoScaling Group # Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" { resource "aws_autoscaling_group" "workers" {
name = "${var.name}-worker ${aws_launch_configuration.worker.name}" name = "${var.name}-worker"
# count # count
desired_capacity = var.worker_count desired_capacity = var.worker_count
@ -22,6 +22,14 @@ resource "aws_autoscaling_group" "workers" {
var.target_groups, var.target_groups,
]) ])
instance_refresh {
strategy = "Rolling"
preferences {
instance_warmup = 120
min_healthy_percentage = 90
}
}
lifecycle { lifecycle {
# override the default destroy and replace update behavior # override the default destroy and replace update behavior
create_before_destroy = true create_before_destroy = true
@ -42,12 +50,13 @@ resource "aws_autoscaling_group" "workers" {
# Worker template # Worker template
resource "aws_launch_configuration" "worker" { resource "aws_launch_configuration" "worker" {
image_id = var.arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id name_prefix = "${var.name}-worker"
image_id = local.ami_id
instance_type = var.instance_type instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false enable_monitoring = false
user_data = data.ct_config.worker-ignition.rendered user_data = data.ct_config.worker.rendered
# storage # storage
root_block_device { root_block_device {
@ -67,24 +76,16 @@ resource "aws_launch_configuration" "worker" {
} }
} }
# Worker Ignition config # Fedora CoreOS worker
data "ct_config" "worker-ignition" { data "ct_config" "worker" {
content = data.template_file.worker-config.rendered content = templatefile("${path.module}/butane/worker.yaml", {
strict = true
snippets = var.snippets
}
# Worker Fedora CoreOS config
data "template_file" "worker-config" {
template = file("${path.module}/fcc/worker.yaml")
vars = {
kubeconfig = indent(10, var.kubeconfig) kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
} })
strict = true
snippets = var.snippets
} }

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.3 (upstream) * Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -1,4 +1,5 @@
--- variant: flatcar
version: 1.0.0
systemd: systemd:
units: units:
- name: etcd-member.service - name: etcd-member.service
@ -57,7 +58,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -121,7 +122,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
WorkingDirectory=/opt/bootstrap WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStart=/usr/bin/docker run \ ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \ -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \
@ -134,18 +135,15 @@ systemd:
storage: storage:
directories: directories:
- path: /var/lib/etcd - path: /var/lib/etcd
filesystem: root
mode: 0700 mode: 0700
overwrite: true overwrite: true
files: files:
- path: /etc/kubernetes/kubeconfig - path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644 mode: 0644
contents: contents:
inline: | inline: |
${kubeconfig} ${kubeconfig}
- path: /opt/bootstrap/layout - path: /opt/bootstrap/layout
filesystem: root
mode: 0544 mode: 0544
contents: contents:
inline: | inline: |
@ -168,7 +166,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
filesystem: root
mode: 0544 mode: 0544
contents: contents:
inline: | inline: |
@ -183,13 +180,11 @@ storage:
sleep 5 sleep 5
done done
- path: /etc/sysctl.d/max-user-watches.conf - path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644 mode: 0644
contents: contents:
inline: | inline: |
fs.inotify.max_user_watches=16184 fs.inotify.max_user_watches=16184
- path: /etc/etcd/etcd.env - path: /etc/etcd/etcd.env
filesystem: root
mode: 0644 mode: 0644
contents: contents:
inline: | inline: |

View File

@ -24,7 +24,7 @@ resource "aws_instance" "controllers" {
instance_type = var.controller_type instance_type = var.controller_type
ami = local.ami_id ami = local.ami_id
user_data = data.ct_config.controller-ignitions.*.rendered[count.index] user_data = data.ct_config.controllers.*.rendered[count.index]
# storage # storage
root_block_device { root_block_device {
@ -47,41 +47,22 @@ resource "aws_instance" "controllers" {
} }
} }
# Controller Ignition configs # Flatcar Linux controllers
data "ct_config" "controller-ignitions" { data "ct_config" "controllers" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}
# Controller Container Linux configs
data "template_file" "controller-configs" {
count = var.controller_count count = var.controller_count
content = templatefile("${path.module}/butane/controller.yaml", {
template = file("${path.module}/cl/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records # Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}" etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}" etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,... # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered) etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet) kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix cluster_domain_suffix = var.cluster_domain_suffix
} })
strict = true
snippets = var.controller_snippets
} }
data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

View File

@ -3,13 +3,11 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
aws = ">= 2.23, <= 5.0" aws = ">= 2.23, <= 5.0"
template = "~> 2.2" null = ">= 2.1"
null = ">= 2.1"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.11"
} }
} }
} }

View File

@ -1,4 +1,5 @@
--- variant: flatcar
version: 1.0.0
systemd: systemd:
units: units:
- name: docker.service - name: docker.service
@ -29,7 +30,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -96,7 +97,7 @@ systemd:
[Unit] [Unit]
Description=Delete Kubernetes node on shutdown Description=Delete Kubernetes node on shutdown
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
@ -106,13 +107,11 @@ systemd:
storage: storage:
files: files:
- path: /etc/kubernetes/kubeconfig - path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644 mode: 0644
contents: contents:
inline: | inline: |
${kubeconfig} ${kubeconfig}
- path: /etc/sysctl.d/max-user-watches.conf - path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644 mode: 0644
contents: contents:
inline: | inline: |

View File

@ -3,12 +3,10 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
aws = ">= 2.23, <= 5.0" aws = ">= 2.23, <= 5.0"
template = "~> 2.2"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.11"
} }
} }
} }

View File

@ -1,6 +1,6 @@
# Workers AutoScaling Group # Workers AutoScaling Group
resource "aws_autoscaling_group" "workers" { resource "aws_autoscaling_group" "workers" {
name = "${var.name}-worker ${aws_launch_configuration.worker.name}" name = "${var.name}-worker"
# count # count
desired_capacity = var.worker_count desired_capacity = var.worker_count
@ -22,6 +22,14 @@ resource "aws_autoscaling_group" "workers" {
var.target_groups, var.target_groups,
]) ])
instance_refresh {
strategy = "Rolling"
preferences {
instance_warmup = 120
min_healthy_percentage = 90
}
}
lifecycle { lifecycle {
# override the default destroy and replace update behavior # override the default destroy and replace update behavior
create_before_destroy = true create_before_destroy = true
@ -42,12 +50,13 @@ resource "aws_autoscaling_group" "workers" {
# Worker template # Worker template
resource "aws_launch_configuration" "worker" { resource "aws_launch_configuration" "worker" {
name_prefix = "${var.name}-worker"
image_id = local.ami_id image_id = local.ami_id
instance_type = var.instance_type instance_type = var.instance_type
spot_price = var.spot_price > 0 ? var.spot_price : null spot_price = var.spot_price > 0 ? var.spot_price : null
enable_monitoring = false enable_monitoring = false
user_data = data.ct_config.worker-ignition.rendered user_data = data.ct_config.worker.rendered
# storage # storage
root_block_device { root_block_device {
@ -67,24 +76,16 @@ resource "aws_launch_configuration" "worker" {
} }
} }
# Worker Ignition config # Flatcar Linux worker
data "ct_config" "worker-ignition" { data "ct_config" "worker" {
content = data.template_file.worker-config.rendered content = templatefile("${path.module}/butane/worker.yaml", {
strict = true
snippets = var.snippets
}
# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml")
vars = {
kubeconfig = indent(10, var.kubeconfig) kubeconfig = indent(10, var.kubeconfig)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix cluster_domain_suffix = var.cluster_domain_suffix
node_labels = join(",", var.node_labels) node_labels = join(",", var.node_labels)
node_taints = join(",", var.node_taints) node_taints = join(",", var.node_taints)
} })
strict = true
snippets = var.snippets
} }

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.3 (upstream) * Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -67,8 +67,8 @@ systemd:
--network host \ --network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \ --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \ --volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/machine-id:/etc/machine-id:ro \ --volume /etc/machine-id:/etc/machine-id:ro \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /lib/modules:/lib/modules:ro \ --volume /lib/modules:/lib/modules:ro \
--volume /run:/run \ --volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \ --volume /sys/fs/cgroup:/sys/fs/cgroup \
@ -124,7 +124,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.3 quay.io/poseidon/kubelet:v1.24.4
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -219,7 +219,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml - path: /etc/containerd/config.toml
overwrite: true overwrite: true
contents: contents:
@ -244,3 +243,4 @@ passwd:
- name: core - name: core
ssh_authorized_keys: ssh_authorized_keys:
- ${ssh_authorized_key} - ${ssh_authorized_key}

View File

@ -35,7 +35,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
availability_set_id = azurerm_availability_set.controllers.id availability_set_id = azurerm_availability_set.controllers.id
size = var.controller_type size = var.controller_type
custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index]) custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])
# storage # storage
source_image_id = var.os_image source_image_id = var.os_image
@ -111,41 +111,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
} }
# Controller Ignition configs # Fedora CoreOS controllers
data "ct_config" "controller-ignitions" { data "ct_config" "controllers" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}
# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
count = var.controller_count count = var.controller_count
content = templatefile("${path.module}/butane/controller.yaml", {
template = file("${path.module}/fcc/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records # Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}" etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}" etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,... # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered) etcd_initial_cluster = join(",", [
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
])
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet) kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10) cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix cluster_domain_suffix = var.cluster_domain_suffix
} })
strict = true
snippets = var.controller_snippets
} }
data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

View File

@ -3,10 +3,8 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8, < 4.0"
template = "~> 2.2" null = ">= 2.1"
null = ">= 2.1"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.9"

View File

@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -92,7 +92,7 @@ systemd:
[Unit] [Unit]
Description=Delete Kubernetes node on shutdown Description=Delete Kubernetes node on shutdown
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
@ -131,7 +131,6 @@ storage:
DefaultCPUAccounting=yes DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes DefaultBlockIOAccounting=yes
- path: /etc/fedora-coreos/iptables-legacy.stamp
- path: /etc/containerd/config.toml - path: /etc/containerd/config.toml
overwrite: true overwrite: true
contents: contents:

View File

@ -3,9 +3,7 @@
terraform { terraform {
required_version = ">= 0.13.0, < 2.0.0" required_version = ">= 0.13.0, < 2.0.0"
required_providers { required_providers {
azurerm = ">= 2.8, < 4.0" azurerm = ">= 2.8, < 4.0"
template = "~> 2.2"
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "~> 0.9" version = "~> 0.9"

View File

@@ -9,7 +9,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   # instance name prefix for instances in the set
   computer_name_prefix   = "${var.name}-worker"
   single_placement_group = false
-  custom_data            = base64encode(data.ct_config.worker-ignition.rendered)
+  custom_data            = base64encode(data.ct_config.worker.rendered)

   # storage
   source_image_id = var.os_image
@@ -70,24 +70,17 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
   }
 }

-# Worker Ignition configs
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
-  strict   = true
-  snippets = var.snippets
-}
-
-# Worker Fedora CoreOS configs
-data "template_file" "worker-config" {
-  template = file("${path.module}/fcc/worker.yaml")
-  vars = {
+# Fedora CoreOS worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     kubeconfig             = indent(10, var.kubeconfig)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
     node_labels            = join(",", var.node_labels)
     node_taints            = join(",", var.node_taints)
-  }
+  })
+  strict   = true
+  snippets = var.snippets
 }
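This is the pattern repeated throughout the change: the two-step chain (a `template_file` data source feeding a `ct_config`) collapses into a single `ct_config` whose `content` is rendered inline with `templatefile()`. A minimal sketch with a hypothetical template path and variables:

```tf
# requires the poseidon/ct provider; worker.yaml stands in for a Butane
# template containing a ${ssh_authorized_key} placeholder
data "ct_config" "example" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    ssh_authorized_key = "ssh-ed25519 AAAAexamplekey user@host"
  })
  strict = true
}

output "ignition" {
  # transpiled Ignition JSON, ready to pass as instance user-data
  value = data.ct_config.example.rendered
}
```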

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.24.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service
@@ -55,7 +56,7 @@ systemd:
       After=docker.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -94,9 +95,9 @@ systemd:
       --kubeconfig=/var/lib/kubelet/kubeconfig \
       --node-labels=node.kubernetes.io/controller="true" \
       --pod-manifest-path=/etc/kubernetes/manifests \
-      --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
       --read-only-port=0 \
       --resolv-conf=/run/systemd/resolve/resolv.conf \
+      --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
       --rotate-certificates \
       --volume-plugin-dir=/var/lib/kubelet/volumeplugins
     ExecStart=docker logs -f kubelet
@@ -117,7 +118,7 @@ systemd:
       Type=oneshot
       RemainAfterExit=true
       WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStart=/usr/bin/docker run \
        -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
        -v /opt/bootstrap/assets:/assets:ro \
@@ -130,18 +131,15 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
   files:
     - path: /etc/kubernetes/kubeconfig
-      filesystem: root
       mode: 0644
       contents:
         inline: |
          ${kubeconfig}
     - path: /opt/bootstrap/layout
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -164,7 +162,6 @@ storage:
          mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -179,13 +176,11 @@ storage:
          sleep 5
          done
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |
          fs.inotify.max_user_watches=16184
     - path: /etc/etcd/etcd.env
-      filesystem: root
       mode: 0644
       contents:
         inline: |
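The Flatcar templates also switch formats here: Container Linux Configs began with a bare `---` document marker and required `filesystem: root` on each storage entry, while Butane declares an explicit `variant: flatcar` / `version: 1.0.0` header and targets the root filesystem by default, so those fields simply disappear. A minimal sketch of a Flatcar Butane config embedded in Terraform (hypothetical file path and contents); transpiling `variant: flatcar` needs a recent poseidon/ct provider, which is why several modules below raise the minimum to `~> 0.11`:

```tf
data "ct_config" "flatcar_example" {
  # the variant/version header replaces the old CLC "---" prologue;
  # no "filesystem: root" is needed on the file entry
  content = <<-EOT
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/motd
          mode: 0644
          contents:
            inline: |
              Hello from Butane
  EOT
  strict = true
}
```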

View File

@@ -41,7 +41,7 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   availability_set_id = azurerm_availability_set.controllers.id
   size        = var.controller_type
-  custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])
+  custom_data = base64encode(data.ct_config.controllers.*.rendered[count.index])

   # storage
   os_disk {
@@ -130,41 +130,22 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
   backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
 }

-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Container Linux configs
-data "template_file" "controller-configs" {
+# Flatcar Linux controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/cl/controller.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}

View File

@@ -3,13 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
     azurerm  = ">= 2.8, < 4.0"
-    template = "~> 2.2"
     null     = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }
   }
 }
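The ct constraint rises from `~> 0.9` to `~> 0.11` in the Flatcar modules because poseidon/ct v0.11 appears to be the first release that accepts Butane configs with `variant: flatcar`. Note that `~> 0.9` already permits 0.11.x (it allows any 0.x at or above 0.9), which is presumably why the bare-metal modules below leave their constraint unchanged; raising it here just makes the minimum explicit. A sketch of the updated block, after which `terraform init -upgrade` fetches the newer provider:

```tf
terraform {
  required_providers {
    ct = {
      # "~> 0.11" allows >= 0.11.0 and < 1.0.0
      source  = "poseidon/ct"
      version = "~> 0.11"
    }
  }
}
```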

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: docker.service
@@ -27,7 +28,7 @@ systemd:
       After=docker.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -92,7 +93,7 @@ systemd:
       [Unit]
       Description=Delete Kubernetes node on shutdown
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       Type=oneshot
       RemainAfterExit=true
       ExecStart=/bin/true
@@ -102,13 +103,11 @@ systemd:
 storage:
   files:
     - path: /etc/kubernetes/kubeconfig
-      filesystem: root
       mode: 0644
       contents:
         inline: |
          ${kubeconfig}
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |

View File

@@ -3,12 +3,10 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
     azurerm = ">= 2.8, < 4.0"
-    template = "~> 2.2"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }
   }
 }

View File

@@ -14,7 +14,7 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   # instance name prefix for instances in the set
   computer_name_prefix   = "${var.name}-worker"
   single_placement_group = false
-  custom_data            = base64encode(data.ct_config.worker-ignition.rendered)
+  custom_data            = base64encode(data.ct_config.worker.rendered)

   # storage
   os_disk {
@@ -88,24 +88,16 @@ resource "azurerm_monitor_autoscale_setting" "workers" {
   }
 }

-# Worker Ignition configs
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
-  strict   = true
-  snippets = var.snippets
-}
-
-# Worker Container Linux configs
-data "template_file" "worker-config" {
-  template = file("${path.module}/cl/worker.yaml")
-  vars = {
+# Flatcar Linux worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     kubeconfig             = indent(10, var.kubeconfig)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
     node_labels            = join(",", var.node_labels)
     node_taints            = join(",", var.node_taints)
-  }
+  })
+  strict   = true
+  snippets = var.snippets
 }

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.24.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"

   cluster_name = var.cluster_name
   api_servers  = [var.k8s_domain_name]

View File

@@ -52,7 +52,7 @@ systemd:
       Description=Kubelet (System Container)
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -126,7 +126,7 @@ systemd:
       Type=oneshot
       RemainAfterExit=true
       WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=-/usr/bin/podman rm bootstrap
       ExecStart=/usr/bin/podman run --name bootstrap \
        --network host \
@@ -229,7 +229,6 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:

View File

@@ -25,7 +25,7 @@ systemd:
       Description=Kubelet (System Container)
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -127,7 +127,6 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:

View File

@@ -1,34 +1,26 @@
 locals {
   remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
   remote_initrd = [
-    "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
-    "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
+    "--name main https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
   ]
   remote_args = [
-    "ip=dhcp",
-    "rd.neednet=1",
+    "initrd=main",
+    "coreos.live.rootfs_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
     "coreos.inst.install_dev=${var.install_disk}",
     "coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
-    "coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
-    "console=tty0",
-    "console=ttyS0",
   ]
   cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
   cached_initrd = [
     "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
-    "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img"
   ]
   cached_args = [
-    "ip=dhcp",
-    "rd.neednet=1",
+    "initrd=main",
+    "coreos.live.rootfs_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-rootfs.x86_64.img",
     "coreos.inst.install_dev=${var.install_disk}",
     "coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
-    "coreos.inst.image_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
-    "console=tty0",
-    "console=ttyS0",
   ]

   kernel = var.cached_install ? local.cached_kernel : local.remote_kernel
@@ -46,29 +38,22 @@ resource "matchbox_profile" "controllers" {
   initrd = local.initrd
   args   = concat(local.args, var.kernel_args)

-  raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
+  raw_ignition = data.ct_config.controllers.*.rendered[count.index]
 }

-data "ct_config" "controller-ignitions" {
-  count    = length(var.controllers)
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
-}
-
-data "template_file" "controller-configs" {
+# Fedora CoreOS controllers
+data "ct_config" "controllers" {
   count = length(var.controllers)
-
-  template = file("${path.module}/fcc/controller.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     domain_name            = var.controllers.*.domain[count.index]
     etcd_name              = var.controllers.*.name[count.index]
     etcd_initial_cluster   = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
     cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
     cluster_domain_suffix  = var.cluster_domain_suffix
     ssh_authorized_key     = var.ssh_authorized_key
-  }
+  })
+  strict   = true
+  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
 }

 // Fedora CoreOS worker profile
@@ -80,28 +65,20 @@ resource "matchbox_profile" "workers" {
   initrd = local.initrd
   args   = concat(local.args, var.kernel_args)

-  raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
+  raw_ignition = data.ct_config.workers.*.rendered[count.index]
 }

-data "ct_config" "worker-ignitions" {
-  count    = length(var.workers)
-  content  = data.template_file.worker-configs.*.rendered[count.index]
-  strict   = true
-  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
-}
-
-data "template_file" "worker-configs" {
+# Fedora CoreOS workers
+data "ct_config" "workers" {
   count = length(var.workers)
-
-  template = file("${path.module}/fcc/worker.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     domain_name            = var.workers.*.domain[count.index]
     cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
     cluster_domain_suffix  = var.cluster_domain_suffix
     ssh_authorized_key     = var.ssh_authorized_key
     node_labels            = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
     node_taints            = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
-  }
+  })
+  strict   = true
+  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
 }
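On bare-metal Fedora CoreOS, the PXE boot changes shape: the rootfs image is no longer appended as a second initrd. Instead the initramfs is named (`initrd=main`) and the live system fetches the root filesystem at runtime via `coreos.live.rootfs_url`, while the separate `coreos.inst.image_url` metal-image argument goes away. The hard-coded `ip=dhcp`, `rd.neednet=1`, and console arguments are also dropped from the locals, so clusters that relied on them would pass equivalents through `kernel_args`. A hypothetical sketch in the docs' module style (cluster configuration elided):

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.24.4"

  # ... required cluster configuration elided ...

  # re-add args that used to be baked into the PXE profile, if needed
  kernel_args = [
    "console=tty0",
    "console=ttyS0",
  ]
}
```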

View File

@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
     null     = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"
     }
     matchbox = {
       source  = "poseidon/matchbox"
       version = "~> 0.5.0"

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.24.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"

   cluster_name = var.cluster_name
   api_servers  = [var.k8s_domain_name]

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service
@@ -63,7 +64,7 @@ systemd:
       After=docker.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -126,7 +127,7 @@ systemd:
       Type=oneshot
       RemainAfterExit=true
       WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStart=/usr/bin/docker run \
        -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
        -v /opt/bootstrap/assets:/assets:ro \
@@ -139,21 +140,17 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
     - path: /etc/hostname
-      filesystem: root
       mode: 0644
       contents:
         inline:
          ${domain_name}
     - path: /opt/bootstrap/layout
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -176,7 +173,6 @@ storage:
          mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -191,13 +187,11 @@ storage:
          sleep 5
          done
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |
          fs.inotify.max_user_watches=16184
     - path: /etc/etcd/etcd.env
-      filesystem: root
       mode: 0644
       contents:
         inline: |

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: installer.service
@@ -25,12 +26,11 @@ systemd:
 storage:
   files:
     - path: /opt/installer
-      filesystem: root
       mode: 0500
       contents:
         inline: |
          #!/bin/bash -ex
-          curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
+          curl --retry 10 "${ignition_endpoint}?mac=${mac}&os=installed" -o ignition.json
          flatcar-install \
           -d ${install_disk} \
           -C ${os_channel} \
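Matchbox's Go-template rendering (`{{.request.raw_query}}`) only applies when matchbox serves a Container Linux Config; now that the install profile ships pre-rendered Ignition, the machine identity must be baked in at Terraform render time instead, via an explicit per-machine `mac` variable (see the profiles below). A standalone sketch of the substitution, with a hypothetical endpoint and MAC address:

```tf
locals {
  # hypothetical values; in profiles.tf the mac comes from
  # concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
  ignition_endpoint = "http://matchbox.example.com:8080/ignition"
  mac               = "52:54:00:a1:9c:ae"

  # the curl line the installer script ends up running
  fetch_cmd = "curl --retry 10 \"${local.ignition_endpoint}?mac=${local.mac}&os=installed\" -o ignition.json"
}

output "fetch_cmd" {
  value = local.fetch_cmd
}
```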

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: docker.service
@@ -35,7 +36,7 @@ systemd:
       After=docker.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -99,17 +100,14 @@ systemd:
 storage:
   directories:
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
     - path: /etc/hostname
-      filesystem: root
       mode: 0644
       contents:
         inline:
          ${domain_name}
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |

View File

@@ -18,12 +18,10 @@ resource "matchbox_profile" "flatcar-install" {
     "initrd=flatcar_production_pxe_image.cpio.gz",
     "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
     "flatcar.first_boot=yes",
-    "console=tty0",
-    "console=ttyS0",
     var.kernel_args,
   ])

-  container_linux_config = data.template_file.install-configs.*.rendered[count.index]
+  raw_ignition = data.ct_config.install.*.rendered[count.index]
 }

 // Flatcar Linux Install profile (from matchbox /assets cache)
@@ -42,101 +40,84 @@ resource "matchbox_profile" "cached-flatcar-install" {
     "initrd=flatcar_production_pxe_image.cpio.gz",
     "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
     "flatcar.first_boot=yes",
-    "console=tty0",
-    "console=ttyS0",
     var.kernel_args,
   ])

-  container_linux_config = data.template_file.cached-install-configs.*.rendered[count.index]
+  raw_ignition = data.ct_config.cached-install.*.rendered[count.index]
 }

-data "template_file" "install-configs" {
+# Flatcar Linux install
+data "ct_config" "install" {
   count = length(var.controllers) + length(var.workers)
-
-  template = file("${path.module}/cl/install.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/install.yaml", {
     os_channel         = local.channel
     os_version         = var.os_version
     ignition_endpoint  = format("%s/ignition", var.matchbox_http_endpoint)
+    mac                = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
     install_disk       = var.install_disk
     ssh_authorized_key = var.ssh_authorized_key
     # only cached profile adds -b baseurl
     baseurl_flag = ""
-  }
+  })
+  strict = true
 }

-data "template_file" "cached-install-configs" {
+# Flatcar Linux cached install
+data "ct_config" "cached-install" {
   count = length(var.controllers) + length(var.workers)
-
-  template = file("${path.module}/cl/install.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/install.yaml", {
     os_channel         = local.channel
     os_version         = var.os_version
     ignition_endpoint  = format("%s/ignition", var.matchbox_http_endpoint)
+    mac                = concat(var.controllers.*.mac, var.workers.*.mac)[count.index]
     install_disk       = var.install_disk
     ssh_authorized_key = var.ssh_authorized_key
     # profile uses -b baseurl to install from matchbox cache
     baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/flatcar"
-  }
+  })
+  strict = true
 }

 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
   count = length(var.controllers)
   name  = format("%s-controller-%s", var.cluster_name, var.controllers.*.name[count.index])
-  raw_ignition = data.ct_config.controller-ignitions.*.rendered[count.index]
+  raw_ignition = data.ct_config.controllers.*.rendered[count.index]
 }

-data "ct_config" "controller-ignitions" {
-  count    = length(var.controllers)
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
-}
-
-data "template_file" "controller-configs" {
+# Flatcar Linux controllers
+data "ct_config" "controllers" {
   count = length(var.controllers)
-
-  template = file("${path.module}/cl/controller.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     domain_name            = var.controllers.*.domain[count.index]
     etcd_name              = var.controllers.*.name[count.index]
     etcd_initial_cluster   = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
     cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
     cluster_domain_suffix  = var.cluster_domain_suffix
     ssh_authorized_key     = var.ssh_authorized_key
-  }
+  })
+  strict   = true
+  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
 }

 // Kubernetes Worker profiles
 resource "matchbox_profile" "workers" {
   count = length(var.workers)
   name  = format("%s-worker-%s", var.cluster_name, var.workers.*.name[count.index])
-  raw_ignition = data.ct_config.worker-ignitions.*.rendered[count.index]
+  raw_ignition = data.ct_config.workers.*.rendered[count.index]
 }

-data "ct_config" "worker-ignitions" {
-  count    = length(var.workers)
-  content  = data.template_file.worker-configs.*.rendered[count.index]
-  strict   = true
-  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
-}
-
-data "template_file" "worker-configs" {
+# Flatcar Linux workers
+data "ct_config" "workers" {
   count = length(var.workers)
-
-  template = file("${path.module}/cl/worker.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/worker.yaml", {
     domain_name            = var.workers.*.domain[count.index]
     cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
     cluster_domain_suffix  = var.cluster_domain_suffix
     ssh_authorized_key     = var.ssh_authorized_key
     node_labels            = join(",", lookup(var.worker_node_labels, var.workers.*.name[count.index], []))
     node_taints            = join(",", lookup(var.worker_node_taints, var.workers.*.name[count.index], []))
-  }
+  })
+  strict   = true
+  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
 }

View File

@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
     null     = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"
     }
     matchbox = {
       source  = "poseidon/matchbox"
       version = "~> 0.5.0"

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.24.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -54,7 +54,7 @@ systemd:
       After=afterburn.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       EnvironmentFile=/run/metadata/afterburn
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -136,7 +136,7 @@ systemd:
        --volume /opt/bootstrap/assets:/assets:ro,Z \
        --volume /opt/bootstrap/apply:/apply:ro,Z \
        --entrypoint=/apply \
-        quay.io/poseidon/kubelet:v1.24.3
+        quay.io/poseidon/kubelet:v1.24.4
       ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
       ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -226,7 +226,6 @@ storage:
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:

View File

@@ -28,7 +28,7 @@ systemd:
       After=afterburn.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       EnvironmentFile=/run/metadata/afterburn
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -99,7 +99,7 @@ systemd:
       [Unit]
       Description=Delete Kubernetes node on shutdown
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       Type=oneshot
       RemainAfterExit=true
       ExecStart=/bin/true
@@ -133,7 +133,6 @@ storage:
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
-    - path: /etc/fedora-coreos/iptables-legacy.stamp
     - path: /etc/containerd/config.toml
       overwrite: true
       contents:
contents: contents:

View File

@@ -41,11 +41,11 @@ resource "digitalocean_droplet" "controllers" {
   size = var.controller_type

   # network
   vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data = data.ct_config.controllers.*.rendered[count.index]
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -62,39 +62,20 @@ resource "digitalocean_tag" "controllers" {
   name = "${var.cluster_name}-controller"
 }

-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Fedora CoreOS configs
-data "template_file" "controller-configs" {
+# Fedora CoreOS controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/fcc/controller.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}

View File

@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
     null     = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
       version = "~> 0.9"
     }
     digitalocean = {
       source  = "digitalocean/digitalocean"
       version = ">= 2.12, < 3.0"

View File

@@ -37,11 +37,11 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
   vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = data.ct_config.worker.rendered
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -58,20 +58,12 @@ resource "digitalocean_tag" "workers" {
   name = "${var.cluster_name}-worker"
 }

-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
+# Fedora CoreOS worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
+    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
+  })
   strict   = true
   snippets = var.worker_snippets
 }
-
-# Worker Fedora CoreOS config
-data "template_file" "worker-config" {
-  template = file("${path.module}/fcc/worker.yaml")
-  vars = {
-    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-    cluster_domain_suffix  = var.cluster_domain_suffix
-  }
-}

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.24.3 (upstream)
+* Kubernetes v1.24.4 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=77981d7fd420061506a1529563d551f904fb4849"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: etcd-member.service
@@ -65,7 +66,7 @@ systemd:
       After=coreos-metadata.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       EnvironmentFile=/run/metadata/coreos
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -129,7 +130,7 @@ systemd:
       Type=oneshot
       RemainAfterExit=true
       WorkingDirectory=/opt/bootstrap
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       ExecStart=/usr/bin/docker run \
        -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
        -v /opt/bootstrap/assets:/assets:ro \
@@ -142,15 +143,12 @@ systemd:
 storage:
   directories:
     - path: /var/lib/etcd
-      filesystem: root
       mode: 0700
       overwrite: true
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
     - path: /opt/bootstrap/layout
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -173,7 +171,6 @@ storage:
          mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
-      filesystem: root
       mode: 0544
       contents:
         inline: |
@@ -188,13 +185,11 @@ storage:
          sleep 5
          done
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |
          fs.inotify.max_user_watches=16184
     - path: /etc/etcd/etcd.env
-      filesystem: root
       mode: 0644
       contents:
         inline: |

View File

@@ -1,4 +1,5 @@
----
+variant: flatcar
+version: 1.0.0
 systemd:
   units:
     - name: docker.service
@@ -37,7 +38,7 @@ systemd:
       After=coreos-metadata.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       EnvironmentFile=/run/metadata/coreos
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -98,7 +99,7 @@ systemd:
       [Unit]
       Description=Delete Kubernetes node on shutdown
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.3
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
       Type=oneshot
       RemainAfterExit=true
       ExecStart=/bin/true
@@ -108,11 +109,9 @@ systemd:
 storage:
   directories:
     - path: /etc/kubernetes
-      filesystem: root
       mode: 0755
   files:
     - path: /etc/sysctl.d/max-user-watches.conf
-      filesystem: root
       mode: 0644
       contents:
         inline: |

View File

@@ -46,11 +46,11 @@ resource "digitalocean_droplet" "controllers" {
   size = var.controller_type

   # network
   vpc_uuid = digitalocean_vpc.network.id
   # TODO: Only official DigitalOcean images support IPv6
   ipv6 = false

-  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
+  user_data = data.ct_config.controllers.*.rendered[count.index]
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -67,39 +67,20 @@ resource "digitalocean_tag" "controllers" {
   name = "${var.cluster_name}-controller"
 }

-# Controller Ignition configs
-data "ct_config" "controller-ignitions" {
-  count    = var.controller_count
-  content  = data.template_file.controller-configs.*.rendered[count.index]
-  strict   = true
-  snippets = var.controller_snippets
-}
-
-# Controller Container Linux configs
-data "template_file" "controller-configs" {
+# Flatcar Linux controllers
+data "ct_config" "controllers" {
   count = var.controller_count
-
-  template = file("${path.module}/cl/controller.yaml")
-  vars = {
+  content = templatefile("${path.module}/butane/controller.yaml", {
     # Cannot use cyclic dependencies on controllers or their DNS records
     etcd_name   = "etcd${count.index}"
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+    etcd_initial_cluster = join(",", [
+      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
+    ])
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
-  }
+  })
+  strict   = true
+  snippets = var.controller_snippets
 }
-
-data "template_file" "etcds" {
-  count    = var.controller_count
-  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
-  vars = {
-    index        = count.index
-    cluster_name = var.cluster_name
-    dns_zone     = var.dns_zone
-  }
-}

View File

@@ -3,14 +3,11 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    template = "~> 2.2"
     null     = ">= 2.1"
     ct = {
       source  = "poseidon/ct"
-      version = "~> 0.9"
+      version = "~> 0.11"
     }
     digitalocean = {
       source  = "digitalocean/digitalocean"
       version = ">= 2.12, < 3.0"

View File

@@ -35,11 +35,11 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
   vpc_uuid = digitalocean_vpc.network.id
   # only official DigitalOcean images support IPv6
   ipv6 = local.is_official_image

-  user_data = data.ct_config.worker-ignition.rendered
+  user_data = data.ct_config.worker.rendered
   ssh_keys  = var.ssh_fingerprints

   tags = [
@@ -56,20 +56,12 @@ resource "digitalocean_tag" "workers" {
   name = "${var.cluster_name}-worker"
 }

-# Worker Ignition config
-data "ct_config" "worker-ignition" {
-  content  = data.template_file.worker-config.rendered
+# Flatcar Linux worker
+data "ct_config" "worker" {
+  content = templatefile("${path.module}/butane/worker.yaml", {
+    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
+  })
   strict   = true
   snippets = var.worker_snippets
 }
-
-# Worker Container Linux config
-data "template_file" "worker-config" {
-  template = file("${path.module}/cl/worker.yaml")
-  vars = {
-    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
-    cluster_domain_suffix  = var.cluster_domain_suffix
-  }
-}

docs/CNAME Normal file
View File

@ -0,0 +1 @@
typhoon.psdn.io

View File

@ -13,7 +13,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
@ -38,7 +38,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
@ -64,9 +64,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
``` ```
$ kubectl get nodes -o wide $ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-21-119 Ready <none> 77s v1.24.3 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-21-119 Ready <none> 77s v1.24.4 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-32-166 Ready <none> 80s v1.24.3 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-32-166 Ready <none> 80s v1.24.4 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
ip-10-0-5-79 Ready <none> 77s v1.24.3 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8 ip-10-0-5-79 Ready <none> 77s v1.24.4 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
``` ```
## Hybrid ## Hybrid
@ -77,7 +77,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
@ -100,7 +100,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf ```tf
module "gravitas" { module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "gravitas" cluster_name = "gravitas"
@ -123,7 +123,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf ```tf
module "gravitas-arm64" { module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# AWS # AWS
vpc_id = module.gravitas.vpc_id vpc_id = module.gravitas.vpc_id
@ -147,7 +147,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
```tf ```tf
module "gravitas-arm64" { module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.4"
# AWS # AWS
vpc_id = module.gravitas.vpc_id vpc_id = module.gravitas.vpc_id
@ -172,9 +172,9 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
``` ```
$ kubectl get nodes -o wide $ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-1-73 Ready <none> 111m v1.24.3 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8 ip-10-0-1-73 Ready <none> 111m v1.24.4 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-22-79... Ready <none> 111m v1.24.3 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8 ip-10-0-22-79... Ready <none> 111m v1.24.4 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
ip-10-0-24-130 Ready <none> 111m v1.24.3 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8 ip-10-0-24-130 Ready <none> 111m v1.24.4 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
ip-10-0-39-19 Ready <none> 111m v1.24.3 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8 ip-10-0-39-19 Ready <none> 111m v1.24.4 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
``` ```

View File

@ -12,9 +12,11 @@ Clusters are kept to a minimal Kubernetes control plane by offering components l
## Hosts ## Hosts
Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Fedora CoreOS uses a [Butane Config](https://coreos.github.io/butane/specs/) and Flatcar Linux uses a [Container Linux Config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md) (CLC). These define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, and users. ### Background
Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept Butane or Container Linux Configs to validate and additively merge into instance declarations. This allows advanced host customization and experimentation. Typhoon uses the [Ignition](https://github.com/coreos/ignition) system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Human-friendly [Butane Configs](https://coreos.github.io/butane/specs/) define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, users, and more before being converted to Ignition.
Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the **snippets** feature to accept custom Butane Configs that are merged with instance declarations. This allows advanced host customization and experimentation.
!!! note !!! note
Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only. Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only.
@ -25,127 +27,104 @@ Controller and worker instances form a minimal and secure Kubernetes cluster on
!!! danger !!! danger
Edits to snippets for controller instances can (correctly) cause Terraform to observe a diff (if not otherwise suppressed) and propose destroying and recreating controller(s). Recognize that this is destructive since controllers run etcd and are stateful. See [blue/green](/topics/maintenance/#upgrades) clusters. Edits to snippets for controller instances can (correctly) cause Terraform to observe a diff (if not otherwise suppressed) and propose destroying and recreating controller(s). Recognize that this is destructive since controllers run etcd and are stateful. See [blue/green](/topics/maintenance/#upgrades) clusters.
### Fedora CoreOS ### Usage
!!! note
Fedora CoreOS snippets require `terraform-provider-ct` v0.5+
Define a Butane Config ([docs](https://coreos.github.io/butane/specs/), [config](https://github.com/coreos/butane/blob/main/docs/config-fcos-v1_4.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired. Define a Butane Config ([docs](https://coreos.github.io/butane/specs/), [config](https://github.com/coreos/butane/blob/main/docs/config-fcos-v1_4.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired.
For example, ensure an `/opt/hello` file is created with permissions 0644. For example, ensure an `/opt/hello` file is created with permissions 0644 before boot.
```yaml === "Fedora CoreOS"
# custom-files
variant: fcos
version: 1.4.0
storage:
files:
- path: /opt/hello
contents:
inline: |
Hello World
mode: 0644
```
Reference the FCC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster) or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables. ```yaml
# custom-files.yaml
variant: fcos
version: 1.4.0
storage:
files:
- path: /opt/hello
contents:
inline: |
Hello World
mode: 0644
```
```tf === "Flatcar Linux"
module "nemo" {
...
controller_count = 1 ```yaml
worker_count = 2 # custom-files.yaml
controller_snippets = [ variant: flatcar
file("./custom-files"), version: 1.0.0
file("./custom-units"), storage:
] files:
worker_snippets = [ - path: /opt/hello
file("./custom-files"), contents:
file("./custom-units")", inline: |
] Hello World
... mode: 0644
} ```
```
On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different FCCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets. Or ensure a systemd unit `hello.service` is created.
```tf === "Fedora CoreOS"
module "mercury" {
...
snippets = {
"node2" = [file("./units/hello.yaml")]
"node3" = [
file("./units/world.yaml"),
file("./units/hello.yaml"),
]
}
...
}
```
### Flatcar Linux ```yaml
# custom-units.yaml
Define a Container Linux Config (CLC) ([config](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/configuration.md), [examples](https://github.com/coreos/container-linux-config-transpiler/blob/master/doc/examples.md)) in version control near your Terraform workspace directory (e.g. perhaps in a `snippets` subdirectory). You may organize snippets into multiple files, if desired. variant: fcos
version: 1.4.0
For example, ensure an `/opt/hello` file is created with permissions 0644. systemd:
units:
```yaml - name: hello.service
# custom-files enabled: true
storage:
files:
- path: /opt/hello
filesystem: root
contents:
inline: |
Hello World
mode: 0644
```
Or ensure a systemd unit `hello.service` is created and a dropin `50-etcd-cluster.conf` is added for `etcd-member.service`.
```yaml
# custom-units
systemd:
units:
- name: hello.service
enable: true
contents: |
[Unit]
Description=Hello World
[Service]
Type=oneshot
ExecStart=/usr/bin/echo Hello World!
[Install]
WantedBy=multi-user.target
- name: etcd-member.service
enable: true
dropins:
- name: 50-etcd-cluster.conf
contents: | contents: |
Environment="ETCD_LOG_PACKAGE_LEVELS=etcdserver=WARNING,security=DEBUG" [Unit]
``` Description=Hello World
[Service]
Type=oneshot
ExecStart=/usr/bin/echo Hello World!
[Install]
WantedBy=multi-user.target
```
=== "Flatcar Linux"
```yaml
# custom-units.yaml
variant: flatcar
version: 1.0.0
systemd:
units:
- name: hello.service
enabled: true
contents: |
[Unit]
Description=Hello World
[Service]
Type=oneshot
ExecStart=/usr/bin/echo Hello World!
[Install]
WantedBy=multi-user.target
```
Reference the Butane contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/fedora-coreos/aws/#cluster), [Azure](/fedora-coreos/azure/#cluster), [DigitalOcean](/fedora-coreos/digital-ocean/#cluster), or [Google Cloud](/fedora-coreos/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
Reference the CLC contents by location (e.g. `file("./custom-units.yaml")`). On [AWS](/flatcar-linux/aws/#cluster), [Azure](/flatcar-linux/azure/#cluster), [DigitalOcean](/flatcar-linux/digital-ocean/#cluster), or [Google Cloud](/flatcar-linux/google-cloud/#cluster) extend the `controller_snippets` or `worker_snippets` list variables.
```tf ```tf
module "nemo" { module "nemo" {
... ...
controller_count = 1
worker_count = 2 worker_count = 2
controller_snippets = [ controller_snippets = [
file("./custom-files"), file("./custom-files.yaml"),
file("./custom-units"), file("./custom-units.yaml"),
] ]
worker_snippets = [ worker_snippets = [
file("./custom-files"), file("./custom-files.yaml"),
file("./custom-units")", file("./custom-units.yaml")",
] ]
... ...
} }
``` ```
On [Bare-Metal](/flatcar-linux/bare-metal/#cluster), different CLCs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets. On [Bare-Metal](/fedora-coreos/bare-metal/#cluster), different Butane configs may be used for each node (since hardware may be heterogeneous). Extend the `snippets` map variable by mapping a controller or worker name key to a list of snippets.
```tf ```tf
module "mercury" { module "mercury" {

View File

@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
```tf ```tf
module "yavin-pool" { module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
```tf ```tf
module "yavin-pool" { module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"

View File

@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf ```tf
module "tempest-worker-pool" { module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# AWS # AWS
vpc_id = module.tempest.vpc_id vpc_id = module.tempest.vpc_id
@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
```tf ```tf
module "tempest-worker-pool" { module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.24.4"
# AWS # AWS
vpc_id = module.tempest.vpc_id vpc_id = module.tempest.vpc_id
@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf ```tf
module "ramius-worker-pool" { module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# Azure # Azure
region = module.ramius.region region = module.ramius.region
@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
```tf ```tf
module "ramius-worker-pool" { module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.24.4"
# Azure # Azure
region = module.ramius.region region = module.ramius.region
@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf ```tf
module "yavin-worker-pool" { module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.24.4"
# Google Cloud # Google Cloud
region = "europe-west2" region = "europe-west2"
@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
```tf ```tf
module "yavin-worker-pool" { module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.24.4"
# Google Cloud # Google Cloud
region = "europe-west2" region = "europe-west2"
@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
``` ```
$ kubectl get nodes $ kubectl get nodes
NAME STATUS AGE VERSION NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.24.3 yavin-controller-0.c.example-com.internal Ready 6m v1.24.4
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.24.3 yavin-worker-jrbf.c.example-com.internal Ready 5m v1.24.4
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.24.3 yavin-worker-mzdm.c.example-com.internal Ready 5m v1.24.4
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.24.3 yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.24.4
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.24.3 yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.24.4
``` ```
### Variables ### Variables

View File

@ -9,8 +9,8 @@ Typhoon supports [Fedora CoreOS](https://getfedora.org/coreos/) and [Flatcar Lin
Together, they diversify Typhoon to support a range of container technologies. Together, they diversify Typhoon to support a range of container technologies.
* Fedora CoreOS: rpm-ostree, podman, moby * Fedora CoreOS: rpm-ostree, podman, containerd
* Flatcar Linux: Gentoo core, rkt-fly, docker * Flatcar Linux: Gentoo core, docker, containerd
## Host Properties ## Host Properties
@ -19,7 +19,7 @@ Together, they diversify Typhoon to support a range of container technologies.
| Kernel | ~5.10.x | ~5.16.x | | Kernel | ~5.10.x | ~5.16.x |
| systemd | 249 | 249 | | systemd | 249 | 249 |
| Username | core | core | | Username | core | core |
| Ignition system | Ignition v2.x spec | Ignition v3.x spec | | Ignition system | Ignition v3.x spec | Ignition v3.x spec |
| storage driver | overlay2 (extfs) | overlay2 (xfs) | | storage driver | overlay2 (extfs) | overlay2 (xfs) |
| logging driver | json-file | journald | | logging driver | json-file | journald |
| cgroup driver | systemd | systemd | | cgroup driver | systemd | systemd |

View File

@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on AWS with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -51,11 +51,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
aws = { aws = {
source = "hashicorp/aws" source = "hashicorp/aws"
version = "4.22.0" version = "4.26.0"
} }
} }
} }
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
@ -111,7 +111,7 @@ Plan the resources to be created.
```sh ```sh
$ terraform plan $ terraform plan
Plan: 81 to add, 0 to change, 0 to destroy. Plan: 109 to add, 0 to change, 0 to destroy.
``` ```
Apply the changes to create the cluster. Apply the changes to create the cluster.
@ -123,7 +123,7 @@ module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed) module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143) module.tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)
Apply complete! Resources: 98 added, 0 changed, 0 destroyed. Apply complete! Resources: 109 added, 0 changed, 0 destroyed.
``` ```
In 4-8 minutes, the Kubernetes cluster will be ready. In 4-8 minutes, the Kubernetes cluster will be ready.
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.24.3 ip-10-0-3-155 Ready <none> 10m v1.24.4
ip-10-0-26-65 Ready <none> 10m v1.24.3 ip-10-0-26-65 Ready <none> 10m v1.24.4
ip-10-0-41-21 Ready <none> 10m v1.24.3 ip-10-0-41-21 Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Azure with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on Azure with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -48,11 +48,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
azurerm = { azurerm = {
source = "hashicorp/azurerm" source = "hashicorp/azurerm"
version = "3.14.0" version = "3.18.0"
} }
} }
} }
@ -64,18 +64,18 @@ Additional configuration options are described in the `azurerm` provider [docs](
Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure allows custom images to be uploaded to a storage account bucket and imported. Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure allows custom images to be uploaded to a storage account bucket and imported.
[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow). [Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image, decompress it, and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).
``` ```
xz -d fedora-coreos-31.20200323.3.2-azure.x86_64.vhd.xz xz -d fedora-coreos-36.20220716.3.1-azure.x86_64.vhd.xz
``` ```
Create an Azure disk (note disk ID) and create an Azure image from it (note image ID). Create an Azure disk (note disk ID) and create an Azure image from it (note image ID).
``` ```
az disk create --name fedora-coreos-31.20200323.3.2 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-31.20200323.3.2-azure.x86_64.vhd az disk create --name fedora-coreos-36.20220716.3.1 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-36.20220716.3.1-azure.x86_64.vhd
az image create --name fedora-coreos-31.20200323.3.2 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-31.20200323.3.2 az image create --name fedora-coreos-36.20220716.3.1 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-36.20220716.3.1
``` ```
Set the [os_image](#variables) in the next step. Set the [os_image](#variables) in the next step.
@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.24.4"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
@ -95,7 +95,7 @@ module "ramius" {
dns_zone_group = "example-group" dns_zone_group = "example-group"
# configuration # configuration
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2" os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..." ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
# optional # optional
@ -161,9 +161,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.24.3 ramius-controller-0 Ready <none> 24m v1.24.4
ramius-worker-000001 Ready <none> 25m v1.24.3 ramius-worker-000001 Ready <none> 25m v1.24.4
ramius-worker-000002 Ready <none> 24m v1.24.3 ramius-worker-000002 Ready <none> 24m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.24.3 cluster on bare-metal with Fedora CoreOS. In this tutorial, we'll network boot and provision a Kubernetes v1.24.4 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -138,11 +138,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
matchbox = { matchbox = {
source = "poseidon/matchbox" source = "poseidon/matchbox"
version = "0.5.0" version = "0.5.2"
} }
} }
} }
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.24.4"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -283,9 +283,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.24.3 node1.example.com Ready <none> 10m v1.24.4
node2.example.com Ready <none> 10m v1.24.3 node2.example.com Ready <none> 10m v1.24.4
node3.example.com Ready <none> 10m v1.24.3 node3.example.com Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on DigitalOcean with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on DigitalOcean with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -51,11 +51,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
digitalocean = { digitalocean = {
source = "digitalocean/digitalocean" source = "digitalocean/digitalocean"
version = "2.21.0" version = "2.22.1"
} }
} }
} }
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.24.4"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.24.3 10.132.110.130 Ready <none> 10m v1.24.4
10.132.115.81 Ready <none> 10m v1.24.3 10.132.115.81 Ready <none> 10m v1.24.4
10.132.124.107 Ready <none> 10m v1.24.3 10.132.124.107 Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Google Compute Engine with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -52,11 +52,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
google = { google = {
source = "hashicorp/google" source = "hashicorp/google"
version = "4.29.0" version = "4.32.0"
} }
} }
} }
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=development-sha" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -112,7 +112,7 @@ Plan the resources to be created.
```sh ```sh
$ terraform plan $ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy. Plan: 78 to add, 0 to change, 0 to destroy.
``` ```
Apply the changes to create the cluster. Apply the changes to create the cluster.
@ -125,7 +125,7 @@ module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed) module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358) module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)
Apply complete! Resources: 62 added, 0 changed, 0 destroyed. Apply complete! Resources: 78 added, 0 changed, 0 destroyed.
``` ```
In 4-8 minutes, the Kubernetes cluster will be ready. In 4-8 minutes, the Kubernetes cluster will be ready.
@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.4
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.4
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on AWS with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on AWS with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -51,11 +51,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
aws = { aws = {
source = "hashicorp/aws" source = "hashicorp/aws"
version = "4.22.0" version = "4.26.0"
} }
} }
} }
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.24.4"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
@ -111,7 +111,7 @@ Plan the resources to be created.
```sh ```sh
$ terraform plan $ terraform plan
Plan: 80 to add, 0 to change, 0 to destroy. Plan: 109 to add, 0 to change, 0 to destroy.
``` ```
Apply the changes to create the cluster. Apply the changes to create the cluster.
@ -123,7 +123,7 @@ module.tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed) module.tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143) module.tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143)
Apply complete! Resources: 98 added, 0 changed, 0 destroyed. Apply complete! Resources: 109 added, 0 changed, 0 destroyed.
``` ```
In 4-8 minutes, the Kubernetes cluster will be ready. In 4-8 minutes, the Kubernetes cluster will be ready.
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.24.3 ip-10-0-3-155 Ready <none> 10m v1.24.4
ip-10-0-26-65 Ready <none> 10m v1.24.3 ip-10-0-26-65 Ready <none> 10m v1.24.4
ip-10-0-41-21 Ready <none> 10m v1.24.3 ip-10-0-41-21 Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Azure with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on Azure with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -48,11 +48,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
azurerm = { azurerm = {
source = "hashicorp/azurerm" source = "hashicorp/azurerm"
version = "3.14.0" version = "3.18.0"
} }
} }
} }
@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.24.4"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
@ -149,9 +149,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.24.3 ramius-controller-0 Ready <none> 24m v1.24.4
ramius-worker-000001 Ready <none> 25m v1.24.3 ramius-worker-000001 Ready <none> 25m v1.24.4
ramius-worker-000002 Ready <none> 24m v1.24.3 ramius-worker-000002 Ready <none> 24m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.24.3 cluster on bare-metal with Flatcar Linux. In this tutorial, we'll network boot and provision a Kubernetes v1.24.4 cluster on bare-metal with Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -138,11 +138,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
matchbox = { matchbox = {
source = "poseidon/matchbox" source = "poseidon/matchbox"
version = "0.5.0" version = "0.5.2"
} }
} }
} }
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.4"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -269,10 +269,10 @@ To watch the bootstrap process in detail, SSH to the first controller and journa
``` ```
$ ssh core@node1.example.com $ ssh core@node1.example.com
$ journalctl -f -u bootstrap $ journalctl -f -u bootstrap
rkt[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port? The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
rkt[1750]: Waiting for static pod control plane Waiting for static pod control plane
... ...
rkt[1750]: serviceaccount/calico-node unchanged serviceaccount/calico-node unchanged
systemd[1]: Started Kubernetes control plane. systemd[1]: Started Kubernetes control plane.
``` ```
@ -293,9 +293,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.24.3 node1.example.com Ready <none> 10m v1.24.4
node2.example.com Ready <none> 10m v1.24.3 node2.example.com Ready <none> 10m v1.24.4
node3.example.com Ready <none> 10m v1.24.3 node3.example.com Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on DigitalOcean with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on DigitalOcean with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -51,11 +51,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
digitalocean = { digitalocean = {
source = "digitalocean/digitalocean" source = "digitalocean/digitalocean"
version = "2.21.0" version = "2.22.1"
} }
} }
} }
@ -63,13 +63,13 @@ terraform {
### Flatcar Linux Images ### Flatcar Linux Images
Flatcar Linux publishes DigitalOcean images, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via URLor file. Flatcar Linux publishes DigitalOcean images, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via a URL or file.
[Download](https://www.flatcar-linux.org/releases/) the Flatcar Linux DigitalOcean bin image. Rename the image with the channel and version (to refer to these images over time) and [upload](https://cloud.digitalocean.com/images/custom_images) it as a custom image. Choose a Flatcar Linux [release](https://www.flatcar-linux.org/releases/) from Flatcar's file [server](https://stable.release.flatcar-linux.net/amd64-usr/). Copy the URL to the `flatcar_production_digitalocean_image.bin.bz2`, import it into DigitalOcean, and name it as a custom image. Add a data reference to the image in Terraform:
```tf ```tf
data "digitalocean_image" "flatcar-stable-2303-4-0" { data "digitalocean_image" "flatcar-stable-3227-2-0" {
name = "flatcar-stable-2303.4.0.bin.bz2" name = "flatcar-stable-3227.2.0.bin.bz2"
} }
``` ```
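Then reference the imported image's ID as the module's `os_image` (a sketch of how the data reference is typically consumed):

```tf
module "nemo" {
  # ...
  # Hypothetical wiring of the custom image into the cluster definition.
  os_image = data.digitalocean_image.flatcar-stable-3227-2-0.id
}
```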
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.24.4"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.24.3 10.132.110.130 Ready <none> 10m v1.24.4
10.132.115.81 Ready <none> 10m v1.24.3 10.132.115.81 Ready <none> 10m v1.24.4
10.132.124.107 Ready <none> 10m v1.24.3 10.132.124.107 Ready <none> 10m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.24.3 cluster on Google Compute Engine with Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.24.4 cluster on Google Compute Engine with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -52,11 +52,11 @@ terraform {
required_providers { required_providers {
ct = { ct = {
source = "poseidon/ct" source = "poseidon/ct"
version = "0.10.0" version = "0.11.0"
} }
google = { google = {
source = "hashicorp/google" source = "hashicorp/google"
version = "4.29.0" version = "4.32.0"
} }
} }
} }
@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -112,7 +112,7 @@ Plan the resources to be created.
```sh ```sh
$ terraform plan $ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy. Plan: 78 to add, 0 to change, 0 to destroy.
``` ```
Apply the changes to create the cluster. Apply the changes to create the cluster.
@ -125,7 +125,7 @@ module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed) module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358) module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)
Apply complete! Resources: 62 added, 0 changed, 0 destroyed. Apply complete! Resources: 78 added, 0 changed, 0 destroyed.
``` ```
In 4-8 minutes, the Kubernetes cluster will be ready. In 4-8 minutes, the Kubernetes cluster will be ready.
@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.4
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.4
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.3 (upstream) * Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization * Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization
@ -61,7 +61,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -99,9 +99,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.3 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.24.4
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.24.4
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.3 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.24.4
``` ```
List the pods. List the pods.

View File

@ -13,19 +13,37 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar
``` ```
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
... ...
} }
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.3" source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.24.4"
... ...
} }
``` ```
Master is updated regularly, so it is recommended to [pin](https://www.terraform.io/docs/modules/sources.html) modules to a [release tag](https://github.com/poseidon/typhoon/releases) or [commit](https://github.com/poseidon/typhoon/commits/master) hash. Pinning ensures `terraform get --update` only fetches the desired version. Main is updated regularly, so it is recommended to [pin](https://www.terraform.io/docs/modules/sources.html) modules to a [release tag](https://github.com/poseidon/typhoon/releases) or [commit](https://github.com/poseidon/typhoon/commits/main) hash. Pinning ensures `terraform get --update` only fetches the desired version.
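Pinning to a commit hash uses the same `ref` parameter (a sketch with a placeholder SHA):

```tf
module "yavin" {
  # Pin to an exact commit instead of a tag; replace <commit-sha> with a
  # full commit hash from the repository.
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=<commit-sha>"
  # ...
}
```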
## Upgrades ## Terraform Versions
Typhoon modules support Terraform v0.13.x and higher. Poseidon publishes [providers](/topics/security/#terraform-providers) to the Terraform Provider Registry for automatic install via `terraform init`.
| Typhoon Release | Terraform version |
|-------------------|---------------------|
| v1.21.2 - ? | v0.13.x, v0.14.4+, v0.15.x, v1.0.x |
| v1.21.1 - v1.21.1 | v0.13.x, v0.14.4+, v0.15.x |
| v1.20.2 - v1.21.0 | v0.13.x, v0.14.4+ |
| v1.20.0 - v1.20.2 | v0.13.x |
| v1.18.8 - v1.19.4 | v0.12.26+, v0.13.x |
| v1.15.0 - v1.18.8 | v0.12.x |
| v1.10.3 - v1.15.0 | v0.11.x |
| v1.9.2 - v1.10.2 | v0.10.4+ or v0.11.x |
| v1.7.3 - v1.9.1 | v0.10.x |
| v1.6.4 - v1.7.2 | v0.9.x |
## Cluster Upgrades
Typhoon recommends upgrading clusters using a blue-green replacement strategy and migrating workloads. Typhoon recommends upgrading clusters using a blue-green replacement strategy and migrating workloads.
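In practice, blue/green means defining the replacement cluster as a second module instance, migrating workloads, then deleting the old module (a sketch with hypothetical names):

```tf
# Existing ("blue") cluster stays running during the migration.
module "yavin-blue" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.3"
  # ...
}

# New ("green") cluster on the newer release; migrate workloads, then
# remove the blue module and apply.
module "yavin-green" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.24.4"
  # ...
}
```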
@ -127,9 +145,99 @@ Typhoon supports multi-controller clusters, so it is possible to upgrade a clust
!!! warning !!! warning
Typhoon does not support or document node replacement as an upgrade strategy. It limits Typhoon's ability to make infrastructure and architectural changes between tagged releases. Typhoon does not support or document node replacement as an upgrade strategy. It limits Typhoon's ability to make infrastructure and architectural changes between tagged releases.
### Upgrade terraform-provider-ct ## Node Configuration Updates
The [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin parses, validates, and converts Fedora CoreOS or Flatcar Linux Configs into Ignition user-data for provisioning instances. Since Typhoon v1.12.2+, the plugin can be updated in-place so that on apply, only workers will be replaced. Typhoon worker instance groups (default workers and [worker pools](../advanced/worker-pools.md)) on AWS and Google Cloud gradually rolling replace worker instances when configuration changes are applied.
### AWS
On AWS, worker instances belong to an auto-scaling group. When an auto-scaling group's launch configuration changes, an AWS [Instance Refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) gradually replaces worker instances.
Instance refresh creates surge instances, waits for a warm-up period, then deletes old instances.
```diff
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/VARIANT/kubernetes?ref=VERSION"
# AWS
cluster_name = "tempest"
...
# optional
worker_count = 2
- worker_type = "t3.small"
+ worker_type = "t3a.small"
# change from on-demand to spot
+ worker_price = "0.0309"
# default is 30GB
+ disk_size = 50
# change worker snippets
+ worker_snippets = [
+ file("butane/feature.yaml"),
+ ]
}
```
Applying edits to most worker fields will start an instance refresh:
* `worker_type`
* `disk_*`
* `worker_price` (i.e. spot)
* `worker_target_groups`
* `worker_snippets`
However, changing `os_stream`/`os_channel` or new AMIs becoming available will NOT change the launch configuration or trigger an Instance Refresh. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected terraform diffs for new AMIs.
!!! note
    Before Typhoon v1.24.4, worker nodes only used new launch configurations when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
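A replacement pool might look like the following sketch (the module path follows the worker pools guide; inputs are abbreviated, see the linked docs for the full set):
```tf
module "tempest-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=VERSION"
  # new pool with the desired worker configuration
  name         = "tempest-pool"
  worker_count = 2
  ...
}
```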
### Google Cloud
On Google Cloud, worker instances belong to a [managed instance group](https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups). When a group's instance template changes, a [rolling update](https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups) gradually replaces worker instances.
The rolling update creates surge instances, waits for instances to be healthy, then deletes old instances.
```diff
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/VARIANT/kubernetes?ref=VERSION"

  # Google Cloud
  cluster_name = "yavin"
  ...

  # optional
  worker_count = 2
+ worker_type        = "n2-standard-2"
+ worker_preemptible = true

  # default is 30GB
+ disk_size = 50

  # change worker snippets
+ worker_snippets = [
+   file("butane/feature.yaml"),
+ ]
}
```
Applying edits to most worker fields will start a rolling update:
* `worker_type`
* `disk_*`
* `worker_preemptible` (i.e. spot)
* `worker_snippets`
However, changing `os_stream`/`os_channel` or new compute images becoming available will NOT change the instance template or update instances. This allows Fedora CoreOS or Flatcar Linux to auto-update themselves via reboots and avoids unexpected Terraform diffs for new images.
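Under the hood, the managed instance group's update policy drives the roll-out; in the module it is shaped like this (mirroring the `update_policy` added to the worker managed instance group later in this diff):
```tf
update_policy {
  type                  = "PROACTIVE"
  max_surge_fixed       = 3
  max_unavailable_fixed = 0
  minimal_action        = "REPLACE"
}
```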
!!! note
    Before Typhoon v1.24.4, worker nodes only used new instance templates when replaced manually (or due to failure). If you must change node configuration manually, it's still possible. Create a new [worker pool](../advanced/worker-pools.md), then scale down the old worker pool as desired.
## Upgrade poseidon/ct
The [poseidon/ct](https://github.com/poseidon/terraform-provider-ct) Terraform provider plugin parses, validates, and converts Butane Configs to Ignition user-data for provisioning instances. Since Typhoon v1.12.2+, the plugin can be updated in-place so that on apply, only workers will be replaced.
Update the version of the `ct` plugin in each Terraform working directory. Typhoon clusters managed in the working directory **must** be v1.12.2 or higher.
@ -140,8 +248,8 @@ terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
-     version = "0.10.0"
+     version = "0.11.0"
    }
    ...
  }
@ -155,11 +263,11 @@ terraform init
terraform plan
```
Apply the change. If worker nodes' user-data is changed, workers will be replaced. Rollout happens slightly differently on each platform:
#### AWS
See AWS node [config updates](#aws).
#### Azure
@ -187,24 +295,4 @@ Expect downtime.
#### Google Cloud
See Google Cloud node [config updates](#google-cloud).

View File

@ -81,6 +81,31 @@ Typhoon publishes Terraform providers to the Terraform Registry, GPG signed by 0
| ct | [github](https://github.com/poseidon/terraform-provider-ct) | [poseidon/ct](https://registry.terraform.io/providers/poseidon/ct/latest) |
| matchbox | [github](https://github.com/poseidon/terraform-provider-matchbox) | [poseidon/matchbox](https://registry.terraform.io/providers/poseidon/matchbox/latest) |
## kube-system
| Name | user | hostNet | privileged |
|----------------|--------|---------|------------|
| kube-apiserver | nobody | true | false |
| kube-controller-manager | nobody | true | false |
| kube-scheduler | nobody | true | false |
| coredns | NA | false | false |
| kube-proxy | root | true | true |
| cilium | root | true | true |
| calico | root | true | true |
| flannel | root | true | true |

| Name | priorityClassName |
|-------------------------|-------------------|
| kube-apiserver | system-cluster-critical |
| kube-controller-manager | system-cluster-critical |
| kube-scheduler | system-cluster-critical |
| coredns | system-cluster-critical |
| kube-proxy | system-node-critical |
| cilium | system-node-critical |
| calico | system-node-critical |
| flannel | system-node-critical |
## Disclosures
If you find security issues, please email `security@psdn.io`. If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -75,10 +75,10 @@ resource "google_compute_instance_group" "controllers" {
  )
}

# Health check for kube-apiserver
resource "google_compute_health_check" "apiserver" {
  name        = "${var.cluster_name}-apiserver-health"
  description = "Health check for kube-apiserver"
  timeout_sec        = 5
  check_interval_sec = 5
@ -86,7 +86,7 @@ resource "google_compute_health_check" "apiserver" {
  healthy_threshold   = 1
  unhealthy_threshold = 3
  ssl_health_check {
    port = "6443"
  }
}

View File

@ -1,10 +1,10 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"
  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
  etcd_servers = [for fqdn in google_dns_record_set.etcds.*.name : trimsuffix(fqdn, ".")]
  networking   = var.networking
  network_mtu  = 1440
  pod_cidr     = var.pod_cidr

View File

@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -67,8 +67,8 @@ systemd:
--network host \
--volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/machine-id:/etc/machine-id:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup \
@ -124,7 +124,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.24.4
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -219,7 +219,6 @@ storage:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:
@ -244,4 +243,3 @@ passwd:
    - name: core
      ssh_authorized_keys:
        - ${ssh_authorized_key}

View File

@ -35,7 +35,7 @@ resource "google_compute_instance" "controllers" {
  machine_type = var.controller_type
  metadata = {
    user-data = data.ct_config.controllers.*.rendered[count.index]
  }
  boot_disk {
@ -66,41 +66,22 @@ resource "google_compute_instance" "controllers" {
  }
}

# Fedora CoreOS controllers
data "ct_config" "controllers" {
  count = var.controller_count
  content = templatefile("${path.module}/butane/controller.yaml", {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = join(",", [
      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
    ])
    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  })
  strict   = true
  snippets = var.controller_snippets
}

View File

@ -196,6 +196,24 @@ resource "google_compute_firewall" "allow-ingress" {
  target_tags = ["${var.cluster_name}-worker"]
}

resource "google_compute_firewall" "google-worker-health-checks" {
  name    = "${var.cluster_name}-worker-health"
  network = google_compute_network.network.name

  allow {
    protocol = "tcp"
    ports    = [10256]
  }

  # https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs
  source_ranges = [
    "35.191.0.0/16",
    "130.211.0.0/22",
  ]
  target_tags = ["${var.cluster_name}-worker"]
}
resource "google_compute_firewall" "google-ingress-health-checks" {
  name    = "${var.cluster_name}-ingress-health"
  network = google_compute_network.network.name

View File

@ -3,10 +3,8 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    google = ">= 2.19, < 5.0"
    null   = ">= 2.1"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"

View File

@ -26,7 +26,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -92,7 +92,7 @@ systemd:
[Unit]
Description=Delete Kubernetes node on shutdown
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
@ -131,7 +131,6 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
    - path: /etc/containerd/config.toml
      overwrite: true
      contents:

View File

@ -3,9 +3,7 @@
terraform {
  required_version = ">= 0.13.0, < 2.0.0"
  required_providers {
    google = ">= 2.19, < 5.0"
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.9"

View File

@ -1,6 +1,6 @@
# Managed instance group of workers
resource "google_compute_region_instance_group_manager" "workers" {
  name        = "${var.name}-worker"
  description = "Compute instance group of ${var.name} workers"
  # instance name prefix for instances in the group
@ -11,6 +11,16 @@ resource "google_compute_region_instance_group_manager" "workers" {
    instance_template = google_compute_instance_template.worker.self_link
  }

  # Roll out MIG instance template changes by replacing instances.
  # - Surge to create new instances, then delete old instances.
  # - Replace ensures new Ignition is picked up
  update_policy {
    type                  = "PROACTIVE"
    max_surge_fixed       = 3
    max_unavailable_fixed = 0
    minimal_action        = "REPLACE"
  }
  target_size  = var.worker_count
  target_pools = [google_compute_target_pool.workers.self_link]
@ -23,21 +33,46 @@ resource "google_compute_region_instance_group_manager" "workers" {
    name = "https"
    port = "443"
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.worker.id
    initial_delay_sec = 300
  }
}

# Health check for worker node
resource "google_compute_health_check" "worker" {
  name        = "${var.name}-worker-health"
  description = "Health check for worker node"
  timeout_sec         = 20
  check_interval_sec  = 30
  healthy_threshold   = 1
  unhealthy_threshold = 6
  http_health_check {
    port         = "10256"
    request_path = "/healthz"
  }
}
# Worker instance template
resource "google_compute_instance_template" "worker" {
  name_prefix  = "${var.name}-worker-"
  description  = "${var.name} worker instance template"
  machine_type = var.machine_type

  metadata = {
    user-data = data.ct_config.worker.rendered
  }

  scheduling {
    provisioning_model = var.preemptible ? "SPOT" : "STANDARD"
    preemptible        = var.preemptible
    automatic_restart  = var.preemptible ? false : true
    # Spot instances with termination action DELETE cannot be used with MIGs
    instance_termination_action = var.preemptible ? "STOP" : null
  }
  disk {
@ -49,10 +84,8 @@ resource "google_compute_instance_template" "worker" {
  network_interface {
    network = var.network
    # Ephemeral external IP
    access_config {}
  }

  can_ip_forward = true
@ -72,24 +105,16 @@ resource "google_compute_instance_template" "worker" {
  }
}

# Fedora CoreOS worker
data "ct_config" "worker" {
  content = templatefile("${path.module}/butane/worker.yaml", {
    kubeconfig             = indent(10, var.kubeconfig)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
    node_labels            = join(",", var.node_labels)
    node_taints            = join(",", var.node_taints)
  })
  strict   = true
  snippets = var.snippets
}

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.24.4 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@ -75,10 +75,10 @@ resource "google_compute_instance_group" "controllers" {
  )
}

# Health check for kube-apiserver
resource "google_compute_health_check" "apiserver" {
  name        = "${var.cluster_name}-apiserver-health"
  description = "Health check for kube-apiserver"
  timeout_sec        = 5
  check_interval_sec = 5
@ -86,7 +86,7 @@ resource "google_compute_health_check" "apiserver" {
  healthy_threshold   = 1
  unhealthy_threshold = 3
  ssl_health_check {
    port = "6443"
  }
}

View File

@ -1,10 +1,10 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=31bbef90242934f7f648d546ae8c0c314074501b"
  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
  etcd_servers = [for fqdn in google_dns_record_set.etcds.*.name : trimsuffix(fqdn, ".")]
  networking   = var.networking
  network_mtu  = 1440
  pod_cidr     = var.pod_cidr

View File

@ -1,4 +1,5 @@
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: etcd-member.service
@ -55,7 +56,7 @@ systemd:
After=docker.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -94,9 +95,9 @@ systemd:
--kubeconfig=/var/lib/kubelet/kubeconfig \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--resolv-conf=/run/systemd/resolve/resolv.conf \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStart=docker logs -f kubelet
@ -117,7 +118,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.24.4
ExecStart=/usr/bin/docker run \
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
-v /opt/bootstrap/assets:/assets:ro \
@ -130,18 +131,15 @@ systemd:
storage:
  directories:
    - path: /var/lib/etcd
      mode: 0700
      overwrite: true
  files:
    - path: /etc/kubernetes/kubeconfig
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
        inline: |
@ -164,7 +162,6 @@ storage:
mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
      mode: 0544
      contents:
        inline: |
@ -179,13 +176,11 @@ storage:
sleep 5
done
    - path: /etc/sysctl.d/max-user-watches.conf
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/etcd/etcd.env
      mode: 0644
      contents:
        inline: |

View File

@ -35,7 +35,7 @@ resource "google_compute_instance" "controllers" {
  machine_type = var.controller_type
  metadata = {
    user-data = data.ct_config.controllers.*.rendered[count.index]
  }
  boot_disk {
@ -66,41 +66,22 @@ resource "google_compute_instance" "controllers" {
  }
}

# Flatcar Linux controllers
data "ct_config" "controllers" {
  count = var.controller_count
  content = templatefile("${path.module}/butane/controller.yaml", {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = join(",", [
      for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
    ])
    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  })
  strict   = true
  snippets = var.controller_snippets
}

View File

@ -1,6 +1,6 @@
# Flatcar Linux most recent image from channel
data "google_compute_image" "flatcar-linux" {
  project = "kinvolk-public"
  family  = var.os_image
}

Some files were not shown because too many files have changed in this diff.