Compare commits

12 Commits

SHA1 Message Date
d7f55c4e46 Remove use of deprecated key_algorithm field in TLS assets
* Fixes a warning about use of the deprecated `key_algorithm` field in
the `hashicorp/tls` provider. The key algorithm can now be inferred
directly from the private key, so resources don't have to output
and pass around the algorithm (see the sketch below)
2022-04-20 19:52:03 -07:00
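
A minimal sketch of the pattern, with hypothetical resource names: recent `hashicorp/tls` releases infer the algorithm from `private_key_pem` itself, so the deprecated argument can simply be dropped.

```tf
resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "ca" {
  # key_algorithm = tls_private_key.ca.algorithm  # deprecated; now inferred
  private_key_pem = tls_private_key.ca.private_key_pem

  subject {
    common_name = "example-ca"
  }

  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["cert_signing", "key_encipherment"]
}
```
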
80c6e2e7e6 Update Kubernetes from v1.23.5 to v1.23.6
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1236
2022-04-20 19:39:05 -07:00
fddd8ac69d Fix Flatcar Linux nodes on Google Cloud not ignoring image changes
* Add `boot_disk[0].initialize_params` to the ignored fields for the
controller nodes
* Nodes auto-update, so Terraform should not attempt to delete and
recreate nodes (especially controllers!). Without this ignore,
Terraform proposes deleting controller nodes whenever Flatcar Linux
releases a new image (see the sketch below)
* Matches the configuration on Typhoon Fedora CoreOS (which does not
have the issue)
2022-04-20 18:53:00 -07:00
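
A minimal sketch of the fix, assuming a hypothetical `google_compute_instance` controller resource: listing the boot disk's `initialize_params` under `ignore_changes` keeps Terraform from planning a node replacement when the referenced OS image changes.

```tf
resource "google_compute_instance" "controller" {
  name         = "yavin-controller-0"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      # hypothetical image reference; Flatcar publishes new images regularly
      image = "flatcar-stable"
    }
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    # nodes auto-update, so image drift must not trigger delete/recreate
    ignore_changes = [boot_disk[0].initialize_params]
  }
}
```
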
2f7d2a92e0 Update Cilium and Calico CNI providers
* Update Cilium from v1.11.3 to v1.11.4
* Update Calico from v3.22.1 to v3.22.2
2022-04-19 08:28:52 -07:00
6cd6bb38de Bump mkdocs-material from 8.2.8 to 8.2.9
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.2.8 to 8.2.9.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.2.8...8.2.9)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-12 07:53:43 -07:00
d91408258b Update nginx-ingress, Prometheus, and Grafana addons 2022-04-04 08:53:29 -07:00
2df1873b7f Update Cilium from v1.11.2 to v1.11.3
* https://github.com/cilium/cilium/releases/tag/v1.11.3
2022-04-01 16:44:30 -07:00
93ebfc7dd0 Allow upgrading Azure Terraform Provider to v3.x
* Change subnet references to source and destination prefixes
(plural)
* Remove references to a resource group in some load balancing
components, which no longer require it (it's inferred)
* Rename `worker_address_prefix` output to `worker_address_prefixes`
(see the sketch below)
2022-04-01 16:36:53 -07:00
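
A partial sketch of the plural subnet attribute, assuming the resource group and virtual network are defined elsewhere in the module: azurerm v3 drops the singular `address_prefix`, so rules and outputs read the `address_prefixes` list instead.

```tf
resource "azurerm_subnet" "worker" {
  name                 = "worker"
  resource_group_name  = azurerm_resource_group.cluster.name
  virtual_network_name = azurerm_virtual_network.network.name
  address_prefixes     = ["10.0.1.0/24"]
}

output "worker_address_prefixes" {
  description = "Worker network subnet CIDR addresses (for source/destination)"
  value       = azurerm_subnet.worker.address_prefixes
}
```
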
5365ce8204 Mount /etc/machine-id from host into Kubelet
* A Kubelet node's SystemUUID can be detected from the sysfs
filesystem without a host mount, but the machine-id must be mounted
from the host to distinguish the host's machine-id from its SystemUUID
* On cloud platforms, MachineID and SystemUUID are identical,
but on bare-metal the two differ
2022-04-01 16:32:06 -07:00
2ad33cebaf Bump mkdocs-material from 8.2.5 to 8.2.8
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 8.2.5 to 8.2.8.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/8.2.5...8.2.8)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-28 10:20:10 -07:00
a26abcf5b1 Bump mkdocs from 1.2.3 to 1.3.0
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.2.3 to 1.3.0.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.2.3...1.3.0)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-28 10:07:34 -07:00
b8c4629548 Bump pymdown-extensions from 9.2 to 9.3
Bumps [pymdown-extensions](https://github.com/facelessuser/pymdown-extensions) from 9.2 to 9.3.
- [Release notes](https://github.com/facelessuser/pymdown-extensions/releases)
- [Commits](https://github.com/facelessuser/pymdown-extensions/compare/9.2...9.3)

---
updated-dependencies:
- dependency-name: pymdown-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-21 10:35:37 -07:00
79 changed files with 621 additions and 612 deletions

View File

@@ -4,6 +4,28 @@ Notable changes between versions.
 ## Latest
 
+* Kubernetes [v1.23.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1236)
+* Update Cilium from v1.11.2 to [v1.11.4](https://github.com/cilium/cilium/releases/tag/v1.11.4)
+  * Rename Cilium DaemonSet from `cilium-agent` to `cilium` to match Cilium CLI tools ([#303](https://github.com/poseidon/terraform-render-bootstrap/pull/303))
+* Update Calico from v3.22.1 to [v3.22.2](https://github.com/projectcalico/calico/releases/tag/v3.22.2)
+* Remove deprecated use of `key_algorithm` in `hashicorp/tls` resources
+
+### Azure
+
+* Allow upgrading Azure Terraform provider to v3.x ([#1144](https://github.com/poseidon/typhoon/pull/1144))
+* Rename `worker_address_prefix` output to `worker_address_prefixes`
+
+### Google Cloud
+
+* Fix issue on Flatcar Linux with controller nodes not ignoring os image changes ([#1149](https://github.com/poseidon/typhoon/pull/1149))
+  * Nodes will auto-update, Terraform should not attempt to delete/recreate them
+
+### Addons
+
+* Update nginx-ingress from v1.1.2 to [v1.1.3](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.1.3)
+* Update Prometheus from v2.33.5 to [v2.34.0](https://github.com/prometheus/prometheus/releases/tag/v2.34.0)
+* Update Grafana from v8.4.4 to [v8.4.5](https://github.com/grafana/grafana/releases/tag/v8.4.5)
+
 ## v1.23.5
 
 * Kubernetes [v1.23.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#v1235)

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -62,7 +62,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.6"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -101,9 +101,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.5
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.5
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.5
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.6
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.6
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.6
 ```
 
 List the pods.

View File

@@ -24,7 +24,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:8.4.3
+          image: docker.io/grafana/grafana:8.4.5
           env:
             - name: GF_PATHS_CONFIG
              value: "/etc/grafana/custom.ini"

View File

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: k8s.gcr.io/ingress-nginx/controller:v1.1.3
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

View File

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: k8s.gcr.io/ingress-nginx/controller:v1.1.3
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

View File

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: k8s.gcr.io/ingress-nginx/controller:v1.1.3
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

View File

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: k8s.gcr.io/ingress-nginx/controller:v1.1.3
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

View File

@@ -23,7 +23,7 @@ spec:
           type: RuntimeDefault
       containers:
         - name: nginx-ingress-controller
-          image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
+          image: k8s.gcr.io/ingress-nginx/controller:v1.1.3
           args:
             - /nginx-ingress-controller
             - --controller-class=k8s.io/public

View File

@@ -21,7 +21,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.33.5
+          image: quay.io/prometheus/prometheus:v2.34.0
           args:
             - --web.listen-address=0.0.0.0:9090
             - --config.file=/etc/prometheus/prometheus.yaml

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
 
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -56,7 +56,7 @@ systemd:
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -71,6 +71,7 @@ systemd:
           --network host \
           --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
           --volume /etc/kubernetes:/etc/kubernetes:ro,z \
+          --volume /etc/machine-id:/etc/machine-id:ro \
           --volume /usr/lib/os-release:/etc/os-release:ro \
           --volume /lib/modules:/lib/modules:ro \
           --volume /run:/run \
@@ -126,7 +127,7 @@ systemd:
           --volume /opt/bootstrap/assets:/assets:ro,Z \
           --volume /opt/bootstrap/apply:/apply:ro,Z \
           --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.23.5
+          quay.io/poseidon/kubelet:v1.23.6
         ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

View File

@@ -29,7 +29,7 @@ systemd:
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -44,6 +44,7 @@ systemd:
           --network host \
           --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
           --volume /etc/kubernetes:/etc/kubernetes:ro,z \
+          --volume /etc/machine-id:/etc/machine-id:ro \
           --volume /usr/lib/os-release:/etc/os-release:ro \
           --volume /lib/modules:/lib/modules:ro \
           --volume /run:/run \
@@ -94,7 +95,7 @@ systemd:
         [Unit]
         Description=Delete Kubernetes node on shutdown
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         Type=oneshot
         RemainAfterExit=true
         ExecStart=/bin/true

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
 
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -57,7 +57,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -121,7 +121,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         ExecStart=/usr/bin/docker run \
           -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
           -v /opt/bootstrap/assets:/assets:ro \

View File

@@ -29,7 +29,7 @@ systemd:
         After=coreos-metadata.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         EnvironmentFile=/run/metadata/coreos
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -96,7 +96,7 @@ systemd:
         [Unit]
         Description=Delete Kubernetes node on shutdown
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         Type=oneshot
         RemainAfterExit=true
         ExecStart=/bin/true

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
 
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -53,7 +53,7 @@ systemd:
         Description=Kubelet (System Container)
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -67,6 +67,7 @@ systemd:
           --network host \
           --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
           --volume /etc/kubernetes:/etc/kubernetes:ro,z \
+          --volume /etc/machine-id:/etc/machine-id:ro \
           --volume /usr/lib/os-release:/etc/os-release:ro \
           --volume /lib/modules:/lib/modules:ro \
           --volume /run:/run \
@@ -121,7 +122,7 @@ systemd:
           --volume /opt/bootstrap/assets:/assets:ro,Z \
           --volume /opt/bootstrap/apply:/apply:ro,Z \
           --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.23.5
+          quay.io/poseidon/kubelet:v1.23.6
         ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

View File

@@ -53,8 +53,6 @@ resource "azurerm_lb" "cluster" {
 }
 
 resource "azurerm_lb_rule" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "apiserver"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "apiserver"
@@ -67,8 +65,6 @@ resource "azurerm_lb_rule" "apiserver" {
 }
 
 resource "azurerm_lb_rule" "ingress-http" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "ingress-http"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"
@@ -82,8 +78,6 @@ resource "azurerm_lb_rule" "ingress-http" {
 }
 
 resource "azurerm_lb_rule" "ingress-https" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "ingress-https"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"
@@ -98,8 +92,6 @@ resource "azurerm_lb_rule" "ingress-https" {
 }
 
 # Worker outbound TCP/UDP SNAT
 resource "azurerm_lb_outbound_rule" "worker-outbound" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "worker"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration {
@@ -126,8 +118,6 @@ resource "azurerm_lb_backend_address_pool" "worker" {
 }
 
 # TCP health check for apiserver
 resource "azurerm_lb_probe" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "apiserver"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol = "Tcp"
@@ -141,8 +131,6 @@ resource "azurerm_lb_probe" "apiserver" {
 }
 
 # HTTP health check for ingress
 resource "azurerm_lb_probe" "ingress" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name = "ingress"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol = "Http"

View File

@@ -41,4 +41,3 @@ resource "azurerm_subnet_network_security_group_association" "worker" {
   subnet_id = azurerm_subnet.worker.id
   network_security_group_id = azurerm_network_security_group.worker.id
 }
-

View File

@@ -43,9 +43,9 @@ output "worker_security_group_name" {
   value = azurerm_network_security_group.worker.name
 }
 
-output "worker_address_prefix" {
-  description = "Worker network subnet CIDR address (for source/destination)"
-  value = azurerm_subnet.worker.address_prefix
+output "worker_address_prefixes" {
+  description = "Worker network subnet CIDR addresses (for source/destination)"
+  value = azurerm_subnet.worker.address_prefixes
 }
 
 # Outputs for custom load balancing

View File

@@ -10,171 +10,171 @@ resource "azurerm_network_security_group" "controller" {
 resource "azurerm_network_security_rule" "controller-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-icmp"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "1995"
   access = "Allow"
   direction = "Inbound"
   protocol = "Icmp"
   source_port_range = "*"
   destination_port_range = "*"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-ssh"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2000"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "22"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-etcd" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-etcd"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2005"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "2379-2380"
-  source_address_prefix = azurerm_subnet.controller.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape etcd metrics
 resource "azurerm_network_security_rule" "controller-etcd-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-etcd-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2010"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "2381"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-proxy metrics
 resource "azurerm_network_security_rule" "controller-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-kube-proxy-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2011"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10249"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
 resource "azurerm_network_security_rule" "controller-kube-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-kube-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2012"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10257-10259"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-apiserver" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-apiserver"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2015"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "6443"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count = var.networking == "cilium" ? 1 : 0
 
   name = "allow-cilium-health"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2019"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "4240"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-vxlan"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2020"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "4789"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "controller-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-linux-vxlan"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2021"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "8472"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-node-exporter"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2025"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "9100"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward
@@ -191,8 +191,8 @@ resource "azurerm_network_security_rule" "controller-kubelet" {
   destination_port_range = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
@@ -240,139 +240,139 @@ resource "azurerm_network_security_group" "worker" {
 resource "azurerm_network_security_rule" "worker-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-icmp"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "1995"
   access = "Allow"
   direction = "Inbound"
   protocol = "Icmp"
   source_port_range = "*"
   destination_port_range = "*"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-ssh"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2000"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "22"
-  source_address_prefix = azurerm_subnet.controller.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-http" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-http"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2005"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "80"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-https" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-https"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2010"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "443"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count = var.networking == "cilium" ? 1 : 0
 
   name = "allow-cilium-health"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2014"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "4240"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-vxlan"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2015"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "4789"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 resource "azurerm_network_security_rule" "worker-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-linux-vxlan"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2016"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "8472"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-node-exporter"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2020"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "9100"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow Prometheus to scrape kube-proxy
 resource "azurerm_network_security_rule" "worker-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
 
   name = "allow-kube-proxy"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2024"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10249"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward
@@ -389,8 +389,8 @@ resource "azurerm_network_security_rule" "worker-kubelet" {
   destination_port_range = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound

View File

@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm = "~> 2.8"
+    azurerm = ">= 2.8, < 4.0"
     template = "~> 2.2"
     null = ">= 2.1"
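
For context, a sketch of the widened constraint in the `required_providers` map form (the repo uses the legacy string shorthand shown above): either azurerm v2.8+ or any v3.x satisfies it, while a future v4 is excluded.

```tf
terraform {
  required_version = ">= 0.13.0, < 2.0.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.8, < 4.0"
    }
  }
}
```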

View File

@@ -26,7 +26,7 @@ systemd:
         Description=Kubelet (System Container)
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -41,6 +41,7 @@ systemd:
           --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
           --volume /etc/kubernetes:/etc/kubernetes:ro,z \
           --volume /usr/lib/os-release:/etc/os-release:ro \
+          --volume /etc/machine-id:/etc/machine-id:ro \
           --volume /lib/modules:/lib/modules:ro \
           --volume /run:/run \
           --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -89,7 +90,7 @@ systemd:
         [Unit]
         Description=Delete Kubernetes node on shutdown
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         Type=oneshot
         RemainAfterExit=true
         ExecStart=/bin/true

View File

@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm = "~> 2.8"
+    azurerm = ">= 2.8, < 4.0"
     template = "~> 2.2"
     ct = {

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
 
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -55,7 +55,7 @@ systemd:
         After=docker.service
         Wants=rpc-statd.service
         [Service]
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         ExecStartPre=/bin/mkdir -p /etc/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -117,7 +117,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         WorkingDirectory=/opt/bootstrap
-        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
         ExecStart=/usr/bin/docker run \
           -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
           -v /opt/bootstrap/assets:/assets:ro \

View File

@@ -53,8 +53,6 @@ resource "azurerm_lb" "cluster" {
 }
 resource "azurerm_lb_rule" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "apiserver"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "apiserver"
@@ -67,8 +65,6 @@ resource "azurerm_lb_rule" "apiserver" {
 }
 resource "azurerm_lb_rule" "ingress-http" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "ingress-http"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"
@@ -82,8 +78,6 @@ resource "azurerm_lb_rule" "ingress-http" {
 }
 resource "azurerm_lb_rule" "ingress-https" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "ingress-https"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "ingress"
@@ -98,8 +92,6 @@ resource "azurerm_lb_rule" "ingress-https" {
 # Worker outbound TCP/UDP SNAT
 resource "azurerm_lb_outbound_rule" "worker-outbound" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "worker"
   loadbalancer_id = azurerm_lb.cluster.id
   frontend_ip_configuration {
@@ -126,8 +118,6 @@ resource "azurerm_lb_backend_address_pool" "worker" {
 # TCP health check for apiserver
 resource "azurerm_lb_probe" "apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "apiserver"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol = "Tcp"
@@ -141,8 +131,6 @@ resource "azurerm_lb_probe" "apiserver" {
 # HTTP health check for ingress
 resource "azurerm_lb_probe" "ingress" {
-  resource_group_name = azurerm_resource_group.cluster.name
   name = "ingress"
   loadbalancer_id = azurerm_lb.cluster.id
   protocol = "Http"

View File

@@ -41,4 +41,3 @@ resource "azurerm_subnet_network_security_group_association" "worker" {
   subnet_id = azurerm_subnet.worker.id
   network_security_group_id = azurerm_network_security_group.worker.id
 }

View File

@@ -43,9 +43,9 @@ output "worker_security_group_name" {
   value = azurerm_network_security_group.worker.name
 }
-output "worker_address_prefix" {
-  description = "Worker network subnet CIDR address (for source/destination)"
-  value = azurerm_subnet.worker.address_prefix
+output "worker_address_prefixes" {
+  description = "Worker network subnet CIDR addresses (for source/destination)"
+  value = azurerm_subnet.worker.address_prefixes
 }
 # Outputs for custom load balancing
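
Since the output is both renamed and changed from a string to a list, configurations consuming the module must move to the plural arguments. A sketch mirroring the docs change later in this compare (the `ramius` module name comes from those docs):

```tf
# Consumer update: worker_address_prefixes returns a list of CIDRs, so the
# NSG rule uses destination_address_prefixes rather than the singular form.
resource "azurerm_network_security_rule" "some-app" {
  resource_group_name          = module.ramius.resource_group_name
  name                         = "some-app"
  network_security_group_name  = module.ramius.worker_security_group_name
  priority                     = "3001"
  access                       = "Allow"
  direction                    = "Inbound"
  protocol                     = "Tcp"
  source_port_range            = "*"
  destination_port_range       = "30333"
  source_address_prefix        = "*"
  destination_address_prefixes = module.ramius.worker_address_prefixes
}
```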

View File

@@ -10,171 +10,171 @@ resource "azurerm_network_security_group" "controller" {
 resource "azurerm_network_security_rule" "controller-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-icmp"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "1995"
   access = "Allow"
   direction = "Inbound"
   protocol = "Icmp"
   source_port_range = "*"
   destination_port_range = "*"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-ssh"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2000"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "22"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-etcd" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-etcd"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2005"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "2379-2380"
-  source_address_prefix = azurerm_subnet.controller.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Allow Prometheus to scrape etcd metrics
 resource "azurerm_network_security_rule" "controller-etcd-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-etcd-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2010"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "2381"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Allow Prometheus to scrape kube-proxy metrics
 resource "azurerm_network_security_rule" "controller-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-kube-proxy-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2011"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10249"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
 resource "azurerm_network_security_rule" "controller-kube-metrics" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-kube-metrics"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2012"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10257-10259"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-apiserver" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-apiserver"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2015"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "6443"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count = var.networking == "cilium" ? 1 : 0
   name = "allow-cilium-health"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2019"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "4240"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-vxlan"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2020"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "4789"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 resource "azurerm_network_security_rule" "controller-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-linux-vxlan"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2021"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "8472"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-node-exporter"
   network_security_group_name = azurerm_network_security_group.controller.name
   priority = "2025"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "9100"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Allow apiserver to access kubelet's for exec, log, port-forward
@@ -191,8 +191,8 @@ resource "azurerm_network_security_rule" "controller-kubelet" {
   destination_port_range = "10250"
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.controller.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
 }
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
@@ -240,139 +240,139 @@ resource "azurerm_network_security_group" "worker" {
 resource "azurerm_network_security_rule" "worker-icmp" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-icmp"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "1995"
   access = "Allow"
   direction = "Inbound"
   protocol = "Icmp"
   source_port_range = "*"
   destination_port_range = "*"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-ssh" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-ssh"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2000"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "22"
-  source_address_prefix = azurerm_subnet.controller.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-http" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-http"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2005"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "80"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-https" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-https"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2010"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "443"
   source_address_prefix = "*"
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-cilium-health" {
   resource_group_name = azurerm_resource_group.cluster.name
   count = var.networking == "cilium" ? 1 : 0
   name = "allow-cilium-health"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2014"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "4240"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-vxlan"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2015"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "4789"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 resource "azurerm_network_security_rule" "worker-linux-vxlan" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-linux-vxlan"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2016"
   access = "Allow"
   direction = "Inbound"
   protocol = "Udp"
   source_port_range = "*"
   destination_port_range = "8472"
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-node-exporter"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2020"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "9100"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 # Allow Prometheus to scrape kube-proxy
 resource "azurerm_network_security_rule" "worker-kube-proxy" {
   resource_group_name = azurerm_resource_group.cluster.name
   name = "allow-kube-proxy"
   network_security_group_name = azurerm_network_security_group.worker.name
   priority = "2024"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "10249"
-  source_address_prefix = azurerm_subnet.worker.address_prefix
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = azurerm_subnet.worker.address_prefixes
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 # Allow apiserver to access kubelet's for exec, log, port-forward
@@ -389,8 +389,8 @@ resource "azurerm_network_security_rule" "worker-kubelet" {
   destination_port_range = "10250"
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
-  destination_address_prefix = azurerm_subnet.worker.address_prefix
+  source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
+  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
 }
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
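
The same substitution repeats throughout because azurerm v3 models subnet CIDRs as a list (`address_prefixes`), so combining the controller and worker ranges uses `concat` instead of a tuple of two singular prefixes. A minimal sketch of the pattern (the `cluster_prefixes` local is hypothetical, not part of this diff):

```tf
# Each subnet now exposes a list of CIDRs; rules admitting traffic from both
# subnets concatenate the two lists into one.
locals {
  cluster_prefixes = concat(
    azurerm_subnet.controller.address_prefixes,
    azurerm_subnet.worker.address_prefixes,
  )
}
```

Such a local could replace the repeated `concat(...)` expressions, though the diff keeps them inline to match the surrounding style.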

View File

@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm = "~> 2.8"
+    azurerm = ">= 2.8, < 4.0"
     template = "~> 2.2"
     null = ">= 2.1"

View File

@@ -27,7 +27,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -92,7 +92,7 @@ systemd:
 [Unit]
 Description=Delete Kubernetes node on shutdown
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true

View File

@@ -3,7 +3,7 @@
 terraform {
   required_version = ">= 0.13.0, < 2.0.0"
   required_providers {
-    azurerm = "~> 2.8"
+    azurerm = ">= 2.8, < 4.0"
     template = "~> 2.2"
     ct = {

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
   cluster_name = var.cluster_name
   api_servers = [var.k8s_domain_name]

View File

@@ -52,7 +52,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -67,6 +67,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -123,7 +124,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=-/usr/bin/podman rm bootstrap
 ExecStart=/usr/bin/podman run --name bootstrap \
 --network host \

View File

@@ -25,7 +25,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -40,6 +40,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
   cluster_name = var.cluster_name
   api_servers = [var.k8s_domain_name]

View File

@@ -63,7 +63,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -126,7 +126,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \

View File

@@ -35,7 +35,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -54,7 +54,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -70,6 +70,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -133,7 +134,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.23.5
+quay.io/poseidon/kubelet:v1.23.6
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

View File

@@ -28,7 +28,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -44,6 +44,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -96,7 +97,7 @@ systemd:
 [Unit]
 Description=Delete Kubernetes node on shutdown
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"
   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -65,7 +65,7 @@ systemd:
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -129,7 +129,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \
-v /opt/bootstrap/assets:/assets:ro \ -v /opt/bootstrap/assets:/assets:ro \

View File

@@ -37,7 +37,7 @@ systemd:
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -98,7 +98,7 @@ systemd:
 [Unit]
 Description=Delete Kubernetes node on shutdown
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true

View File

@@ -13,7 +13,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.6"
   # AWS
   cluster_name = "gravitas"
@@ -38,7 +38,7 @@ Create a cluster with ARM64 controller and worker nodes. Container workloads mus
 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.6"
   # AWS
   cluster_name = "gravitas"
@@ -64,9 +64,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe
 ```
 $ kubectl get nodes -o wide
 NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-ip-10-0-21-119 Ready <none> 77s v1.23.5 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
-ip-10-0-32-166 Ready <none> 80s v1.23.5 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
-ip-10-0-5-79 Ready <none> 77s v1.23.5 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+ip-10-0-21-119 Ready <none> 77s v1.23.6 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+ip-10-0-32-166 Ready <none> 80s v1.23.6 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+ip-10-0-5-79 Ready <none> 77s v1.23.6 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
 ```
 ## Hybrid
@@ -77,7 +77,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.6"
   # AWS
   cluster_name = "gravitas"
@@ -100,7 +100,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
 ```tf
 module "gravitas" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.6"
   # AWS
   cluster_name = "gravitas"
@@ -123,7 +123,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
 ```tf
 module "gravitas-arm64" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # AWS
   vpc_id = module.gravitas.vpc_id
@@ -147,7 +147,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo
 ```tf
 module "gravitas-arm64" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.6"
   # AWS
   vpc_id = module.gravitas.vpc_id
@@ -172,9 +172,9 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
 ```
 $ kubectl get nodes -o wide
 NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-ip-10-0-1-73 Ready <none> 111m v1.23.5 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
-ip-10-0-22-79... Ready <none> 111m v1.23.5 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
-ip-10-0-24-130 Ready <none> 111m v1.23.5 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
-ip-10-0-39-19 Ready <none> 111m v1.23.5 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+ip-10-0-1-73 Ready <none> 111m v1.23.6 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+ip-10-0-22-79... Ready <none> 111m v1.23.6 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
+ip-10-0-24-130 Ready <none> 111m v1.23.6 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+ip-10-0-39-19 Ready <none> 111m v1.23.6 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
 ```

View File

@@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.6"
   # Google Cloud
   cluster_name = "yavin"
@@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to
 ```tf
 module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # Google Cloud
   cluster_name = "yavin"
@@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.6"
   # Google Cloud
   cluster_name = "yavin"
@@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and
 ```tf
 module "yavin-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # Google Cloud
   cluster_name = "yavin"
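
The hunks above cut off before the variables these docs describe. Assuming Typhoon's `worker_node_labels` and `worker_node_taints` list variables (the subject of this page), a worker pool sketch might look like:

```tf
module "yavin-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.6"

  # Google Cloud
  cluster_name = "yavin"

  # hypothetical: label and taint pool nodes so workloads opt in explicitly
  # via nodeSelector and tolerations (other required variables omitted)
  worker_node_labels = ["pool=gpu"]
  worker_node_taints = ["gpu=true:NoSchedule"]
}
```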

View File

@@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
 ```tf
 module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # AWS
   vpc_id = module.tempest.vpc_id
@@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).
 ```tf
 module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.23.6"
   # AWS
   vpc_id = module.tempest.vpc_id
@@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
 ```tf
 module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # Azure
   region = module.ramius.region
@@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
 ```tf
 module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.23.6"
   # Azure
   region = module.ramius.region
@@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.23.6"
   # Google Cloud
   region = "europe-west2"
@@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.23.6"
   # Google Cloud
   region = "europe-west2"
@@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
 ```
 $ kubectl get nodes
 NAME STATUS AGE VERSION
-yavin-controller-0.c.example-com.internal Ready 6m v1.23.5
-yavin-worker-jrbf.c.example-com.internal Ready 5m v1.23.5
-yavin-worker-mzdm.c.example-com.internal Ready 5m v1.23.5
-yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.23.5
-yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.23.5
+yavin-controller-0.c.example-com.internal Ready 6m v1.23.6
+yavin-worker-jrbf.c.example-com.internal Ready 5m v1.23.6
+yavin-worker-mzdm.c.example-com.internal Ready 5m v1.23.6
+yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.23.6
+yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.23.6
 ```
 ### Variables

View File

@@ -53,16 +53,16 @@ Add firewall rules to the worker security group.
 resource "azurerm_network_security_rule" "some-app" {
   resource_group_name = "${module.ramius.resource_group_name}"
   name = "some-app"
   network_security_group_name = module.ramius.worker_security_group_name
   priority = "3001"
   access = "Allow"
   direction = "Inbound"
   protocol = "Tcp"
   source_port_range = "*"
   destination_port_range = "30333"
   source_address_prefix = "*"
-  destination_address_prefix = module.ramius.worker_address_prefix
+  destination_address_prefixes = module.ramius.worker_address_prefixes
 }
 ```

View File

@@ -1,6 +1,6 @@
 # AWS

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on AWS with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on AWS with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
 ```tf
 module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.6"

   # AWS
   cluster_name = "tempest"
@@ -145,9 +145,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
 $ kubectl get nodes
 NAME           STATUS  ROLES   AGE  VERSION
-ip-10-0-3-155  Ready   <none>  10m  v1.23.5
-ip-10-0-26-65  Ready   <none>  10m  v1.23.5
-ip-10-0-41-21  Ready   <none>  10m  v1.23.5
+ip-10-0-3-155  Ready   <none>  10m  v1.23.6
+ip-10-0-26-65  Ready   <none>  10m  v1.23.6
+ip-10-0-41-21  Ready   <none>  10m  v1.23.6
 ```

 List the pods.
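
The hunk above elides most of the `tempest` module body. For orientation, a representative complete definition looks like the sketch below, assuming the variable names from the AWS tutorial (the DNS zone, zone ID, and SSH key are placeholders):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.23.6"

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKE"

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # optional
  worker_count = 2
  worker_type  = "t3.small"
}
```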

View File

@@ -1,6 +1,6 @@
 # Azure

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on Azure with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on Azure with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
 ```tf
 module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.23.6"

   # Azure
   cluster_name = "ramius"
@@ -161,9 +161,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
 $ kubectl get nodes
 NAME                  STATUS  ROLES   AGE  VERSION
-ramius-controller-0   Ready   <none>  24m  v1.23.5
-ramius-worker-000001  Ready   <none>  25m  v1.23.5
-ramius-worker-000002  Ready   <none>  24m  v1.23.5
+ramius-controller-0   Ready   <none>  24m  v1.23.6
+ramius-worker-000001  Ready   <none>  25m  v1.23.6
+ramius-worker-000002  Ready   <none>  24m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # Bare-Metal

-In this tutorial, we'll network boot and provision a Kubernetes v1.23.5 cluster on bare-metal with Fedora CoreOS.
+In this tutorial, we'll network boot and provision a Kubernetes v1.23.6 cluster on bare-metal with Fedora CoreOS.

 First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
 ```tf
 module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.23.6"

   # bare-metal
   cluster_name = "mercury"
@@ -283,9 +283,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
 $ kubectl get nodes
 NAME               STATUS  ROLES   AGE  VERSION
-node1.example.com  Ready   <none>  10m  v1.23.5
-node2.example.com  Ready   <none>  10m  v1.23.5
-node3.example.com  Ready   <none>  10m  v1.23.5
+node1.example.com  Ready   <none>  10m  v1.23.6
+node2.example.com  Ready   <none>  10m  v1.23.6
+node3.example.com  Ready   <none>  10m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # DigitalOcean

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on DigitalOcean with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on DigitalOcean with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
 ```tf
 module "nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.23.6"

   # Digital Ocean
   cluster_name = "nemo"
@@ -155,9 +155,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
 $ kubectl get nodes
 NAME            STATUS  ROLES   AGE  VERSION
-10.132.110.130  Ready   <none>  10m  v1.23.5
-10.132.115.81   Ready   <none>  10m  v1.23.5
-10.132.124.107  Ready   <none>  10m  v1.23.5
+10.132.110.130  Ready   <none>  10m  v1.23.6
+10.132.115.81   Ready   <none>  10m  v1.23.6
+10.132.124.107  Ready   <none>  10m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # Google Cloud

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on Google Compute Engine with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on Google Compute Engine with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@@ -147,9 +147,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.5
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.5
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.5
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.6
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.6
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # AWS

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on AWS with Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on AWS with Flatcar Linux.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.
 ```tf
 module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.23.6"

   # AWS
   cluster_name = "tempest"
@@ -145,9 +145,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
 $ kubectl get nodes
 NAME           STATUS  ROLES   AGE  VERSION
-ip-10-0-3-155  Ready   <none>  10m  v1.23.5
-ip-10-0-26-65  Ready   <none>  10m  v1.23.5
-ip-10-0-41-21  Ready   <none>  10m  v1.23.5
+ip-10-0-3-155  Ready   <none>  10m  v1.23.6
+ip-10-0-26-65  Ready   <none>  10m  v1.23.6
+ip-10-0-41-21  Ready   <none>  10m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # Azure

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on Azure with Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on Azure with Flatcar Linux.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
 ```tf
 module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.23.6"

   # Azure
   cluster_name = "ramius"
@@ -149,9 +149,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
 $ kubectl get nodes
 NAME                  STATUS  ROLES   AGE  VERSION
-ramius-controller-0   Ready   <none>  24m  v1.23.5
-ramius-worker-000001  Ready   <none>  25m  v1.23.5
-ramius-worker-000002  Ready   <none>  24m  v1.23.5
+ramius-controller-0   Ready   <none>  24m  v1.23.6
+ramius-worker-000001  Ready   <none>  25m  v1.23.6
+ramius-worker-000002  Ready   <none>  24m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # Bare-Metal

-In this tutorial, we'll network boot and provision a Kubernetes v1.23.5 cluster on bare-metal with Flatcar Linux.
+In this tutorial, we'll network boot and provision a Kubernetes v1.23.6 cluster on bare-metal with Flatcar Linux.

 First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete
 ```tf
 module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.23.6"

   # bare-metal
   cluster_name = "mercury"
@@ -293,9 +293,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
 $ kubectl get nodes
 NAME               STATUS  ROLES   AGE  VERSION
-node1.example.com  Ready   <none>  10m  v1.23.5
-node2.example.com  Ready   <none>  10m  v1.23.5
-node3.example.com  Ready   <none>  10m  v1.23.5
+node1.example.com  Ready   <none>  10m  v1.23.6
+node2.example.com  Ready   <none>  10m  v1.23.6
+node3.example.com  Ready   <none>  10m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # DigitalOcean

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on DigitalOcean with Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on DigitalOcean with Flatcar Linux.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern
 ```tf
 module "nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.23.6"

   # Digital Ocean
   cluster_name = "nemo"
@@ -155,9 +155,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
 $ kubectl get nodes
 NAME            STATUS  ROLES   AGE  VERSION
-10.132.110.130  Ready   <none>  10m  v1.23.5
-10.132.115.81   Ready   <none>  10m  v1.23.5
-10.132.124.107  Ready   <none>  10m  v1.23.5
+10.132.110.130  Ready   <none>  10m  v1.23.6
+10.132.115.81   Ready   <none>  10m  v1.23.6
+10.132.124.107  Ready   <none>  10m  v1.23.6
 ```

 List the pods.

View File

@@ -1,6 +1,6 @@
 # Google Cloud

-In this tutorial, we'll create a Kubernetes v1.23.5 cluster on Google Compute Engine with Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.23.6 cluster on Google Compute Engine with Flatcar Linux.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.23.6"

   # Google Cloud
   cluster_name = "yavin"
@@ -147,9 +147,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.5
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.5
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.5
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.6
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.6
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.6
 ```

 List the pods.

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#hosts) customization
@@ -61,7 +61,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.6"

   # Google Cloud
   cluster_name = "yavin"
@@ -99,9 +99,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.5
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.5
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.5
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.23.6
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.23.6
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.23.6
 ```

 List the pods.

View File

@@ -13,12 +13,12 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar
 ```
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.23.6"
   ...
 }

 module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.23.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.23.6"
   ...
 }
 ```
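
Because these module sources are fetched from git, `ref` accepts any git reference: a release tag as above, or an exact commit SHA when pinning to a specific revision. A sketch (the SHA below is a placeholder, not a real Typhoon commit):

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=0123456789abcdef0123456789abcdef01234567"
  ...
}
```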

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -53,7 +53,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -68,6 +68,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -121,7 +122,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.23.5
+quay.io/poseidon/kubelet:v1.23.6
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

View File

@@ -26,7 +26,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -41,6 +41,7 @@ systemd:
 --volume /etc/cni/net.d:/etc/cni/net.d:ro,z \
 --volume /etc/kubernetes:/etc/kubernetes:ro,z \
 --volume /usr/lib/os-release:/etc/os-release:ro \
+--volume /etc/machine-id:/etc/machine-id:ro \
 --volume /lib/modules:/lib/modules:ro \
 --volume /run:/run \
 --volume /sys/fs/cgroup:/sys/fs/cgroup \
@@ -89,7 +90,7 @@ systemd:
 [Unit]
 Description=Delete Kubernetes node on shutdown
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true

View File

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.23.5 (upstream)
+* Kubernetes v1.23.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

View File

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e5bdb6f6c67461ca3a1cd3449f4703189f14d3e4"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=7a18a221bb0b04c01b0bed52f45b82c0ce5f42ab"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@@ -55,7 +55,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -117,7 +117,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \

View File

@@ -59,7 +59,10 @@ resource "google_compute_instance" "controllers" {
   tags = ["${var.cluster_name}-controller"]

   lifecycle {
-    ignore_changes = [metadata]
+    ignore_changes = [
+      metadata,
+      boot_disk[0].initialize_params
+    ]
   }
 }
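
Terraform's `ignore_changes` suppresses post-creation drift on the listed attributes, so a changed image reference under `boot_disk[0].initialize_params` no longer prompts Terraform to plan instance replacement. A standalone sketch of the pattern, using placeholder values rather than Typhoon's actual resource:

```tf
resource "google_compute_instance" "example" {
  name         = "example-controller"
  machine_type = "n1-standard-1"
  zone         = "europe-west2-a"

  boot_disk {
    initialize_params {
      # Image matters only at creation time; the OS updates itself afterwards.
      image = "flatcar-stable-image" # placeholder
    }
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    # Don't treat metadata or boot image drift as changes that would
    # require deleting and recreating the instance.
    ignore_changes = [
      metadata,
      boot_disk[0].initialize_params,
    ]
  }
}
```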

View File

@@ -27,7 +27,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -92,7 +92,7 @@ systemd:
 [Unit]
 Description=Delete Kubernetes node on shutdown
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.5
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.23.6
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true

View File

@@ -1,4 +1,4 @@
-mkdocs==1.2.3
-mkdocs-material==8.2.5
+mkdocs==1.3.0
+mkdocs-material==8.2.9
 pygments==2.11.2
-pymdown-extensions==9.2
+pymdown-extensions==9.3