Compare commits


10 Commits

Author SHA1 Message Date
d45dfdbf91 Update nginx-ingress from v0.34.1 to v0.35.0
* Repo changed to k8s.gcr.io/ingress-nginx/controller
* https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.35.0
2020-08-29 13:38:28 -07:00
d7e0536838 Add code group blocks to improve worker pool docs
* Show Fedora CoreOS and Flatcar Linux examples in separate tabs, rather than trying to show one combined example
* Add copyright footer for the poseidon org
2020-08-28 00:25:12 -07:00
8dd221a57c Add fleetlock docs and links to addons
* Add links to fleetlock for Fedora CoreOS reboot coordination
* https://github.com/poseidon/fleetlock
2020-08-28 00:02:24 -07:00
f17bb4cf61 Update mkdocs-material from v5.5.6 to v5.5.9 2020-08-27 09:20:18 -07:00
44f1fe620a Update recommended Terraform provider versions
* Sync Terraform provider plugins with those used internally
2020-08-27 09:18:39 -07:00
a504264e24 Update Grafana from v7.1.4 to v7.1.5
* https://github.com/grafana/grafana/releases/tag/v7.1.5
2020-08-27 08:52:07 -07:00
88cf7273dc Update Kubernetes from v1.18.8 to v1.19.0
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md
2020-08-27 08:50:01 -07:00
58def65a09 Update Grafana from v7.1.3 to v7.1.4
* https://github.com/grafana/grafana/releases/tag/v7.1.4
2020-08-22 15:40:09 -07:00
cd7fd29194 Update etcd from v3.4.10 to v3.4.12
* https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.4.md
2020-08-19 21:25:41 -07:00
aafa38476a Fix SELinux race condition on non-bootstrap controllers in multi-controller (#808)
* Fix race condition for bootstrap-secrets SELinux context on non-bootstrap controllers in multi-controller FCOS clusters
* On first boot from disk on non-bootstrap controllers, adding bootstrap-secrets races with kubelet.service starting, which can cause the secrets assets to have the wrong label until kubelet.service restarts (service, reboot, auto-update)
* This can manifest as `kube-apiserver`, `kube-controller-manager`, and `kube-scheduler` pods crashlooping on spare controllers on first cluster creation
2020-08-19 21:18:10 -07:00
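The fix itself is visible in the Fedora CoreOS controller configs later in this diff: the bootstrap layout step relabels the copied secrets so their SELinux context no longer depends on whether kubelet.service has already started. A minimal sketch of that step (paths as used in these configs):

```sh
# Relabel bootstrap secrets after copying them into place, so containers
# (kube-apiserver, kube-controller-manager, kube-scheduler) can read them
# regardless of whether kubelet.service started before or after the copy.
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
```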
66 changed files with 364 additions and 221 deletions

View File

@ -4,7 +4,23 @@ Notable changes between versions.
## Latest ## Latest
### v1.18.8 * Kubernetes [v1.19.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#v1190)
* Update etcd from v3.4.10 to [v3.4.12](https://github.com/etcd-io/etcd/releases/tag/v3.4.12)
* Update Calico from v3.15.1 to [v3.15.2](https://docs.projectcalico.org/v3.15/release-notes/)
### Fedora CoreOS
* Fix race condition during bootstrap of multi-controller clusters ([#808](https://github.com/poseidon/typhoon/pull/808))
* Fix SELinux label of bootstrap-secrets on non-bootstrap controllers
### Addons
* Introduce [fleetlock](https://github.com/poseidon/fleetlock) for Fedora CoreOS reboot coordination ([#814](https://github.com/poseidon/typhoon/pull/814))
* Update nginx-ingress from v0.34.1 to [v0.35.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.35.0)
* Repository changed to `k8s.gcr.io/ingress-nginx/controller`
* Update Grafana from v7.1.3 to [v7.1.5](https://github.com/grafana/grafana/releases/tag/v7.1.5)
## v1.18.8
* Kubernetes [v1.18.8](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1188) * Kubernetes [v1.18.8](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1188)
* Migrate from Terraform v0.12.x to v0.13.x ([#804](https://github.com/poseidon/typhoon/pull/804)) (**action required**) * Migrate from Terraform v0.12.x to v0.13.x ([#804](https://github.com/poseidon/typhoon/pull/804)) (**action required**)
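The `v1.18.8` notes above flag the Terraform v0.12 to v0.13 migration as action required. As a hedged sketch (standard Terraform 0.13 tooling, not commands taken from this changeset), the usual upgrade flow on an existing configuration looks like:

```sh
# Rewrite provider references to the 0.13 source-address form, then
# re-initialize so providers are installed from the registry namespace.
terraform 0.13upgrade
terraform init
terraform plan   # confirm no unexpected changes before applying
```

Typhoon's own migration guidance in [#804](https://github.com/poseidon/typhoon/pull/804) may include additional provider-specific steps.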

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@ -54,7 +54,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.19.0"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -93,9 +93,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.18.8 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.19.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.19.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -23,7 +23,7 @@ spec:
spec: spec:
containers: containers:
- name: grafana - name: grafana
image: docker.io/grafana/grafana:7.1.3 image: docker.io/grafana/grafana:7.1.5
env: env:
- name: GF_PATHS_CONFIG - name: GF_PATHS_CONFIG
value: "/etc/grafana/custom.ini" value: "/etc/grafana/custom.ini"

View File

@ -22,7 +22,7 @@ spec:
spec: spec:
containers: containers:
- name: nginx-ingress-controller - name: nginx-ingress-controller
image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1 image: k8s.gcr.io/ingress-nginx/controller:v0.35.0
args: args:
- /nginx-ingress-controller - /nginx-ingress-controller
- --ingress-class=public - --ingress-class=public
@ -47,7 +47,6 @@ spec:
containerPort: 10254 containerPort: 10254
hostPort: 10254 hostPort: 10254
livenessProbe: livenessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
@ -55,15 +54,16 @@ spec:
initialDelaySeconds: 10 initialDelaySeconds: 10
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
readinessProbe: readinessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
scheme: HTTP scheme: HTTP
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
lifecycle: lifecycle:
preStop: preStop:

View File

@ -22,7 +22,7 @@ spec:
spec: spec:
containers: containers:
- name: nginx-ingress-controller - name: nginx-ingress-controller
image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1 image: k8s.gcr.io/ingress-nginx/controller:v0.35.0
args: args:
- /nginx-ingress-controller - /nginx-ingress-controller
- --ingress-class=public - --ingress-class=public
@ -47,7 +47,6 @@ spec:
containerPort: 10254 containerPort: 10254
hostPort: 10254 hostPort: 10254
livenessProbe: livenessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
@ -55,15 +54,16 @@ spec:
initialDelaySeconds: 10 initialDelaySeconds: 10
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
readinessProbe: readinessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
scheme: HTTP scheme: HTTP
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
lifecycle: lifecycle:
preStop: preStop:

View File

@ -1,7 +1,7 @@
apiVersion: apps/v1 apiVersion: apps/v1
kind: Deployment kind: Deployment
metadata: metadata:
name: ingress-controller-public name: nginx-ingress-controller
namespace: ingress namespace: ingress
spec: spec:
replicas: 2 replicas: 2
@ -10,19 +10,19 @@ spec:
maxUnavailable: 1 maxUnavailable: 1
selector: selector:
matchLabels: matchLabels:
name: ingress-controller-public name: nginx-ingress-controller
phase: prod phase: prod
template: template:
metadata: metadata:
labels: labels:
name: ingress-controller-public name: nginx-ingress-controller
phase: prod phase: prod
annotations: annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default' seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec: spec:
containers: containers:
- name: nginx-ingress-controller - name: nginx-ingress-controller
image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1 image: k8s.gcr.io/ingress-nginx/controller:v0.35.0
args: args:
- /nginx-ingress-controller - /nginx-ingress-controller
- --ingress-class=public - --ingress-class=public
@ -76,4 +76,3 @@ spec:
runAsUser: 101 # www-data runAsUser: 101 # www-data
restartPolicy: Always restartPolicy: Always
terminationGracePeriodSeconds: 300 terminationGracePeriodSeconds: 300
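Across these ingress manifests the controller image moves from `us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1` to `k8s.gcr.io/ingress-nginx/controller:v0.35.0`, and `failureThreshold` is reordered within the probe blocks. A hedged way to confirm a running cluster picked up the new image after applying the updated manifests (deployment and namespace names taken from the diff above):

```sh
# Print the controller image currently set on the Deployment.
kubectl -n ingress get deployment nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```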

View File

@ -22,7 +22,7 @@ spec:
spec: spec:
containers: containers:
- name: nginx-ingress-controller - name: nginx-ingress-controller
image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1 image: k8s.gcr.io/ingress-nginx/controller:v0.35.0
args: args:
- /nginx-ingress-controller - /nginx-ingress-controller
- --ingress-class=public - --ingress-class=public
@ -47,7 +47,6 @@ spec:
containerPort: 10254 containerPort: 10254
hostPort: 10254 hostPort: 10254
livenessProbe: livenessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
@ -55,15 +54,16 @@ spec:
initialDelaySeconds: 10 initialDelaySeconds: 10
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
readinessProbe: readinessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
scheme: HTTP scheme: HTTP
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
lifecycle: lifecycle:
preStop: preStop:

View File

@ -22,7 +22,7 @@ spec:
spec: spec:
containers: containers:
- name: nginx-ingress-controller - name: nginx-ingress-controller
image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1 image: k8s.gcr.io/ingress-nginx/controller:v0.35.0
args: args:
- /nginx-ingress-controller - /nginx-ingress-controller
- --ingress-class=public - --ingress-class=public
@ -47,7 +47,6 @@ spec:
containerPort: 10254 containerPort: 10254
hostPort: 10254 hostPort: 10254
livenessProbe: livenessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
@ -55,15 +54,16 @@ spec:
initialDelaySeconds: 10 initialDelaySeconds: 10
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
readinessProbe: readinessProbe:
failureThreshold: 3
httpGet: httpGet:
path: /healthz path: /healthz
port: 10254 port: 10254
scheme: HTTP scheme: HTTP
periodSeconds: 10 periodSeconds: 10
successThreshold: 1 successThreshold: 1
failureThreshold: 3
timeoutSeconds: 5 timeoutSeconds: 5
lifecycle: lifecycle:
preStop: preStop:

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf - name: 40-etcd-cluster.conf
contents: | contents: |
[Service] [Service]
Environment="ETCD_IMAGE_TAG=v3.4.10" Environment="ETCD_IMAGE_TAG=v3.4.12"
Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd" Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
Environment="RKT_RUN_ARGS=--insecure-options=image" Environment="RKT_RUN_ARGS=--insecure-options=image"
Environment="ETCD_NAME=${etcd_name}" Environment="ETCD_NAME=${etcd_name}"
@ -52,7 +52,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -134,7 +134,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \ --volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \ --mount volume=script,target=/apply \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/apply --exec=/apply

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -129,7 +129,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \ --volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \ --mount volume=config,target=/etc/kubernetes \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname) --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -28,7 +28,7 @@ systemd:
--network host \ --network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \ --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \ --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.10 quay.io/coreos/etcd:v3.4.12
ExecStop=/usr/bin/podman stop etcd ExecStop=/usr/bin/podman stop etcd
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
@ -55,7 +55,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -124,7 +124,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.8 quay.io/poseidon/kubelet:v1.19.0
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -160,6 +160,7 @@ storage:
mv manifests /opt/bootstrap/assets/manifests mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
mode: 0544 mode: 0544
contents: contents:

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -89,7 +89,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.8 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME' ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.19.0 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
storage: storage:

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf - name: 40-etcd-cluster.conf
contents: | contents: |
[Service] [Service]
Environment="ETCD_IMAGE_TAG=v3.4.10" Environment="ETCD_IMAGE_TAG=v3.4.12"
Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd" Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
Environment="RKT_RUN_ARGS=--insecure-options=image" Environment="RKT_RUN_ARGS=--insecure-options=image"
Environment="ETCD_NAME=${etcd_name}" Environment="ETCD_NAME=${etcd_name}"
@ -52,7 +52,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -134,7 +134,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \ --volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \ --mount volume=script,target=/apply \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/apply --exec=/apply

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -129,7 +129,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \ --volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \ --mount volume=config,target=/etc/kubernetes \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]') --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -28,7 +28,7 @@ systemd:
--network host \ --network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \ --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \ --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.10 quay.io/coreos/etcd:v3.4.12
ExecStop=/usr/bin/podman stop etcd ExecStop=/usr/bin/podman stop etcd
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
@ -54,7 +54,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -123,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.8 quay.io/poseidon/kubelet:v1.19.0
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -159,6 +159,7 @@ storage:
mv manifests /opt/bootstrap/assets/manifests mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
mode: 0544 mode: 0544
contents: contents:

View File

@ -24,7 +24,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -88,7 +88,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.8 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME' ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.19.0 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
storage: storage:

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name] api_servers = [var.k8s_domain_name]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf - name: 40-etcd-cluster.conf
contents: | contents: |
[Service] [Service]
Environment="ETCD_IMAGE_TAG=v3.4.10" Environment="ETCD_IMAGE_TAG=v3.4.12"
Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd" Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
Environment="RKT_RUN_ARGS=--insecure-options=image" Environment="RKT_RUN_ARGS=--insecure-options=image"
Environment="ETCD_NAME=${etcd_name}" Environment="ETCD_NAME=${etcd_name}"
@ -60,7 +60,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -147,7 +147,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \ --volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \ --mount volume=script,target=/apply \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/apply --exec=/apply

View File

@ -33,7 +33,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver} Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name] api_servers = [var.k8s_domain_name]

View File

@ -28,7 +28,7 @@ systemd:
--network host \ --network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \ --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \ --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.10 quay.io/coreos/etcd:v3.4.12
ExecStop=/usr/bin/podman stop etcd ExecStop=/usr/bin/podman stop etcd
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -134,7 +134,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.8 quay.io/poseidon/kubelet:v1.19.0
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -170,6 +170,7 @@ storage:
mv manifests /opt/bootstrap/assets/manifests mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
mode: 0544 mode: 0544
contents: contents:

View File

@ -23,7 +23,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf - name: 40-etcd-cluster.conf
contents: | contents: |
[Service] [Service]
Environment="ETCD_IMAGE_TAG=v3.4.10" Environment="ETCD_IMAGE_TAG=v3.4.12"
Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd" Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
Environment="RKT_RUN_ARGS=--insecure-options=image" Environment="RKT_RUN_ARGS=--insecure-options=image"
Environment="ETCD_NAME=${etcd_name}" Environment="ETCD_NAME=${etcd_name}"
@ -62,7 +62,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -144,7 +144,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \ --volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \ --mount volume=script,target=/apply \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/apply --exec=/apply

View File

@ -35,7 +35,7 @@ systemd:
After=coreos-metadata.service After=coreos-metadata.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
EnvironmentFile=/run/metadata/coreos EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -134,7 +134,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \ --volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \ --mount volume=config,target=/etc/kubernetes \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname) --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/) customization * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -28,7 +28,7 @@ systemd:
--network host \ --network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \ --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \ --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.10 quay.io/coreos/etcd:v3.4.12
ExecStop=/usr/bin/podman stop etcd ExecStop=/usr/bin/podman stop etcd
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
@ -55,7 +55,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -135,7 +135,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.8 quay.io/poseidon/kubelet:v1.19.0
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -166,6 +166,7 @@ storage:
mv manifests /opt/bootstrap/assets/manifests mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
mode: 0544 mode: 0544
contents: contents:

View File

@ -26,7 +26,7 @@ systemd:
After=afterburn.service After=afterburn.service
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
EnvironmentFile=/run/metadata/afterburn EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@ -98,7 +98,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.8 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME' ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.19.0 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
storage: storage:

docs/addons/fleetlock.md (new file, 39 lines)
View File

@ -0,0 +1,39 @@
## fleetlock
[fleetlock](https://github.com/poseidon/fleetlock) is a reboot coordinator for Fedora CoreOS nodes. It implements the [FleetLock](https://github.com/coreos/airlock/pull/1/files) protocol for use as a [Zincati](https://github.com/coreos/zincati) lock [strategy](https://github.com/coreos/zincati/blob/master/docs/usage/updates-strategy.md) backend.
Declare a Zincati `fleet_lock` strategy when provisioning Fedora CoreOS nodes via [snippets](/advanced/customization/#hosts).
```yaml
variant: fcos
version: 1.0.0
storage:
files:
- path: /etc/zincati/config.d/55-update-strategy.toml
contents:
inline: |
[updates]
strategy = "fleet_lock"
[updates.fleet_lock]
base_url = "http://10.3.0.15/"
```
```tf
module "nemo" {
...
controller_snippets = [
file("./snippets/zincati-strategy.yaml"),
]
worker_snippets = [
file("./snippets/zincati-strategy.yaml"),
]
}
```
Apply fleetlock based on the example manifests.
```sh
git clone git@github.com:poseidon/fleetlock.git
kubectl apply -f examples/k8s
```
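Note that the `kubectl apply` above assumes it is run from inside the cloned repository. As a hedged way to confirm nodes picked up the strategy (assumes a standard Fedora CoreOS host where Zincati runs as a systemd unit; not part of this changeset):

```sh
# On a Fedora CoreOS node, check that the snippet landed and that Zincati
# logged the fleet_lock strategy on startup.
cat /etc/zincati/config.d/55-update-strategy.toml
journalctl -u zincati.service | grep -i strategy
```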

View File

@ -1,8 +1,9 @@
# Addons # Addons
Every Typhoon cluster is verified to work well with several post-install addons. Typhoon clusters are verified to work well with several post-install addons.
* Nginx [Ingress Controller](ingress.md) * Nginx [Ingress Controller](ingress.md)
* [Prometheus](prometheus.md) * [Prometheus](prometheus.md)
* [Grafana](grafana.md) * [Grafana](grafana.md)
* [fleetlock](fleetlock.md)

View File

@ -15,9 +15,34 @@ Internal Terraform Modules:
Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster). Define a worker pool using the AWS internal `workers` module. Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
=== "Fedora CoreOS"
```tf ```tf
module "tempest-worker-pool" { module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.14.3" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.19.0"
# AWS
vpc_id = module.tempest.vpc_id
subnet_ids = module.tempest.subnet_ids
security_groups = module.tempest.worker_security_groups
# configuration
name = "tempest-pool"
kubeconfig = module.tempest.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
worker_count = 2
instance_type = "m5.large"
os_stream = "next"
}
```
=== "Flatcar Linux"
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.19.0"
# AWS # AWS
vpc_id = module.tempest.vpc_id vpc_id = module.tempest.vpc_id
@ -65,13 +90,13 @@ The AWS internal `workers` module supports a number of [variables](https://githu
|:-----|:------------|:--------|:--------| |:-----|:------------|:--------|:--------|
| worker_count | Number of instances | 1 | 3 | | worker_count | Number of instances | 1 | 3 |
| instance_type | EC2 instance type | "t3.small" | "t3.medium" | | instance_type | EC2 instance type | "t3.small" | "t3.medium" |
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alph, flatcar-edge | | os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" | | os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
| disk_size | Size of the EBS volume in GB | 40 | 100 | | disk_size | Size of the EBS volume in GB | 40 | 100 |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 | | disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 | | disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
| spot_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 | | spot_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) | | snippets | Fedora CoreOS or Container Linux Config snippets | [] | [examples](/advanced/customization/) |
| service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" | | service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
| node_labels | List of initial node labels | [] | ["worker-pool=foo"] | | node_labels | List of initial node labels | [] | ["worker-pool=foo"] |
@ -81,9 +106,11 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluster). Define a worker pool using the Azure internal `workers` module. Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluster). Define a worker pool using the Azure internal `workers` module.
=== "Fedora CoreOS"
```tf ```tf
module "ramius-worker-pool" { module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.19.0"
# Azure # Azure
region = module.ramius.region region = module.ramius.region
@ -101,6 +128,33 @@ module "ramius-worker-pool" {
worker_count = 2 worker_count = 2
vm_type = "Standard_F4" vm_type = "Standard_F4"
priority = "Spot" priority = "Spot"
os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
}
```
=== "Flatcar Linux"
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.19.0"
# Azure
region = module.ramius.region
resource_group_name = module.ramius.resource_group_name
subnet_id = module.ramius.subnet_id
security_group_id = module.ramius.security_group_id
backend_address_pool_id = module.ramius.backend_address_pool_id
# configuration
name = "ramius-spot"
kubeconfig = module.ramius.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
worker_count = 2
vm_type = "Standard_F4"
priority = "Spot"
os_image = "flatcar-beta"
} }
``` ```
@ -147,9 +201,11 @@ Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricin
Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module. Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
=== "Fedora CoreOS"
```tf ```tf
module "yavin-worker-pool" { module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.19.0"
# Google Cloud # Google Cloud
region = "europe-west2" region = "europe-west2"
@ -164,7 +220,31 @@ module "yavin-worker-pool" {
# optional # optional
worker_count = 2 worker_count = 2
machine_type = "n1-standard-16" machine_type = "n1-standard-16"
os_image = "coreos-beta" os_stream = "testing"
preemptible = true
}
```
=== "Flatcar Linux"
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.19.0"
# Google Cloud
region = "europe-west2"
network = module.yavin.network_name
cluster_name = "yavin"
# configuration
name = "yavin-16x"
kubeconfig = module.yavin.kubeconfig
ssh_authorized_key = var.ssh_authorized_key
# optional
worker_count = 2
machine_type = "n1-standard-16"
os_image = "flatcar-linux-2303-4-0" # custom
preemptible = true preemptible = true
} }
``` ```
@ -180,11 +260,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
``` ```
$ kubectl get nodes $ kubectl get nodes
NAME STATUS AGE VERSION NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.18.8 yavin-controller-0.c.example-com.internal Ready 6m v1.19.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.18.8 yavin-worker-jrbf.c.example-com.internal Ready 5m v1.19.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.18.8 yavin-worker-mzdm.c.example-com.internal Ready 5m v1.19.0
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.18.8 yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.19.0
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.18.8 yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.19.0
``` ```
### Variables ### Variables

View File

@ -16,10 +16,10 @@ Together, they diversify Typhoon to support a range of container technologies.
| Property | Flatcar Linux | Fedora CoreOS | | Property | Flatcar Linux | Fedora CoreOS |
|-------------------|---------------------------------|---------------| |-------------------|---------------------------------|---------------|
| Kernel | ~4.19.x | ~5.5.x | | Kernel | ~4.19.x | ~5.7.x |
| systemd | 241 | 243 | | systemd | 241 | 243 |
| Ignition system | Ignition v2.x spec | Ignition v3.x spec | | Ignition system | Ignition v2.x spec | Ignition v3.x spec |
| Container Engine | docker 18.06.3-ce | docker 18.09.8 | | Container Engine | docker 18.06.3-ce | docker 19.03.11 |
| storage driver | overlay2 (extfs) | overlay2 (xfs) | | storage driver | overlay2 (extfs) | overlay2 (xfs) |
| logging driver | json-file | journald | | logging driver | json-file | journald |
| cgroup driver | cgroupfs (except Flatcar edge) | systemd | | cgroup driver | cgroupfs (except Flatcar edge) | systemd |
@ -37,8 +37,8 @@ Together, they diversify Typhoon to support a range of container technologies.
| control plane images | upstream images | upstream images | | control plane images | upstream images | upstream images |
| on-host etcd | rkt-fly | podman | | on-host etcd | rkt-fly | podman |
| on-host kubelet | rkt-fly | podman | | on-host kubelet | rkt-fly | podman |
| CNI plugins | calico or flannel | calico or flannel | | CNI plugins | calico, cilium, flannel | calico, cilium, flannel |
| coordinated drain & OS update | [CLUO](https://github.com/coreos/container-linux-update-operator) addon | (planned) | | coordinated drain & OS update | [FLUO](https://github.com/kinvolk/flatcar-linux-update-operator) addon | [fleetlock](https://github.com/poseidon/fleetlock) |
## Directory Locations ## Directory Locations

View File

@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on AWS with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -55,7 +55,7 @@ terraform {
} }
aws = { aws = {
source = "hashicorp/aws" source = "hashicorp/aws"
version = "3.2.0" version = "3.3.0"
} }
} }
} }
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.19.0"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.18.8 ip-10-0-3-155 Ready <none> 10m v1.19.0
ip-10-0-26-65 Ready <none> 10m v1.18.8 ip-10-0-26-65 Ready <none> 10m v1.19.0
ip-10-0-41-21 Ready <none> 10m v1.18.8 ip-10-0-41-21 Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on Azure with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on Azure with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -52,7 +52,7 @@ terraform {
} }
azurerm = { azurerm = {
source = "hashicorp/azurerm" source = "hashicorp/azurerm"
version = "2.23.0" version = "2.25.0"
} }
} }
} }
@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.19.0"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
@ -161,9 +161,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.18.8 ramius-controller-0 Ready <none> 24m v1.19.0
ramius-worker-000001 Ready <none> 25m v1.18.8 ramius-worker-000001 Ready <none> 25m v1.19.0
ramius-worker-000002 Ready <none> 24m v1.18.8 ramius-worker-000002 Ready <none> 24m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.18.8 cluster on bare-metal with Fedora CoreOS. In this tutorial, we'll network boot and provision a Kubernetes v1.19.0 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.19.0"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -283,9 +283,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.18.8 node1.example.com Ready <none> 10m v1.19.0
node2.example.com Ready <none> 10m v1.18.8 node2.example.com Ready <none> 10m v1.19.0
node3.example.com Ready <none> 10m v1.18.8 node3.example.com Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on DigitalOcean with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on DigitalOcean with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.19.0"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.18.8 10.132.110.130 Ready <none> 10m v1.19.0
10.132.115.81 Ready <none> 10m v1.18.8 10.132.115.81 Ready <none> 10m v1.19.0
10.132.124.107 Ready <none> 10m v1.18.8 10.132.124.107 Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on Google Compute Engine with Fedora CoreOS. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -56,7 +56,7 @@ terraform {
} }
google = { google = {
source = "hashicorp/google" source = "hashicorp/google"
version = "3.34.0" version = "3.36.0"
} }
} }
} }
@ -147,9 +147,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.18.8 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.19.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.19.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# AWS # AWS
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on AWS with CoreOS Container Linux or Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on AWS with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -55,7 +55,7 @@ terraform {
} }
aws = { aws = {
source = "hashicorp/aws" source = "hashicorp/aws"
version = "3.2.0" version = "3.3.0"
} }
} }
} }
@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf ```tf
module "tempest" { module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.19.0"
# AWS # AWS
cluster_name = "tempest" cluster_name = "tempest"
@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.18.8 ip-10-0-3-155 Ready <none> 10m v1.19.0
ip-10-0-26-65 Ready <none> 10m v1.18.8 ip-10-0-26-65 Ready <none> 10m v1.19.0
ip-10-0-41-21 Ready <none> 10m v1.18.8 ip-10-0-41-21 Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Azure # Azure
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on Azure with CoreOS Container Linux or Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on Azure with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -52,7 +52,7 @@ terraform {
} }
azurerm = { azurerm = {
source = "hashicorp/azurerm" source = "hashicorp/azurerm"
version = "2.23.0" version = "2.25.0"
} }
} }
} }
@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.
```tf ```tf
module "ramius" { module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.19.0"
# Azure # Azure
cluster_name = "ramius" cluster_name = "ramius"
@ -149,9 +149,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.18.8 ramius-controller-0 Ready <none> 24m v1.19.0
ramius-worker-000001 Ready <none> 25m v1.18.8 ramius-worker-000001 Ready <none> 25m v1.19.0
ramius-worker-000002 Ready <none> 24m v1.18.8 ramius-worker-000002 Ready <none> 24m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal # Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.18.8 cluster on bare-metal with CoreOS Container Linux or Flatcar Linux. In this tutorial, we'll network boot and provision a Kubernetes v1.19.0 cluster on bare-metal with CoreOS Container Linux or Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition. First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
```tf ```tf
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.19.0"
# bare-metal # bare-metal
cluster_name = "mercury" cluster_name = "mercury"
@ -293,9 +293,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config $ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.18.8 node1.example.com Ready <none> 10m v1.19.0
node2.example.com Ready <none> 10m v1.18.8 node2.example.com Ready <none> 10m v1.19.0
node3.example.com Ready <none> 10m v1.18.8 node3.example.com Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# DigitalOcean # DigitalOcean
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on DigitalOcean with CoreOS Container Linux or Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on DigitalOcean with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
```tf ```tf
module "nemo" { module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.19.0"
# Digital Ocean # Digital Ocean
cluster_name = "nemo" cluster_name = "nemo"
@ -155,9 +155,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config $ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes $ kubectl get nodes
NAME STATUS ROLES AGE VERSION NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.18.8 10.132.110.130 Ready <none> 10m v1.19.0
10.132.115.81 Ready <none> 10m v1.18.8 10.132.115.81 Ready <none> 10m v1.19.0
10.132.124.107 Ready <none> 10m v1.18.8 10.132.124.107 Ready <none> 10m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud # Google Cloud
In this tutorial, we'll create a Kubernetes v1.18.8 cluster on Google Compute Engine with CoreOS Container Linux or Flatcar Linux. In this tutorial, we'll create a Kubernetes v1.19.0 cluster on Google Compute Engine with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -56,7 +56,7 @@ terraform {
} }
google = { google = {
source = "hashicorp/google" source = "hashicorp/google"
version = "3.34.0" version = "3.36.0"
} }
} }
} }
@ -92,7 +92,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.19.0"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -167,9 +167,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.18.8 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.19.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.19.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization * Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
@ -53,7 +53,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf ```tf
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.19.0"
# Google Cloud # Google Cloud
cluster_name = "yavin" cluster_name = "yavin"
@ -91,9 +91,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes $ kubectl get nodes
NAME ROLES STATUS AGE VERSION NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.18.8 yavin-controller-0.c.example-com.internal <none> Ready 6m v1.19.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.19.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.18.8 yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.19.0
``` ```
List the pods. List the pods.

View File

@ -13,12 +13,12 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar
``` ```
module "yavin" { module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.19.0"
... ...
} }
module "mercury" { module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.8" source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.19.0"
... ...
} }
``` ```
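After bumping a module `ref` to a newer tag, the standard Terraform workflow picks up the change; a minimal sketch (module names as in the example above):

```sh
# Sketch: fetch the module source at the new tag, review the plan, then apply
terraform init
terraform plan
terraform apply
```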

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf - name: 40-etcd-cluster.conf
contents: | contents: |
[Service] [Service]
Environment="ETCD_IMAGE_TAG=v3.4.10" Environment="ETCD_IMAGE_TAG=v3.4.12"
Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd" Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
Environment="RKT_RUN_ARGS=--insecure-options=image" Environment="RKT_RUN_ARGS=--insecure-options=image"
Environment="ETCD_NAME=${etcd_name}" Environment="ETCD_NAME=${etcd_name}"
@ -52,7 +52,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -132,7 +132,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \ --volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \ --mount volume=script,target=/apply \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/apply --exec=/apply

View File

@ -25,7 +25,7 @@ systemd:
Description=Kubelet Description=Kubelet
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -127,7 +127,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \ --volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \ --mount volume=config,target=/etc/kubernetes \
--insecure-options=image \ --insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.8 \ docker://quay.io/poseidon/kubelet:v1.19.0 \
--net=host \ --net=host \
--dns=host \ --dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname) --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a> ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.18.8 (upstream) * Kubernetes v1.19.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests) # Kubernetes assets (kubeconfig, manifests)
module "bootstrap" { module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=8ef2fe7c992a8c15d696bd3e3a97be713b025e64" source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=79343f02aea7c69bb03dab2051aa95248c0471d7"
cluster_name = var.cluster_name cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)] api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -28,7 +28,7 @@ systemd:
--network host \ --network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \ --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \ --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.10 quay.io/coreos/etcd:v3.4.12
ExecStop=/usr/bin/podman stop etcd ExecStop=/usr/bin/podman stop etcd
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
@ -54,7 +54,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -123,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \ --volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \ --volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \ --entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.8 quay.io/poseidon/kubelet:v1.19.0
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap ExecStartPost=-/usr/bin/podman stop bootstrap
storage: storage:
@ -159,6 +159,7 @@ storage:
mv manifests /opt/bootstrap/assets/manifests mv manifests /opt/bootstrap/assets/manifests
mv manifests-networking/* /opt/bootstrap/assets/manifests/ mv manifests-networking/* /opt/bootstrap/assets/manifests/
rm -rf assets auth static-manifests tls manifests-networking rm -rf assets auth static-manifests tls manifests-networking
chcon -R -u system_u -t container_file_t /etc/kubernetes/bootstrap-secrets
- path: /opt/bootstrap/apply - path: /opt/bootstrap/apply
mode: 0544 mode: 0544
contents: contents:

View File

@ -24,7 +24,7 @@ systemd:
Description=Kubelet (System Container) Description=Kubelet (System Container)
Wants=rpc-statd.service Wants=rpc-statd.service
[Service] [Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.8 Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.19.0
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin ExecStartPre=/bin/mkdir -p /opt/cni/bin
@ -88,7 +88,7 @@ systemd:
Type=oneshot Type=oneshot
RemainAfterExit=true RemainAfterExit=true
ExecStart=/bin/true ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.8 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME' ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.19.0 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install] [Install]
WantedBy=multi-user.target WantedBy=multi-user.target
storage: storage:

View File

@ -1,6 +1,7 @@
site_name: Typhoon site_name: Typhoon
site_description: A minimal and free Kubernetes distribution site_description: A minimal and free Kubernetes distribution
site_author: Dalton Hubble site_author: Dalton Hubble
copyright: Poseidon Labs
repo_name: poseidon/typhoon repo_name: poseidon/typhoon
repo_url: https://github.com/poseidon/typhoon repo_url: https://github.com/poseidon/typhoon
theme: theme:
@ -41,6 +42,7 @@ markdown_extensions:
- pymdownx.mark - pymdownx.mark
- pymdownx.smartsymbols - pymdownx.smartsymbols
- pymdownx.superfences - pymdownx.superfences
- pymdownx.tabbed
- pymdownx.tasklist: - pymdownx.tasklist:
custom_checkbox: true custom_checkbox: true
- pymdownx.tilde - pymdownx.tilde
@ -82,3 +84,4 @@ nav:
- 'Nginx Ingress': 'addons/ingress.md' - 'Nginx Ingress': 'addons/ingress.md'
- 'Prometheus': 'addons/prometheus.md' - 'Prometheus': 'addons/prometheus.md'
- 'Grafana': 'addons/grafana.md' - 'Grafana': 'addons/grafana.md'
- 'fleetlock': 'addons/fleetlock.md'

View File

@ -1,4 +1,4 @@
mkdocs==1.1.2 mkdocs==1.1.2
mkdocs-material==5.5.6 mkdocs-material==5.5.9
pygments==2.6.1 pygments==2.6.1
pymdown-extensions==7.1.0 pymdown-extensions==7.1.0