Mirror of https://github.com/puppetmaster/typhoon.git, synced 2025-08-02 19:01:34 +02:00
Compare commits
51 Commits
SHA1:

6df6bf904a
5fba20d358
a8d3d3bb12
9ea6d2c245
507aac9b78
dfd2a0ec23
e3bf7d8f9b
49050320ce
74e025c9e4
257a49ce37
df3f40bcce
32886cfba1
0ba2c1a4da
430d139a5b
7c6ab21b94
21178868db
9dcf35e393
81b6f54169
7bce15975c
1f83ae7dbb
a10a1cee9f
a79ad34ba3
99a11442c7
d27f367004
e9c8520359
37f00a3882
4cfafeaa07
90e23f5822
6234147948
c25c59058c
bc9b808d44
4b0203fdb2
331566e1f7
04520e447c
413585681b
96711d7f17
c9059d3fe9
a287920169
8dc170b9d9
aed1a5f33d
31d02b0221
8f875f80f5
16c0b9152b
99dbce67a3
20bfd69780
ba44408b76
455175d9e6
d45804b1f6
907a96916f
187bb17d39
abc31c3711
.github/ISSUE_TEMPLATE.md (vendored; 33 lines deleted)
@@ -1,33 +0,0 @@
-<!-- Fill in either the 'Bug' or 'Feature Request' section -->
-
-## Bug
-
-### Environment
-
-* Platform: aws, azure, bare-metal, google-cloud, digital-ocean
-* OS: fedora-coreos, flatcar-linux
-* Release: Typhoon version or Git SHA (reporting latest is **not** helpful)
-* Terraform: `terraform version` (reporting latest is **not** helpful)
-* Plugins: Provider plugin versions (reporting latest is **not** helpful)
-
-### Problem
-
-Describe the problem.
-
-### Desired Behavior
-
-Describe the goal.
-
-### Steps to Reproduce
-
-Provide clear steps to reproduce the issue unless already covered.
-
-## Feature Request
-
-### Feature
-
-Describe the feature and what problem it solves.
-
-### Tradeoffs
-
-What are the pros and cons of this feature? How will it be exercised and maintained?
.github/ISSUE_TEMPLATE/bug_report.md (vendored; new file, 39 lines)
@@ -0,0 +1,39 @@
+---
+name: Bug report
+about: Report a bug to improve the project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+<!-- READ: Issues are used to receive focused bug reports from users and to track planned future enhancements by the authors. Topics like cluster operation, support, debugging help, advice, and Kubernetes concepts are out of scope and should not use issues-->
+
+**Description**
+
+A clear and concise description of what the bug is.
+
+**Steps to Reproduce**
+
+Provide clear steps to reproduce the bug.
+
+- [ ] Relevant error messages if appropriate (concise, not a dump of everything).
+- [ ] Explored using a vanilla cluster from the [tutorials](https://typhoon.psdn.io/#documentation). Ruled out [customizations](https://typhoon.psdn.io/advanced/customization/).
+
+**Expected behavior**
+
+A clear and concise description of what you expected to happen.
+
+**Environment**
+
+* Platform: aws, azure, bare-metal, google-cloud, digital-ocean
+* OS: fedora-coreos, flatcar-linux (include release version)
+* Release: Typhoon version or Git SHA (reporting latest is **not** helpful)
+* Terraform: `terraform version` (reporting latest is **not** helpful)
+* Plugins: Provider plugin versions (reporting latest is **not** helpful)
+
+**Possible Solution**
+
+<!-- Most bug reports should have some inkling about solutions. Otherwise, your report may be less of a bug and more of a support request (see top).-->
+
+Link to a PR or description.
.github/ISSUE_TEMPLATE/config.yml (vendored; new file, 5 lines)
@@ -0,0 +1,5 @@
+blank_issues_enabled: true
+contact_links:
+  - name: Security
+    url: https://typhoon.psdn.io/topics/security/
+    about: Report security vulnerabilities
.github/issue_template.md (vendored; new file, 15 lines)
@@ -0,0 +1,15 @@
+<!-- READ: Issues are used to receive focused bug reports from users and to track planned future enhancements by the authors. Topics like cluster operation, support, debugging help, advice, and Kubernetes concepts are out of scope and should not use issues-->
+
+## Enhancement
+
+### Overview
+
+One paragraph explanation of the enhancement.
+
+### Motivation
+
+Describe the motivation and what problem this solves.
+
+### Tradeoffs
+
+What are the pros and cons of this feature? How will it be exercised and maintained?
CHANGES.md (93 changes)
@@ -2,8 +2,99 @@

Notable changes between versions.

## Latest

## v1.18.6

* Kubernetes [v1.18.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1186)
* Update Calico from v3.15.0 to [v3.15.1](https://docs.projectcalico.org/v3.15/release-notes/)
* Update Cilium from v1.8.0 to [v1.8.1](https://github.com/cilium/cilium/releases/tag/v1.8.1)

#### Addons

* Update nginx-ingress from v0.33.0 to [v0.34.1](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.34.1)
  * [ingress-nginx](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.34.0) will publish images only to gcr.io
* Update Prometheus from v2.19.1 to [v2.19.2](https://github.com/prometheus/prometheus/releases/tag/v2.19.2)
* Update Grafana from v7.0.4 to [v7.0.6](https://github.com/grafana/grafana/releases/tag/v7.0.6)

## v1.18.5

* Kubernetes [v1.18.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1185)
* Add Cilium v1.8.0 as an (experimental) CNI provider option ([#760](https://github.com/poseidon/typhoon/pull/760))
  * Set `networking` to "cilium" to enable (see the sketch below)
* Update Calico from v3.14.1 to [v3.15.0](https://docs.projectcalico.org/v3.15/release-notes/)

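A minimal sketch of opting into the experimental provider (here the Google Cloud Fedora CoreOS module pinned to this release; other required cluster variables are elided, and `networking` otherwise defaults to "calico"):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.5"

  # ... required cluster variables elided ...

  # opt into the experimental Cilium CNI provider (default is "calico")
  networking = "cilium"
}
```
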
#### DigitalOcean

* Isolate each cluster in an independent DigitalOcean VPC ([#776](https://github.com/poseidon/typhoon/pull/776))
  * Create droplets in a VPC per cluster (matches Typhoon AWS, Azure, and GCP)
  * Require `terraform-provider-digitalocean` v1.16.0+ (action required)
  * Output `vpc_id` for use with an attached DigitalOcean [loadbalancer](https://github.com/poseidon/typhoon/blob/v1.18.5/docs/architecture/digitalocean.md#custom-load-balancer) (example below)

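A hedged sketch of consuming that output with a custom load balancer, assuming a cluster module named `nemo`, the provider's `vpc_uuid` argument (terraform-provider-digitalocean v1.16.0+), and a hypothetical worker droplet tag:

```tf
resource "digitalocean_loadbalancer" "ingress" {
  name     = "nemo-ingress"
  region   = "nyc3"
  vpc_uuid = module.nemo.vpc_id # attach inside the cluster's VPC

  # forward HTTP to worker droplets (tag name is an assumption)
  droplet_tag = "nemo-worker"

  forwarding_rule {
    entry_protocol  = "tcp"
    entry_port      = 80
    target_protocol = "tcp"
    target_port     = 80
  }
}
```
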
### Fedora CoreOS

#### Google Cloud

* Promote Fedora CoreOS to stable
* Remove `os_image` variable deprecated in v1.18.3 ([#777](https://github.com/poseidon/typhoon/pull/777))
  * Use `os_stream` to select a Fedora CoreOS image stream

### Flatcar Linux

#### Azure

* Allow using Flatcar Linux Edge by setting `os_image` to "flatcar-edge" ([#778](https://github.com/poseidon/typhoon/pull/778))

#### Addons

* Update Prometheus from v2.19.0 to [v2.19.1](https://github.com/prometheus/prometheus/releases/tag/v2.19.1)
* Update Grafana from v7.0.3 to [v7.0.4](https://github.com/grafana/grafana/releases/tag/v7.0.4)

## v1.18.4

* Kubernetes [v1.18.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1184)
* Update Kubelet image publishing ([#749](https://github.com/poseidon/typhoon/pull/749))
  * Build Kubelet images internally and publish to Quay and Dockerhub
    * [quay.io/poseidon/kubelet](https://quay.io/repository/poseidon/kubelet) (official)
    * [docker.io/psdn/kubelet](https://hub.docker.com/r/psdn/kubelet) (fallback)
  * Continue offering automated image builds with an alternate tag strategy (see [docs](https://typhoon.psdn.io/topics/security/#container-images))
  * [Document](https://typhoon.psdn.io/advanced/customization/#kubelet) use of alternate Kubelet images during registry incidents
* Update Calico from v3.14.0 to [v3.14.1](https://docs.projectcalico.org/v3.14/release-notes/)
  * Fix [CVE-2020-13597](https://github.com/kubernetes/kubernetes/issues/91507)
* Rename controller NoSchedule taint from `node-role.kubernetes.io/master` to `node-role.kubernetes.io/controller` ([#764](https://github.com/poseidon/typhoon/pull/764))
  * Tolerate the new taint name for workloads that may run on controller nodes (toleration sketch below)
* Remove node label `node.kubernetes.io/master` from controller nodes ([#764](https://github.com/poseidon/typhoon/pull/764))
  * Use `node.kubernetes.io/controller` (present since v1.9.5, [#160](https://github.com/poseidon/typhoon/pull/160)) to node select controllers
* Remove unused Kubelet `-lock-file` and `-exit-on-lock-contention` ([#758](https://github.com/poseidon/typhoon/pull/758))

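For workloads that should keep scheduling onto controller nodes after the rename, a pod spec needs the new toleration (and, to pin to controllers, the long-standing node label). A minimal sketch with a hypothetical example workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: controller-only-example   # hypothetical workload name
spec:
  # select controllers via the label Typhoon has set since v1.9.5
  nodeSelector:
    node.kubernetes.io/controller: "true"
  # tolerate the renamed controller taint
  tolerations:
    - key: node-role.kubernetes.io/controller
      operator: Exists
      effect: NoSchedule
  containers:
    - name: main
      image: docker.io/library/busybox:1.31
      command: ["sleep", "infinity"]
```
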
### Fedora CoreOS

#### Azure

* Use `strict` Fedora CoreOS Config (FCC) snippet parsing ([#755](https://github.com/poseidon/typhoon/pull/755))
* Reduce Calico vxlan interface MTU to maintain performance ([#767](https://github.com/poseidon/typhoon/pull/767))

#### AWS

* Fix Kubelet service race with hostname update ([#766](https://github.com/poseidon/typhoon/pull/766))
  * Wait for a hostname to avoid Kubelet trying to register as `localhost`

### Flatcar Linux

* Use `strict` Container Linux Config (CLC) snippet parsing ([#755](https://github.com/poseidon/typhoon/pull/755))
* Require `terraform-provider-ct` v0.4+, recommend v0.5+ (**action required**; version constraint example below)

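A sketch of the corresponding provider constraint in a cluster's Terraform 0.12-style versions block, matching the repo's own files (the diff below this changelog shows the `~> 0.4` floor being applied):

```tf
terraform {
  required_version = "~> 0.12.6"
  required_providers {
    # v0.4 is the required floor; v0.5+ is recommended per the note above
    ct = "~> 0.5"
  }
}
```
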
### Addons

* Update nginx-ingress from v0.32.0 to [v0.33.0](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.33.0)
* Update Prometheus from v2.18.1 to [v2.19.0](https://github.com/prometheus/prometheus/releases/tag/v2.19.0)
* Update node-exporter from v1.0.0-rc.1 to [v1.0.1](https://github.com/prometheus/node_exporter/releases/tag/v1.0.1)
* Update kube-state-metrics from v1.9.6 to v1.9.7
* Update Grafana from v7.0.0 to v7.0.3

## v1.18.3

* Kubernetes [v1.18.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183)
* Use Kubelet [TLS bootstrap](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) with bootstrap token authentication ([#713](https://github.com/poseidon/typhoon/pull/713))
  * Enable Node [Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) and [NodeRestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to reduce authorization scope
  * Renew Kubelet certificates every 72 hours (rotation is observable as sketched below)

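Kubelet TLS bootstrap surfaces each node's certificate requests as CertificateSigningRequest objects, so bootstrapping and periodic rotation can be observed with plain kubectl (a sketch; the request name comes from the listing):

```sh
# CSRs created by kubelets during bootstrap and certificate rotation
kubectl get csr
kubectl describe csr <name>
```
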
@@ -43,7 +134,7 @@ Notable changes between versions.

#### Google

-* Support Fedora CoreOS [image streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) ([#723](https://github.com/poseidon/typhoon/pull/722))
+* Support Fedora CoreOS [image streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) ([#723](https://github.com/poseidon/typhoon/pull/723))
  * Add `os_stream` variable to set the stream to `stable` (default), `testing`, or `next`
  * Deprecate `os_image` variable. Manual image uploads are no longer needed

README.md (14 changes)
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)

@@ -29,7 +29,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).

| Azure | Fedora CoreOS | [azure/fedora-coreos/kubernetes](azure/fedora-coreos/kubernetes) | alpha |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | beta |
| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](digital-ocean/fedora-coreos/kubernetes) | beta |
-| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |
+| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | stable |

Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/).

@@ -54,7 +54,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.6"

  # Google Cloud
  cluster_name = "yavin"

@@ -93,9 +93,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.18.3
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.18.3
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.18.3
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.18.6
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.18.6
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.18.6
```

List the pods.
@@ -23,7 +23,7 @@ spec:
    spec:
      containers:
        - name: grafana
-         image: docker.io/grafana/grafana:7.0.0
+         image: docker.io/grafana/grafana:7.0.6
          env:
            - name: GF_PATHS_CONFIG
              value: "/etc/grafana/custom.ini"

@@ -22,7 +22,7 @@ spec:
    spec:
      containers:
        - name: nginx-ingress-controller
-         image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
+         image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public

@@ -22,7 +22,7 @@ spec:
    spec:
      containers:
        - name: nginx-ingress-controller
-         image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
+         image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public

@@ -22,7 +22,7 @@ spec:
    spec:
      containers:
        - name: nginx-ingress-controller
-         image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
+         image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public

@@ -22,7 +22,7 @@ spec:
    spec:
      containers:
        - name: nginx-ingress-controller
-         image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
+         image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public

@@ -22,7 +22,7 @@ spec:
    spec:
      containers:
        - name: nginx-ingress-controller
-         image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
+         image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public

@@ -20,7 +20,7 @@ spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
-         image: quay.io/prometheus/prometheus:v2.18.1
+         image: quay.io/prometheus/prometheus:v2.19.2
          args:
            - --web.listen-address=0.0.0.0:9090
            - --config.file=/etc/prometheus/prometheus.yaml

@@ -24,7 +24,7 @@ spec:
      serviceAccountName: kube-state-metrics
      containers:
        - name: kube-state-metrics
-         image: quay.io/coreos/kube-state-metrics:v1.9.6
+         image: quay.io/coreos/kube-state-metrics:v1.9.7
          ports:
            - name: metrics
              containerPort: 8080

@@ -28,7 +28,7 @@ spec:
      hostPID: true
      containers:
        - name: node-exporter
-         image: quay.io/prometheus/node-exporter:v1.0.0-rc.1
+         image: quay.io/prometheus/node-exporter:v1.0.1
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,7 +2,7 @@
systemd:
  units:
    - name: etcd-member.service
-     enable: true
+     enabled: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |

@@ -28,11 +28,11 @@ systemd:
          Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
          Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
    - name: docker.service
-     enable: true
+     enabled: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries

@@ -46,13 +46,13 @@ systemd:
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Kubelet
        Wants=rpc-statd.service
        [Service]
-       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
+       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
        Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -102,16 +102,13 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-         --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
-         --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
-         --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+         --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid

@@ -137,7 +134,7 @@ systemd:
          --volume script,kind=host,source=/opt/bootstrap/apply \
          --mount volume=script,target=/apply \
          --insecure-options=image \
-         docker://quay.io/poseidon/kubelet:v1.18.3 \
+         docker://quay.io/poseidon/kubelet:v1.18.6 \
          --net=host \
          --dns=host \
          --exec=/apply

@@ -191,6 +188,7 @@ storage:
            done
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
@@ -49,10 +49,10 @@ resource "aws_instance" "controllers" {

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
-  count        = var.controller_count
-  content      = data.template_file.controller-configs.*.rendered[count.index]
-  pretty_print = false
-  snippets     = var.controller_snippets
+  count    = var.controller_count
+  content  = data.template_file.controller-configs.*.rendered[count.index]
+  strict   = true
+  snippets = var.controller_snippets
}

# Controller Container Linux configs
@@ -13,6 +13,30 @@ resource "aws_security_group" "controller" {
  }
}

+resource "aws_security_group_rule" "controller-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
resource "aws_security_group_rule" "controller-ssh" {
  security_group_id = aws_security_group.controller.id

@@ -44,39 +68,31 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
  source_security_group_id = aws_security_group.worker.id
}

-# Allow Prometheus to scrape kube-proxy
-resource "aws_security_group_rule" "kube-proxy-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0

  security_group_id = aws_security_group.controller.id

  type      = "ingress"
  protocol  = "tcp"
-  from_port = 10249
-  to_port   = 10249
+  from_port = 4240
+  to_port   = 4240
  source_security_group_id = aws_security_group.worker.id
}

-# Allow Prometheus to scrape kube-scheduler
-resource "aws_security_group_rule" "controller-scheduler-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0

  security_group_id = aws_security_group.controller.id

-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10251
-  to_port                  = 10251
-  source_security_group_id = aws_security_group.worker.id
-}
-
-# Allow Prometheus to scrape kube-controller-manager
-resource "aws_security_group_rule" "controller-manager-metrics" {
-  security_group_id = aws_security_group.controller.id
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10252
-  to_port                  = 10252
-  source_security_group_id = aws_security_group.worker.id
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 4240
+  to_port   = 4240
+  self      = true
}

# IANA VXLAN default
resource "aws_security_group_rule" "controller-vxlan" {
  count = var.networking == "flannel" ? 1 : 0

@@ -111,6 +127,31 @@ resource "aws_security_group_rule" "controller-apiserver" {
  cidr_blocks = ["0.0.0.0/0"]
}

+# Linux VXLAN default
+resource "aws_security_group_rule" "controller-linux-vxlan" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "udp"
+  from_port                = 8472
+  to_port                  = 8472
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-linux-vxlan-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "udp"
+  from_port = 8472
+  to_port   = 8472
+  self      = true
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "controller-node-exporter" {
  security_group_id = aws_security_group.controller.id

@@ -122,6 +163,17 @@ resource "aws_security_group_rule" "controller-node-exporter" {
  source_security_group_id = aws_security_group.worker.id
}

+# Allow Prometheus to scrape kube-proxy
+resource "aws_security_group_rule" "kube-proxy-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10249
+  to_port                  = 10249
+  source_security_group_id = aws_security_group.worker.id
+}
+
# Allow apiserver to access kubelets for exec, log, port-forward
resource "aws_security_group_rule" "controller-kubelet" {
  security_group_id = aws_security_group.controller.id

@@ -143,6 +195,28 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
  self = true
}

+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10251
+  to_port                  = 10251
+  source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10252
+  to_port                  = 10252
+  source_security_group_id = aws_security_group.worker.id
+}
+
resource "aws_security_group_rule" "controller-bgp" {
  security_group_id = aws_security_group.controller.id

@@ -227,6 +301,30 @@ resource "aws_security_group" "worker" {
  }
}

+resource "aws_security_group_rule" "worker-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
resource "aws_security_group_rule" "worker-ssh" {
  security_group_id = aws_security_group.worker.id

@@ -257,6 +355,31 @@ resource "aws_security_group_rule" "worker-https" {
  cidr_blocks = ["0.0.0.0/0"]
}

+resource "aws_security_group_rule" "worker-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 4240
+  to_port                  = 4240
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 4240
+  to_port   = 4240
+  self      = true
+}
+
# IANA VXLAN default
resource "aws_security_group_rule" "worker-vxlan" {
  count = var.networking == "flannel" ? 1 : 0

@@ -281,6 +404,31 @@ resource "aws_security_group_rule" "worker-vxlan-self" {
  self = true
}

+# Linux VXLAN default
+resource "aws_security_group_rule" "worker-linux-vxlan" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "udp"
+  from_port                = 8472
+  to_port                  = 8472
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-linux-vxlan-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "udp"
+  from_port = 8472
+  to_port   = 8472
+  self      = true
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "worker-node-exporter" {
  security_group_id = aws_security_group.worker.id
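These rules open ICMP echo, TCP 4240 (cilium-health), and UDP 8472 (VXLAN) between controllers and workers when `networking = "cilium"`. As a hedged sanity check after enabling Cilium, assuming the agent runs as the `cilium` DaemonSet in `kube-system` as in upstream manifests, node-to-node health can be probed from an agent pod:

```sh
# pick a cilium agent pod and query its health view of the cluster
kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system exec <cilium-pod> -- cilium-health status
```
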
@@ -4,7 +4,7 @@ terraform {
  required_version = "~> 0.12.6"
  required_providers {
    aws      = "~> 2.23"
-    ct       = "~> 0.3"
+    ct       = "~> 0.4"
    template = "~> 2.1"
    null     = "~> 2.1"
  }
@@ -2,11 +2,11 @@
systemd:
  units:
    - name: docker.service
-     enable: true
+     enabled: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries

@@ -19,13 +19,13 @@ systemd:
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Kubelet
        Wants=rpc-statd.service
        [Service]
-       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
+       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
        Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -75,10 +75,8 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-         --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
        %{~ for label in split(",", node_labels) ~}

@@ -115,6 +113,7 @@ storage:
          ${kubeconfig}
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184

@@ -130,7 +129,7 @@ storage:
          --volume config,kind=host,source=/etc/kubernetes \
          --mount volume=config,target=/etc/kubernetes \
          --insecure-options=image \
-         docker://quay.io/poseidon/kubelet:v1.18.3 \
+         docker://quay.io/poseidon/kubelet:v1.18.6 \
          --net=host \
          --dns=host \
          --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -71,9 +71,9 @@ resource "aws_launch_configuration" "worker" {

# Worker Ignition config
data "ct_config" "worker-ignition" {
-  content      = data.template_file.worker-config.rendered
-  pretty_print = false
-  snippets     = var.snippets
+  content  = data.template_file.worker-config.rendered
+  strict   = true
+  snippets = var.snippets
}

# Worker Container Linux config
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -38,11 +38,12 @@ systemd:
      enabled: true
      contents: |
        [Unit]
-       Description=Wait for DNS entries
+       Description=Wait for DNS and hostname
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
+       ExecStartPre=/bin/sh -c 'while [ `hostname -s` == "localhost" ]; do sleep 1; done;'
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service

@@ -54,7 +55,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
-       Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
+       Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -92,16 +93,13 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-         --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
-         --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
-         --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+         --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet

@@ -126,7 +124,7 @@ systemd:
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
-         quay.io/poseidon/kubelet:v1.18.3
+         quay.io/poseidon/kubelet:v1.18.6
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:

@@ -178,6 +176,11 @@ storage:
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
+   - path: /etc/sysctl.d/reverse-path-filter.conf
+     contents:
+       inline: |
+         net.ipv4.conf.default.rp_filter=0
+         net.ipv4.conf.*.rp_filter=0
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
@@ -13,6 +13,30 @@ resource "aws_security_group" "controller" {
  }
}

+resource "aws_security_group_rule" "controller-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
resource "aws_security_group_rule" "controller-ssh" {
  security_group_id = aws_security_group.controller.id

@@ -44,39 +68,31 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
  source_security_group_id = aws_security_group.worker.id
}

-# Allow Prometheus to scrape kube-proxy
-resource "aws_security_group_rule" "kube-proxy-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0

  security_group_id = aws_security_group.controller.id

  type      = "ingress"
  protocol  = "tcp"
-  from_port = 10249
-  to_port   = 10249
+  from_port = 4240
+  to_port   = 4240
  source_security_group_id = aws_security_group.worker.id
}

-# Allow Prometheus to scrape kube-scheduler
-resource "aws_security_group_rule" "controller-scheduler-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0

  security_group_id = aws_security_group.controller.id

-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10251
-  to_port                  = 10251
-  source_security_group_id = aws_security_group.worker.id
-}
-
-# Allow Prometheus to scrape kube-controller-manager
-resource "aws_security_group_rule" "controller-manager-metrics" {
-  security_group_id = aws_security_group.controller.id
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10252
-  to_port                  = 10252
-  source_security_group_id = aws_security_group.worker.id
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 4240
+  to_port   = 4240
+  self      = true
}

# IANA VXLAN default
resource "aws_security_group_rule" "controller-vxlan" {
  count = var.networking == "flannel" ? 1 : 0

@@ -111,6 +127,31 @@ resource "aws_security_group_rule" "controller-apiserver" {
  cidr_blocks = ["0.0.0.0/0"]
}

+# Linux VXLAN default
+resource "aws_security_group_rule" "controller-linux-vxlan" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "udp"
+  from_port                = 8472
+  to_port                  = 8472
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-linux-vxlan-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "udp"
+  from_port = 8472
+  to_port   = 8472
+  self      = true
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "controller-node-exporter" {
  security_group_id = aws_security_group.controller.id

@@ -122,6 +163,17 @@ resource "aws_security_group_rule" "controller-node-exporter" {
  source_security_group_id = aws_security_group.worker.id
}

+# Allow Prometheus to scrape kube-proxy
+resource "aws_security_group_rule" "kube-proxy-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10249
+  to_port                  = 10249
+  source_security_group_id = aws_security_group.worker.id
+}
+
# Allow apiserver to access kubelets for exec, log, port-forward
resource "aws_security_group_rule" "controller-kubelet" {
  security_group_id = aws_security_group.controller.id

@@ -143,6 +195,28 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
  self = true
}

+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10251
+  to_port                  = 10251
+  source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10252
+  to_port                  = 10252
+  source_security_group_id = aws_security_group.worker.id
+}
+
resource "aws_security_group_rule" "controller-bgp" {
  security_group_id = aws_security_group.controller.id

@@ -227,6 +301,30 @@ resource "aws_security_group" "worker" {
  }
}

+resource "aws_security_group_rule" "worker-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
resource "aws_security_group_rule" "worker-ssh" {
  security_group_id = aws_security_group.worker.id

@@ -257,6 +355,31 @@ resource "aws_security_group_rule" "worker-https" {
  cidr_blocks = ["0.0.0.0/0"]
}

+resource "aws_security_group_rule" "worker-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 4240
+  to_port                  = 4240
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 4240
+  to_port   = 4240
+  self      = true
+}
+
# IANA VXLAN default
resource "aws_security_group_rule" "worker-vxlan" {
  count = var.networking == "flannel" ? 1 : 0

@@ -281,6 +404,31 @@ resource "aws_security_group_rule" "worker-vxlan-self" {
  self = true
}

+# Linux VXLAN default
+resource "aws_security_group_rule" "worker-linux-vxlan" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "udp"
+  from_port                = 8472
+  to_port                  = 8472
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-linux-vxlan-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "udp"
+  from_port = 8472
+  to_port   = 8472
+  self      = true
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "worker-node-exporter" {
  security_group_id = aws_security_group.worker.id
@@ -9,11 +9,12 @@ systemd:
      enabled: true
      contents: |
        [Unit]
-       Description=Wait for DNS entries
+       Description=Wait for DNS and hostname
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
+       ExecStartPre=/bin/sh -c 'while [ `hostname -s` == "localhost" ]; do sleep 1; done;'
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service

@@ -24,7 +25,7 @@ systemd:
        Description=Kubelet (System Container)
        Wants=rpc-statd.service
        [Service]
-       Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
+       Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -62,10 +63,8 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-         --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
        %{~ for label in split(",", node_labels) ~}

@@ -90,7 +89,7 @@ systemd:
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
-       ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.3 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
+       ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
        [Install]
        WantedBy=multi-user.target
storage:

@@ -106,6 +105,11 @@ storage:
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
+   - path: /etc/sysctl.d/reverse-path-filter.conf
+     contents:
+       inline: |
+         net.ipv4.conf.default.rp_filter=0
+         net.ipv4.conf.*.rp_filter=0
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,7 +2,7 @@
systemd:
  units:
    - name: etcd-member.service
-     enable: true
+     enabled: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |

@@ -28,11 +28,11 @@ systemd:
          Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
          Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
    - name: docker.service
-     enable: true
+     enabled: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries

@@ -46,13 +46,14 @@ systemd:
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
-     enable: true
+     enabled: true
      contents: |
        [Unit]
        Description=Kubelet
        Wants=rpc-statd.service
        [Service]
-       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
+       Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
+       Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -96,20 +97,18 @@ systemd:
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
+         --cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-         --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-         --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
-         --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
-         --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+         --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid

@@ -135,7 +134,7 @@ systemd:
          --volume script,kind=host,source=/opt/bootstrap/apply \
          --mount volume=script,target=/apply \
          --insecure-options=image \
-         docker://quay.io/poseidon/kubelet:v1.18.3 \
+         docker://quay.io/poseidon/kubelet:v1.18.6 \
          --net=host \
          --dns=host \
          --exec=/apply

@@ -189,6 +188,7 @@ storage:
            done
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
@@ -139,10 +139,10 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
-  count        = var.controller_count
-  content      = data.template_file.controller-configs.*.rendered[count.index]
-  pretty_print = false
-  snippets     = var.controller_snippets
+  count    = var.controller_count
+  content  = data.template_file.controller-configs.*.rendered[count.index]
+  strict   = true
+  snippets = var.controller_snippets
}

# Controller Container Linux configs

@@ -157,6 +157,7 @@ data "template_file" "controller-configs" {
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+   cgroup_driver = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
    kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
@@ -7,6 +7,21 @@ resource "azurerm_network_security_group" "controller" {
  location = azurerm_resource_group.cluster.location
}

+resource "azurerm_network_security_rule" "controller-icmp" {
+  resource_group_name = azurerm_resource_group.cluster.name
+
+  name                        = "allow-icmp"
+  network_security_group_name = azurerm_network_security_group.controller.name
+  priority                    = "1995"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Icmp"
+  source_port_range           = "*"
+  destination_port_range      = "*"
+  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+  destination_address_prefix  = azurerm_subnet.controller.address_prefix
+}
+
resource "azurerm_network_security_rule" "controller-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -100,6 +115,22 @@ resource "azurerm_network_security_rule" "controller-apiserver" {
  destination_address_prefix = azurerm_subnet.controller.address_prefix
}

+resource "azurerm_network_security_rule" "controller-cilium-health" {
+  resource_group_name = azurerm_resource_group.cluster.name
+  count               = var.networking == "cilium" ? 1 : 0
+
+  name                        = "allow-cilium-health"
+  network_security_group_name = azurerm_network_security_group.controller.name
+  priority                    = "2019"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Tcp"
+  source_port_range           = "*"
+  destination_port_range      = "4240"
+  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+  destination_address_prefix  = azurerm_subnet.controller.address_prefix
+}
+
resource "azurerm_network_security_rule" "controller-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -115,6 +146,21 @@ resource "azurerm_network_security_rule" "controller-vxlan" {
  destination_address_prefix = azurerm_subnet.controller.address_prefix
}

+resource "azurerm_network_security_rule" "controller-linux-vxlan" {
+  resource_group_name = azurerm_resource_group.cluster.name
+
+  name                        = "allow-linux-vxlan"
+  network_security_group_name = azurerm_network_security_group.controller.name
+  priority                    = "2021"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Udp"
+  source_port_range           = "*"
+  destination_port_range      = "8472"
+  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+  destination_address_prefix  = azurerm_subnet.controller.address_prefix
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "controller-node-exporter" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -191,6 +237,21 @@ resource "azurerm_network_security_group" "worker" {
  location = azurerm_resource_group.cluster.location
}

+resource "azurerm_network_security_rule" "worker-icmp" {
+  resource_group_name = azurerm_resource_group.cluster.name
+
+  name                        = "allow-icmp"
+  network_security_group_name = azurerm_network_security_group.worker.name
+  priority                    = "1995"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Icmp"
+  source_port_range           = "*"
+  destination_port_range      = "*"
+  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+  destination_address_prefix  = azurerm_subnet.worker.address_prefix
+}
+
resource "azurerm_network_security_rule" "worker-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -236,6 +297,22 @@ resource "azurerm_network_security_rule" "worker-https" {
  destination_address_prefix = azurerm_subnet.worker.address_prefix
}

+resource "azurerm_network_security_rule" "worker-cilium-health" {
+  resource_group_name = azurerm_resource_group.cluster.name
+  count               = var.networking == "cilium" ? 1 : 0
+
+  name                        = "allow-cilium-health"
+  network_security_group_name = azurerm_network_security_group.worker.name
+  priority                    = "2014"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Tcp"
+  source_port_range           = "*"
+  destination_port_range      = "4240"
+  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+  destination_address_prefix  = azurerm_subnet.worker.address_prefix
+}
+
resource "azurerm_network_security_rule" "worker-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -251,6 +328,21 @@ resource "azurerm_network_security_rule" "worker-vxlan" {
  destination_address_prefix = azurerm_subnet.worker.address_prefix
}

+resource "azurerm_network_security_rule" "worker-linux-vxlan" {
+  resource_group_name = azurerm_resource_group.cluster.name
+
+  name                        = "allow-linux-vxlan"
+  network_security_group_name = azurerm_network_security_group.worker.name
+  priority                    = "2016"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Udp"
+  source_port_range           = "*"
|
||||
destination_port_range = "8472"
|
||||
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
|
||||
destination_address_prefix = azurerm_subnet.worker.address_prefix
|
||||
}
|
||||
|
||||
# Allow Prometheus to scrape node-exporter daemonset
|
||||
resource "azurerm_network_security_rule" "worker-node-exporter" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
|
@ -4,7 +4,7 @@ terraform {
|
||||
required_version = "~> 0.12.6"
|
||||
required_providers {
|
||||
azurerm = "~> 2.8"
|
||||
ct = "~> 0.3"
|
||||
ct = "~> 0.4"
|
||||
template = "~> 2.1"
|
||||
null = "~> 2.1"
|
||||
}

@@ -2,11 +2,11 @@
systemd:
units:
- name: docker.service
enable: true
enabled: true
- name: locksmithd.service
mask: true
- name: wait-for-dns.service
enable: true
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
@@ -19,13 +19,14 @@ systemd:
[Install]
RequiredBy=kubelet.service
- name: kubelet.service
enable: true
enabled: true
contents: |
[Unit]
Description=Kubelet
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -69,14 +70,13 @@ systemd:
--authentication-token-webhook \
--authorization-mode=Webhook \
--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
--cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
@@ -92,7 +92,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enable: true
enabled: true
contents: |
[Unit]
Description=Waiting to delete Kubernetes node on shutdown
@@ -113,6 +113,7 @@ storage:
${kubeconfig}
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
@@ -128,7 +129,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.3 \
docker://quay.io/poseidon/kubelet:v1.18.6 \
--net=host \
--dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')

@@ -97,9 +97,9 @@ resource "azurerm_monitor_autoscale_setting" "workers" {

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
pretty_print = false
snippets = var.snippets
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Container Linux configs
@@ -111,6 +111,7 @@ data "template_file" "worker-config" {
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
cgroup_driver = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
node_labels = join(",", var.node_labels)
}
}

@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.18.6 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -10,8 +10,9 @@ module "bootstrap" {
networking = var.networking

# only effective with Calico networking
# we should be able to use 1450 MTU, but in practice, 1410 was needed
network_encapsulation = "vxlan"
network_mtu = "1450"
network_mtu = "1410"

pod_cidr = var.pod_cidr
service_cidr = var.service_cidr

@@ -113,10 +113,10 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_snippets
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs

@@ -54,7 +54,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -92,16 +92,13 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/podman stop kubelet
@@ -126,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.3
quay.io/poseidon/kubelet:v1.18.6
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -178,6 +175,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -7,6 +7,21 @@ resource "azurerm_network_security_group" "controller" {
location = azurerm_resource_group.cluster.location
}

resource "azurerm_network_security_rule" "controller-icmp" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-ssh" {
resource_group_name = azurerm_resource_group.cluster.name

@@ -100,6 +115,22 @@ resource "azurerm_network_security_rule" "controller-apiserver" {
destination_address_prefix = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-cilium-health" {
resource_group_name = azurerm_resource_group.cluster.name
count = var.networking == "cilium" ? 1 : 0

name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2019"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

@@ -115,6 +146,21 @@ resource "azurerm_network_security_rule" "controller-vxlan" {
destination_address_prefix = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-linux-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2021"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.controller.address_prefix
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "controller-node-exporter" {
resource_group_name = azurerm_resource_group.cluster.name
@@ -191,6 +237,21 @@ resource "azurerm_network_security_group" "worker" {
location = azurerm_resource_group.cluster.location
}

resource "azurerm_network_security_rule" "worker-icmp" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-icmp"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "1995"
access = "Allow"
direction = "Inbound"
protocol = "Icmp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-ssh" {
resource_group_name = azurerm_resource_group.cluster.name

@@ -236,6 +297,22 @@ resource "azurerm_network_security_rule" "worker-https" {
destination_address_prefix = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-cilium-health" {
resource_group_name = azurerm_resource_group.cluster.name
count = var.networking == "cilium" ? 1 : 0

name = "allow-cilium-health"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2014"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "4240"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

@@ -251,6 +328,21 @@ resource "azurerm_network_security_rule" "worker-vxlan" {
destination_address_prefix = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-linux-vxlan" {
resource_group_name = azurerm_resource_group.cluster.name

name = "allow-linux-vxlan"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2016"
access = "Allow"
direction = "Inbound"
protocol = "Udp"
source_port_range = "*"
destination_port_range = "8472"
source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
destination_address_prefix = azurerm_subnet.worker.address_prefix
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "worker-node-exporter" {
resource_group_name = azurerm_resource_group.cluster.name

@@ -4,7 +4,7 @@ terraform {
required_version = "~> 0.12.6"
required_providers {
azurerm = "~> 2.8"
ct = "~> 0.3"
ct = "~> 0.4"
template = "~> 2.1"
null = "~> 2.1"
}

@@ -24,7 +24,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -62,10 +62,8 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
%{~ for label in split(",", node_labels) ~}
@@ -90,7 +88,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.3 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
@@ -106,6 +104,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -72,9 +72,9 @@ resource "azurerm_monitor_autoscale_setting" "workers" {

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
pretty_print = false
snippets = var.snippets
content = data.template_file.worker-config.rendered
strict = true
snippets = var.snippets
}

# Worker Fedora CoreOS configs

@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.18.6 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

@@ -2,7 +2,7 @@
systemd:
units:
- name: etcd-member.service
enable: true
enabled: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
@@ -28,11 +28,11 @@ systemd:
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
- name: docker.service
enable: true
enabled: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
enabled: true
contents: |
[Unit]
Description=Watch for kubeconfig
@@ -41,7 +41,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
@@ -60,7 +60,7 @@ systemd:
Description=Kubelet
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -114,17 +114,14 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -150,7 +147,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.3 \
docker://quay.io/poseidon/kubelet:v1.18.6 \
--net=host \
--dns=host \
--exec=/apply
@@ -161,6 +158,7 @@ storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
@@ -207,6 +205,7 @@ storage:
done
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184

@@ -2,7 +2,7 @@
systemd:
units:
- name: installer.service
enable: true
enabled: true
contents: |
[Unit]
Requires=network-online.target

@@ -2,11 +2,11 @@
systemd:
units:
- name: docker.service
enable: true
enabled: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
enabled: true
contents: |
[Unit]
Description=Watch for kubeconfig
@@ -15,7 +15,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
@@ -33,7 +33,7 @@ systemd:
Description=Kubelet
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -87,11 +87,9 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
%{~ for label in compact(split(",", node_labels)) ~}
@@ -114,6 +112,7 @@ storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/hostname
filesystem: root
@@ -123,6 +122,7 @@ storage:
${domain_name}
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184

@@ -141,10 +141,10 @@ resource "matchbox_profile" "controllers" {
}

data "ct_config" "controller-ignitions" {
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
count = length(var.controllers)
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}

data "template_file" "controller-configs" {
@@ -171,10 +171,10 @@ resource "matchbox_profile" "workers" {
}

data "ct_config" "worker-ignitions" {
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
pretty_print = false
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
count = length(var.workers)
content = data.template_file.worker-configs.*.rendered[count.index]
strict = true
snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
}

data "template_file" "worker-configs" {

@@ -4,7 +4,7 @@ terraform {
required_version = "~> 0.12.6"
required_providers {
matchbox = "~> 0.3.0"
ct = "~> 0.3"
ct = "~> 0.4"
template = "~> 2.1"
null = "~> 2.1"
}

@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.18.6 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

@@ -53,7 +53,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -93,17 +93,14 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/podman stop kubelet
@@ -137,7 +134,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.3
quay.io/poseidon/kubelet:v1.18.6
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -189,6 +186,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -23,7 +23,7 @@ systemd:
Description=Kubelet (System Container)
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -63,11 +63,9 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
%{~ for label in compact(split(",", node_labels)) ~}
@@ -108,6 +106,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.18.6 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -2,7 +2,7 @@
systemd:
units:
- name: etcd-member.service
enable: true
enabled: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
@@ -28,11 +28,11 @@ systemd:
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
- name: docker.service
enable: true
enabled: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
enabled: true
contents: |
[Unit]
Description=Watch for kubeconfig
@@ -41,7 +41,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
@@ -62,7 +62,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -111,17 +111,14 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -147,7 +144,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.3 \
docker://quay.io/poseidon/kubelet:v1.18.6 \
--net=host \
--dns=host \
--exec=/apply
@@ -158,6 +155,7 @@ storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /opt/bootstrap/layout
filesystem: root
@@ -198,6 +196,7 @@ storage:
done
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184

@@ -2,11 +2,11 @@
systemd:
units:
- name: docker.service
enable: true
enabled: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
enabled: true
contents: |
[Unit]
Description=Watch for kubeconfig
@@ -15,7 +15,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
@@ -35,7 +35,7 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
EnvironmentFile=/run/metadata/coreos
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -84,11 +84,9 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
@@ -101,7 +99,7 @@ systemd:
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enable: true
enabled: true
contents: |
[Unit]
Description=Waiting to delete Kubernetes node on shutdown
@@ -116,9 +114,11 @@ storage:
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0755
files:
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
mode: 0644
contents:
inline: |
fs.inotify.max_user_watches=16184
@@ -134,7 +134,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://quay.io/poseidon/kubelet:v1.18.3 \
docker://quay.io/poseidon/kubelet:v1.18.6 \
--net=host \
--dns=host \
--exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -46,9 +46,10 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints
@@ -69,10 +70,10 @@ resource "digitalocean_tag" "controllers" {

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_snippets
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}

# Controller Container Linux configs

@@ -1,3 +1,10 @@
# Network VPC
resource "digitalocean_vpc" "network" {
name = var.cluster_name
region = var.region
description = "Network for ${var.cluster_name} cluster"
}

resource "digitalocean_firewall" "rules" {
name = var.cluster_name

@@ -6,6 +13,11 @@ resource "digitalocean_firewall" "rules" {
digitalocean_tag.workers.name
]

inbound_rule {
protocol = "icmp"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# allow ssh, internal flannel, internal node-exporter, internal kubelet
inbound_rule {
protocol = "tcp"
@@ -13,12 +25,27 @@ resource "digitalocean_firewall" "rules" {
source_addresses = ["0.0.0.0/0", "::/0"]
}

# Cilium health
inbound_rule {
protocol = "tcp"
port_range = "4240"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# IANA vxlan (flannel, calico)
inbound_rule {
protocol = "udp"
port_range = "4789"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# Linux vxlan (Cilium)
inbound_rule {
protocol = "udp"
port_range = "8472"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# Allow Prometheus to scrape node-exporter
inbound_rule {
protocol = "tcp"
@@ -33,6 +60,7 @@ resource "digitalocean_firewall" "rules" {
source_tags = [digitalocean_tag.workers.name]
}

# Kubelet
inbound_rule {
protocol = "tcp"
port_range = "10250"

@@ -2,6 +2,8 @@ output "kubeconfig-admin" {
value = module.bootstrap.kubeconfig-admin
}

# Outputs for Kubernetes Ingress

output "controllers_dns" {
value = digitalocean_record.controllers[0].fqdn
}
@@ -45,3 +47,10 @@ output "worker_tag" {
value = digitalocean_tag.workers.name
}

# Outputs for custom load balancing

output "vpc_id" {
description = "ID of the cluster VPC"
value = digitalocean_vpc.network.id
}

@@ -3,8 +3,8 @@
terraform {
required_version = "~> 0.12.6"
required_providers {
digitalocean = "~> 1.3"
ct = "~> 0.3"
digitalocean = "~> 1.16"
ct = "~> 0.4"
template = "~> 2.1"
null = "~> 2.1"
}

@@ -35,9 +35,10 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# only official DigitalOcean images support IPv6
ipv6 = local.is_official_image

user_data = data.ct_config.worker-ignition.rendered
ssh_keys = var.ssh_fingerprints
@@ -58,9 +59,9 @@ resource "digitalocean_tag" "workers" {

# Worker Ignition config
data "ct_config" "worker-ignition" {
content = data.template_file.worker-config.rendered
pretty_print = false
snippets = var.worker_snippets
content = data.template_file.worker-config.rendered
strict = true
snippets = var.worker_snippets
}

# Worker Container Linux config

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.3 (upstream)
* Kubernetes v1.18.6 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/) customization

@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -41,9 +41,10 @@ resource "digitalocean_droplet" "controllers" {
size = var.controller_type

# network
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints

@@ -55,7 +55,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -94,17 +94,14 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
--rotate-certificates \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/podman stop kubelet
@@ -138,7 +135,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
quay.io/poseidon/kubelet:v1.18.3
quay.io/poseidon/kubelet:v1.18.6
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@@ -185,6 +182,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -26,7 +26,7 @@ systemd:
After=afterburn.service
Wants=rpc-statd.service
[Service]
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.3
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
EnvironmentFile=/run/metadata/afterburn
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -65,11 +65,9 @@ systemd:
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
@@ -100,7 +98,7 @@ systemd:
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.3 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
[Install]
WantedBy=multi-user.target
storage:
@@ -111,6 +109,11 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.*.rp_filter=0
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |

@@ -1,3 +1,10 @@
# Network VPC
resource "digitalocean_vpc" "network" {
name = var.cluster_name
region = var.region
description = "Network for ${var.cluster_name} cluster"
}

resource "digitalocean_firewall" "rules" {
name = var.cluster_name

@@ -6,6 +13,11 @@ resource "digitalocean_firewall" "rules" {
digitalocean_tag.workers.name
]

inbound_rule {
protocol = "icmp"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# allow ssh, internal flannel, internal node-exporter, internal kubelet
inbound_rule {
protocol = "tcp"
@@ -13,12 +25,27 @@ resource "digitalocean_firewall" "rules" {
source_addresses = ["0.0.0.0/0", "::/0"]
}

# Cilium health
inbound_rule {
protocol = "tcp"
port_range = "4240"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# IANA vxlan (flannel, calico)
inbound_rule {
protocol = "udp"
port_range = "4789"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# Linux vxlan (Cilium)
inbound_rule {
protocol = "udp"
port_range = "8472"
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}

# Allow Prometheus to scrape node-exporter
inbound_rule {
protocol = "tcp"
@@ -33,6 +60,7 @@ resource "digitalocean_firewall" "rules" {
source_tags = [digitalocean_tag.workers.name]
}

# Kubelet
inbound_rule {
protocol = "tcp"
port_range = "10250"

@@ -2,6 +2,8 @@ output "kubeconfig-admin" {
value = module.bootstrap.kubeconfig-admin
}

# Outputs for Kubernetes Ingress

output "controllers_dns" {
value = digitalocean_record.controllers[0].fqdn
}
@@ -45,3 +47,9 @@ output "worker_tag" {
value = digitalocean_tag.workers.name
}

# Outputs for custom load balancing

output "vpc_id" {
description = "ID of the cluster VPC"
value = digitalocean_vpc.network.id
}

@@ -3,8 +3,8 @@
terraform {
required_version = "~> 0.12.6"
required_providers {
digitalocean = "~> 1.3"
ct = "~> 0.3"
digitalocean = "~> 1.16"
ct = "~> 0.4"
template = "~> 2.1"
null = "~> 2.1"
}

@@ -37,9 +37,10 @@ resource "digitalocean_droplet" "workers" {
size = var.worker_type

# network
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false
private_networking = true
vpc_uuid = digitalocean_vpc.network.id
# TODO: Only official DigitalOcean images support IPv6
ipv6 = false

user_data = data.ct_config.worker-ignition.rendered
ssh_keys = var.ssh_fingerprints

@@ -174,3 +174,34 @@ module "nemo" {

To customize low-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootstrap](https://github.com/poseidon/terraform-render-bootstrap) Terraform module.
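For instance, advanced users might fork that module and point the `source` at their own repository and ref (a minimal sketch; the fork URL and ref below are hypothetical, and other arguments are elided with `...` as in the examples above):

```tf
module "bootstrap" {
  # hypothetical fork of poseidon/terraform-render-bootstrap
  source = "git::https://github.com/example/terraform-render-bootstrap.git?ref=custom-feature"

  cluster_name = var.cluster_name
  ...
}
```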

## Kubelet

Typhoon publishes Kubelet [container images](/topics/security/#container-images) to Quay.io (default) and to Dockerhub (in case of a Quay [outage](https://github.com/poseidon/typhoon/issues/735) or breach). Quay automated builds also provide the option for fully verifiable tagged images (`build-{short_sha}`).
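For example, such a build tag could be pulled and inspected directly (the short SHA below is illustrative, not a real tag):

```
$ podman pull quay.io/poseidon/kubelet:build-1a2b3c4
```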
|
||||
|
||||
To set an alternative Kubelet image, use a snippet to set a systemd dropin.
|
||||
|
||||
```
|
||||
# host-image-override.yaml
|
||||
variant: fcos <- remove for Flatcar Linux
|
||||
version: 1.0.0 <- remove for Flatcar Linux
|
||||
systemd:
|
||||
units:
|
||||
- name: kubelet.service
|
||||
dropins:
|
||||
- name: 10-image-override.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=docker.io/psdn/kubelet:v1.18.3
|
||||
```
|
||||
|
||||
```
|
||||
module "nemo" {
|
||||
...
|
||||
|
||||
worker_snippets = [
|
||||
file("./snippets/host-image-override.yaml")
|
||||
]
|
||||
...
|
||||
}
|
||||
```
|

@@ -65,7 +65,8 @@ The AWS internal `workers` module supports a number of [variables](https://githu
|:-----|:------------|:--------|:--------|
| worker_count | Number of instances | 1 | 3 |
| instance_type | EC2 instance type | "t3.small" | "t3.medium" |
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, coreos-stable, coreos-beta, coreos-alpha |
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
| disk_size | Size of the EBS volume in GB | 40 | 100 |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
||||
@ -82,7 +83,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
|
||||
|
||||
```tf
|
||||
module "ramius-worker-pool" {
|
||||
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.18.3"
|
||||
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.18.6"
|
||||
|
||||
# Azure
|
||||
region = module.ramius.region
|
||||
@ -134,7 +135,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| worker_count | Number of instances | 1 | 3 |
|
||||
| vm_type | Machine type for instances | "Standard_DS1_v2" | See below |
|
||||
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge, coreos-stable, coreos-beta, coreos-alpha |
|
||||
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
|
||||
| priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | "Regular" | "Spot" |
|
||||
| snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
@ -148,7 +149,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c
|
||||
|
||||
```tf
|
||||
module "yavin-worker-pool" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.18.3"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.18.6"
|
||||
|
||||
# Google Cloud
|
||||
region = "europe-west2"
|
||||
@ -179,11 +180,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
|
||||
```
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.18.3
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.18.3
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.18.3
|
||||
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.18.3
|
||||
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.18.3
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.18.6
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.18.6
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.18.6
|
||||
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.18.6
|
||||
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.18.6
|
||||
```
|
||||
|
||||
### Variables
|
||||
@ -199,7 +200,7 @@ The Google Cloud internal `workers` module supports a number of [variables](http
|
||||
| region | Region for the worker pool instances. May differ from the cluster's region | "europe-west2" |
|
||||
| network | Must be set to `network_name` output by cluster | module.cluster.network_name |
|
||||
| kubeconfig | Must be set to `kubeconfig` output by cluster | module.cluster.kubeconfig |
|
||||
| os_image | Container Linux image for compute instances | "fedora-coreos-or-flatcar-image", coreos-stable, coreos-beta, coreos-alpha |
|
||||
| os_image | Container Linux image for compute instances | "uploaded-flatcar-image" |
|
||||
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
|
||||
|
||||
Check the list of regions [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
|
||||
|
@ -30,6 +30,7 @@ Add a DigitalOcean load balancer to distribute IPv4 TCP traffic (HTTP/HTTPS Ingr
|
||||
resource "digitalocean_loadbalancer" "ingress" {
|
||||
name = "ingress"
|
||||
region = "fra1"
|
||||
vpc_uuid = module.nemo.vpc_id
|
||||
droplet_tag = module.nemo.worker_tag
|
||||
|
||||
healthcheck {
|
||||
|
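The hunk above is truncated at `healthcheck`. For orientation, a fuller sketch of such a load balancer — the `forwarding_rule` and `healthcheck` values below are illustrative assumptions, not part of this diff:

```tf
resource "digitalocean_loadbalancer" "ingress" {
  name        = "ingress"
  region      = "fra1"
  vpc_uuid    = module.nemo.vpc_id
  droplet_tag = module.nemo.worker_tag

  # Forward IPv4 TCP traffic to tagged worker droplets (illustrative ports)
  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "tcp"
    target_port     = 80
    target_protocol = "tcp"
  }

  # Probe workers over TCP on the same port (illustrative)
  healthcheck {
    protocol = "tcp"
    port     = 80
  }
}
```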
@@ -1,6 +1,6 @@
# AWS

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on AWS with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -49,7 +49,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
-  version = "2.63.0"
+  version = "2.70.0"
  region  = "eu-central-1"
  shared_credentials_file = "/home/user/.config/aws/credentials"
}

@@ -70,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

```tf
module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.6"

  # AWS
  cluster_name = "tempest"

@@ -143,9 +143,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME           STATUS  ROLES   AGE  VERSION
-ip-10-0-3-155  Ready   <none>  10m  v1.18.3
-ip-10-0-26-65  Ready   <none>  10m  v1.18.3
-ip-10-0-41-21  Ready   <none>  10m  v1.18.3
+ip-10-0-3-155  Ready   <none>  10m  v1.18.6
+ip-10-0-26-65  Ready   <none>  10m  v1.18.6
+ip-10-0-41-21  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -208,7 +208,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
-| os_image | AMI channel for Fedora CoreOS | not yet used | ? |
+| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
| disk_size | Size of the EBS volume in GB | 40 | 100 |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |

@@ -216,7 +216,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
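Given the new `cilium` choice in the tables above, selecting it is a one-line change in the cluster module — a minimal sketch with all other required fields elided:

```tf
module "tempest" {
  # ...other cluster settings unchanged...
  networking = "cilium" # default is "calico"; "flannel" also supported
}
```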
@@ -1,6 +1,6 @@
# Azure

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on Azure with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -47,7 +47,7 @@ Configure the Azure provider in a `providers.tf` file.

```tf
provider "azurerm" {
-  version = "2.11.0"
+  version = "2.19.0"
}

provider "ct" {

@@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.6"

  # Azure
  cluster_name = "ramius"

@@ -158,9 +158,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
-ramius-controller-0   Ready   <none>  24m  v1.18.3
-ramius-worker-000001  Ready   <none>  25m  v1.18.3
-ramius-worker-000002  Ready   <none>  24m  v1.18.3
+ramius-controller-0   Ready   <none>  24m  v1.18.6
+ramius-worker-000001  Ready   <none>  25m  v1.18.6
+ramius-worker-000002  Ready   <none>  24m  v1.18.6
```

List the pods.

@@ -242,7 +242,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
-| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@@ -1,6 +1,6 @@
# Bare-Metal

-In this tutorial, we'll network boot and provision a Kubernetes v1.18.3 cluster on bare-metal with Fedora CoreOS.
+In this tutorial, we'll network boot and provision a Kubernetes v1.18.6 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@@ -160,7 +160,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete

```tf
module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.18.6"

  # bare-metal
  cluster_name = "mercury"

@@ -289,9 +289,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME               STATUS  ROLES   AGE  VERSION
-node1.example.com  Ready   <none>  10m  v1.18.3
-node2.example.com  Ready   <none>  10m  v1.18.3
-node3.example.com  Ready   <none>  10m  v1.18.3
+node1.example.com  Ready   <none>  10m  v1.18.6
+node2.example.com  Ready   <none>  10m  v1.18.6
+node3.example.com  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -339,7 +339,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|:--------|
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Fedora CoreOS images into the cache | false | true |
| install_disk | Disk device where Fedora CoreOS should be installed | "sda" (not "/dev/sda" like Container Linux) | "sdb" |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| snippets | Map from machine names to lists of Fedora CoreOS Config snippets | {} | [examples](/advanced/customization/) |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | "first-found" | "can-reach=10.0.0.1" |
@@ -1,6 +1,6 @@
-# Digital Ocean
+# DigitalOcean

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on DigitalOcean with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on DigitalOcean with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@@ -50,7 +50,7 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.

```tf
provider "digitalocean" {
-  version = "1.18.0"
+  version = "1.20.0"
  token = "${chomp(file("~/.config/digital-ocean/token"))}"
}

@@ -79,7 +79,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern

```tf
module "nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.18.6"

  # Digital Ocean
  cluster_name = "nemo"

@@ -153,9 +153,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME            STATUS  ROLES   AGE  VERSION
-10.132.110.130  Ready   <none>  10m  v1.18.3
-10.132.115.81   Ready   <none>  10m  v1.18.3
-10.132.124.107  Ready   <none>  10m  v1.18.3
+10.132.110.130  Ready   <none>  10m  v1.18.6
+10.132.115.81   Ready   <none>  10m  v1.18.6
+10.132.124.107  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -238,7 +238,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| worker_type | Droplet type for workers | "s-1vcpu-2gb" | s-1vcpu-2gb, s-2vcpu-2gb, ... |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [example](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@@ -1,6 +1,6 @@
# Google Cloud

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on Google Compute Engine with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on Google Compute Engine with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@@ -49,7 +49,7 @@ Configure the Google Cloud provider to use your service account key, project-id,

```tf
provider "google" {
-  version = "3.22.0"
+  version = "3.30.0"
  project     = "project-id"
  region      = "us-central1"
  credentials = file("~/.config/google-cloud/terraform.json")

@@ -145,9 +145,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.3
-yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.3
-yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.3
+yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.6
+yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.6
+yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.6
```

List the pods.

@@ -213,12 +213,12 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
-| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
+| os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
@@ -1,6 +1,6 @@
# AWS

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on AWS with CoreOS Container Linux or Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on AWS with CoreOS Container Linux or Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -49,7 +49,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
-  version = "2.63.0"
+  version = "2.70.0"
  region  = "eu-central-1"
  shared_credentials_file = "/home/user/.config/aws/credentials"
}

@@ -70,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.

```tf
module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.18.6"

  # AWS
  cluster_name = "tempest"

@@ -143,9 +143,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME           STATUS  ROLES   AGE  VERSION
-ip-10-0-3-155  Ready   <none>  10m  v1.18.3
-ip-10-0-26-65  Ready   <none>  10m  v1.18.3
-ip-10-0-41-21  Ready   <none>  10m  v1.18.3
+ip-10-0-3-155  Ready   <none>  10m  v1.18.6
+ip-10-0-26-65  Ready   <none>  10m  v1.18.6
+ip-10-0-41-21  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -208,7 +208,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
-| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
+| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
| disk_size | Size of the EBS volume in GB | 40 | 100 |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |

@@ -216,7 +216,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0/null | 0.10 |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
@@ -1,6 +1,6 @@
# Azure

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on Azure with CoreOS Container Linux or Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on Azure with CoreOS Container Linux or Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -47,7 +47,7 @@ Configure the Azure provider in a `providers.tf` file.

```tf
provider "azurerm" {
-  version = "2.11.0"
+  version = "2.19.0"
}

provider "ct" {

@@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.

```tf
module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.18.6"

  # Azure
  cluster_name = "ramius"

@@ -146,9 +146,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
-ramius-controller-0   Ready   <none>  24m  v1.18.3
-ramius-worker-000001  Ready   <none>  25m  v1.18.3
-ramius-worker-000002  Ready   <none>  24m  v1.18.3
+ramius-controller-0   Ready   <none>  24m  v1.18.6
+ramius-worker-000001  Ready   <none>  25m  v1.18.6
+ramius-worker-000002  Ready   <none>  24m  v1.18.6
```

List the pods.

@@ -225,12 +225,12 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
-| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge, coreos-stable, coreos-beta, coreos-alpha |
+| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
-| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@@ -1,6 +1,6 @@
# Bare-Metal

-In this tutorial, we'll network boot and provision a Kubernetes v1.18.3 cluster on bare-metal with CoreOS Container Linux or Flatcar Linux.
+In this tutorial, we'll network boot and provision a Kubernetes v1.18.6 cluster on bare-metal with CoreOS Container Linux or Flatcar Linux.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@@ -160,7 +160,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne

```tf
module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.6"

  # bare-metal
  cluster_name = "mercury"

@@ -299,9 +299,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME               STATUS  ROLES   AGE  VERSION
-node1.example.com  Ready   <none>  10m  v1.18.3
-node2.example.com  Ready   <none>  10m  v1.18.3
-node3.example.com  Ready   <none>  10m  v1.18.3
+node1.example.com  Ready   <none>  10m  v1.18.6
+node2.example.com  Ready   <none>  10m  v1.18.6
+node3.example.com  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -336,7 +336,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | "mercury" |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
-| os_channel | Channel for a Container Linux derivative | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
+| os_channel | Channel for a Container Linux derivative | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
| os_version | Version for a Container Linux derivative to PXE and install | "2345.3.1" |
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |

@@ -350,7 +350,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| download_protocol | Protocol iPXE uses to download the kernel and initrd. iPXE must be compiled with [crypto](https://ipxe.org/crypto) support for https. Unused if cached_install is true | "https" | "http" |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux or Flatcar images into the cache | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| snippets | Map from machine names to lists of Container Linux Config snippets | {} | [examples](/advanced/customization/) |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | "first-found" | "can-reach=10.0.0.1" |
@@ -1,6 +1,6 @@
-# Digital Ocean
+# DigitalOcean

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on DigitalOcean with CoreOS Container Linux or Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on DigitalOcean with CoreOS Container Linux or Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@@ -50,7 +50,7 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.

```tf
provider "digitalocean" {
-  version = "1.18.0"
+  version = "1.20.0"
  token = "${chomp(file("~/.config/digital-ocean/token"))}"
}

@@ -79,7 +79,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube

```tf
module "nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.18.6"

  # Digital Ocean
  cluster_name = "nemo"

@@ -153,9 +153,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME            STATUS  ROLES   AGE  VERSION
-10.132.110.130  Ready   <none>  10m  v1.18.3
-10.132.115.81   Ready   <none>  10m  v1.18.3
-10.132.124.107  Ready   <none>  10m  v1.18.3
+10.132.110.130  Ready   <none>  10m  v1.18.6
+10.132.115.81   Ready   <none>  10m  v1.18.6
+10.132.124.107  Ready   <none>  10m  v1.18.6
```

List the pods.

@@ -190,7 +190,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital
| cluster_name | Unique cluster name (prepended to dns_zone) | "nemo" |
| region | Digital Ocean region | "nyc1", "sfo2", "fra1", "tor1" |
| dns_zone | Digital Ocean domain (i.e. DNS zone) | "do.example.com" |
-| os_image | Container Linux image for instances | "custom-image-id", coreos-stable, coreos-beta, coreos-alpha |
+| os_image | Container Linux image for instances | "uploaded-flatcar-image-id" |
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |

#### DNS Zone

@@ -238,7 +238,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| worker_type | Droplet type for workers | "s-1vcpu-2gb" | s-1vcpu-2gb, s-2vcpu-2gb, ... |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@@ -1,6 +1,6 @@
# Google Cloud

-In this tutorial, we'll create a Kubernetes v1.18.3 cluster on Google Compute Engine with CoreOS Container Linux or Flatcar Linux.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on Google Compute Engine with CoreOS Container Linux or Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@@ -49,7 +49,7 @@ Configure the Google Cloud provider to use your service account key, project-id,

```tf
provider "google" {
-  version = "3.22.0"
+  version = "3.30.0"
  project     = "project-id"
  region      = "us-central1"
  credentials = file("~/.config/google-cloud/terraform.json")

@@ -90,7 +90,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber

```tf
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.18.6"

  # Google Cloud
  cluster_name = "yavin"

@@ -165,9 +165,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.3
-yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.3
-yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.3
+yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.6
+yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.6
+yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.6
```

List the pods.

@@ -204,7 +204,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-
| region | Google Cloud region | "us-central1" |
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
-| os_image | Container Linux image for compute instances | "flatcar-linux-2303-4-0", coreos-stable, coreos-beta, coreos-alpha |
+| os_image | Container Linux image for compute instances | "flatcar-linux-2303-4-0" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |

Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Container Linux [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep coreos`.

@@ -238,7 +238,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](fedora-coreos/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](addons/overview/)

@@ -29,7 +29,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
| Azure | Fedora CoreOS | [azure/fedora-coreos/kubernetes](fedora-coreos/azure.md) | alpha |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](fedora-coreos/bare-metal.md) | beta |
| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](fedora-coreos/digitalocean.md) | beta |
-| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |
+| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | stable |

Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/).

@@ -53,7 +53,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.6"

  # Google Cloud
  cluster_name = "yavin"

@@ -91,9 +91,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.3
-yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.3
-yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.3
+yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.18.6
+yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.18.6
+yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.18.6
```

List the pods.
@@ -13,12 +13,12 @@ Typhoon provides tagged releases to allow clusters to be versioned using ordinar

```
module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.6"
  ...
}

module "mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.3"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.18.6"
  ...
}
```
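After bumping a module `?ref=` as in the hunk above, one way to refresh the pinned module sources before applying — a sketch, assuming Terraform 0.12:

```
$ terraform init -upgrade   # re-fetch modules at the new ref
$ terraform plan            # review the upgrade before applying
```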
@@ -38,7 +38,7 @@ Network performance varies based on the platform and CNI plugin. `iperf` was use

Notes:

-* Calico and Flannel have comparable performance. Platform and configuration differences dominate.
+* Calico, Cilium, and Flannel have comparable performance. Platform and configuration differences dominate.
* Azure and DigitalOcean network performance can be quite variable or depend on machine type
* Only [certain AWS EC2 instance types](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) allow jumbo frames. This is why the default MTU on AWS must be 1480.
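To make that MTU note concrete: on AWS, raising the Calico MTU for jumbo-frame-capable instance types is a single variable — a sketch using the `network_mtu` value from the variables tables, with other module fields elided:

```tf
module "tempest" {
  # ...other cluster settings unchanged...
  networking  = "calico"
  network_mtu = 8981 # jumbo frames; requires supported EC2 instance types
}
```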
@@ -52,9 +52,21 @@ Typhoon uses upstream container images (where possible) and upstream binaries.

!!! note
    Kubernetes releases `kubelet` as a binary for distros to package, either as a DEB/RPM on traditional distros or as a container image for container-optimized operating systems.

-Typhoon [packages](https://github.com/poseidon/kubelet) the upstream Kubelet and its dependencies as a [container image](https://quay.io/repository/poseidon/kubelet) for use in Typhoon. The upstream Kubelet binary is checksummed and packaged directly. Quay automated builds provide verifiability and confidence in image contents.
+Typhoon [packages](https://github.com/poseidon/kubelet) the upstream Kubelet and its dependencies as a [container image](https://quay.io/repository/poseidon/kubelet). Builds fetch the upstream Kubelet binary and verify its checksum.
+
+The Kubelet image is published to Quay.io and Dockerhub.
+
+* [quay.io/poseidon/kubelet](https://quay.io/repository/poseidon/kubelet) (official)
+* [docker.io/psdn/kubelet](https://hub.docker.com/r/psdn/kubelet) (fallback)
+
+Two tag styles indicate the build strategy used.
+
+* Typhoon internal infra publishes single and multi-arch images (e.g. `v1.18.4`, `v1.18.4-amd64`, `v1.18.4-arm64`, `v1.18.4-2-g23228e6-amd64`, `v1.18.4-2-g23228e6-arm64`)
+* Quay and Dockerhub automated builds publish verifiable images (e.g. `build-SHA` on Quay, `build-TAG` on Dockerhub)
+
+The Typhoon-built Kubelet image is used as the official image. Automated builds provide an alternative image for those preferring to trust images built by Quay/Dockerhub (albeit lacking multi-arch). To use the fallback registry or an alternative tag, see [customization](/advanced/customization/#kubelet).
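A quick way to compare the registries and tag styles described above — the `build-23228e6` tag here is a hypothetical short commit SHA, shown only to illustrate the format:

```
$ docker pull quay.io/poseidon/kubelet:v1.18.6        # official multi-arch tag
$ docker pull quay.io/poseidon/kubelet:build-23228e6  # hypothetical verifiable Quay build tag
$ docker pull docker.io/psdn/kubelet:v1.18.6          # Dockerhub fallback
```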
## Disclosures

-If you find security issues, please email dghubble at gmail. If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.
+If you find security issues, please email `security@psdn.io`. If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,7 +2,7 @@
systemd:
  units:
    - name: etcd-member.service
-      enable: true
+      enabled: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |

@@ -28,11 +28,11 @@ systemd:
            Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
            Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
    - name: docker.service
-      enable: true
+      enabled: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
-      enable: true
+      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
@@ -46,13 +46,13 @@ systemd:
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
-      enable: true
+      enabled: true
      contents: |
        [Unit]
        Description=Kubelet
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
+        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -100,15 +100,12 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
-          --node-labels=node.kubernetes.io/master \
+          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
-          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
          --read-only-port=0 \
          --rotate-certificates \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -135,7 +132,7 @@ systemd:
          --volume script,kind=host,source=/opt/bootstrap/apply \
          --mount volume=script,target=/apply \
          --insecure-options=image \
-          docker://quay.io/poseidon/kubelet:v1.18.3 \
+          docker://quay.io/poseidon/kubelet:v1.18.6 \
          --net=host \
          --dns=host \
          --exec=/apply
@@ -189,6 +186,7 @@ storage:
          done
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
@@ -65,10 +65,10 @@ resource "google_compute_instance" "controllers" {

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
-  count        = var.controller_count
-  content      = data.template_file.controller-configs.*.rendered[count.index]
-  pretty_print = false
-  snippets     = var.controller_snippets
+  count    = var.controller_count
+  content  = data.template_file.controller-configs.*.rendered[count.index]
+  strict   = true
+  snippets = var.controller_snippets
}

# Controller Container Linux configs
@@ -112,6 +112,32 @@ resource "google_compute_firewall" "internal-vxlan" {
  target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}

+# Cilium VXLAN
+resource "google_compute_firewall" "internal-linux-vxlan" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  name    = "${var.cluster_name}-linux-vxlan"
+  network = google_compute_network.network.name
+
+  allow {
+    protocol = "udp"
+    ports    = [8472]
+  }
+
+  # Cilium health
+  allow {
+    protocol = "icmp"
+  }
+
+  allow {
+    protocol = "tcp"
+    ports    = [4240]
+  }
+
+  source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
+  target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
+}
+
# Allow Prometheus to scrape node-exporter daemonset
resource "google_compute_firewall" "internal-node-exporter" {
  name = "${var.cluster_name}-internal-node-exporter"
@@ -4,7 +4,7 @@ terraform {
  required_version = "~> 0.12.6"
  required_providers {
    google = ">= 2.19, < 4.0"
-    ct     = "~> 0.3"
+    ct     = "~> 0.4"
    template = "~> 2.1"
    null     = "~> 2.1"
  }
@@ -2,11 +2,11 @@
systemd:
  units:
    - name: docker.service
-      enable: true
+      enabled: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
-      enable: true
+      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
@@ -19,13 +19,13 @@ systemd:
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
-      enable: true
+      enabled: true
      contents: |
        [Unit]
        Description=Kubelet
        Wants=rpc-statd.service
        [Service]
-        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.3
+        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -73,10 +73,8 @@ systemd:
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/var/lib/kubelet/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
@@ -92,7 +90,7 @@ systemd:
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
-      enable: true
+      enabled: true
      contents: |
        [Unit]
        Description=Waiting to delete Kubernetes node on shutdown
@@ -113,6 +111,7 @@ storage:
          ${kubeconfig}
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
@@ -128,7 +127,7 @@ storage:
        --volume config,kind=host,source=/etc/kubernetes \
        --mount volume=config,target=/etc/kubernetes \
        --insecure-options=image \
-        docker://quay.io/poseidon/kubelet:v1.18.3 \
+        docker://quay.io/poseidon/kubelet:v1.18.6 \
        --net=host \
        --dns=host \
        --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -71,9 +71,9 @@ resource "google_compute_instance_template" "worker" {

# Worker Ignition config
data "ct_config" "worker-ignition" {
-  content      = data.template_file.worker-config.rendered
-  pretty_print = false
-  snippets     = var.snippets
+  content  = data.template_file.worker-config.rendered
+  strict   = true
+  snippets = var.snippets
}

# Worker Container Linux config
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.3 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=ff7ec52d0a5e97b8ca6b86a80a7e5e1ea8570487"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]