Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-11 05:16:03 +02:00)

Compare commits (84 commits)
Commit SHA1s, newest first (author and date columns were not captured by the mirror):

6df6bf904a
5fba20d358
a8d3d3bb12
9ea6d2c245
507aac9b78
dfd2a0ec23
e3bf7d8f9b
49050320ce
74e025c9e4
257a49ce37
df3f40bcce
32886cfba1
0ba2c1a4da
430d139a5b
7c6ab21b94
21178868db
9dcf35e393
81b6f54169
7bce15975c
1f83ae7dbb
a10a1cee9f
a79ad34ba3
99a11442c7
d27f367004
e9c8520359
37f00a3882
4cfafeaa07
90e23f5822
6234147948
c25c59058c
bc9b808d44
4b0203fdb2
331566e1f7
04520e447c
413585681b
96711d7f17
c9059d3fe9
a287920169
8dc170b9d9
aed1a5f33d
31d02b0221
8f875f80f5
16c0b9152b
99dbce67a3
20bfd69780
ba44408b76
455175d9e6
d45804b1f6
907a96916f
187bb17d39
abc31c3711
283e14f3e0
e72f916c8d
c52f9f8d08
ecae6679ff
4760543356
09eb208b4e
8d024d22ad
3bdddc452c
ff4187a1fb
2578be1f96
90edcd3d77
a927c7c790
d952576d2f
70e389f37f
a18bd0a707
01905b00bc
f4194cd57a
a2db4fa8c4
358854e712
b5dabcea31
3f0a5d2715
33173c0206
70f30d9c07
6afc1643d9
e71e27e769
64035005d4
317416b316
2c1af917ec
4ac2d94999
fd044ee117
38a6bddd06
d8966afdda
84ed0a31c3
33  .github/ISSUE_TEMPLATE.md (vendored, deleted)

```diff
@@ -1,33 +0,0 @@
-<!-- Fill in either the 'Bug' or 'Feature Request' section -->
-
-## Bug
-
-### Environment
-
-* Platform: aws, azure, bare-metal, google-cloud, digital-ocean
-* OS: fedora-coreos, flatcar-linux
-* Release: Typhoon version or Git SHA (reporting latest is **not** helpful)
-* Terraform: `terraform version` (reporting latest is **not** helpful)
-* Plugins: Provider plugin versions (reporting latest is **not** helpful)
-
-### Problem
-
-Describe the problem.
-
-### Desired Behavior
-
-Describe the goal.
-
-### Steps to Reproduce
-
-Provide clear steps to reproduce the issue unless already covered.
-
-## Feature Request
-
-### Feature
-
-Describe the feature and what problem it solves.
-
-### Tradeoffs
-
-What are the pros and cons of this feature? How will it be exercised and maintained?
```
39  .github/ISSUE_TEMPLATE/bug_report.md (vendored, new file)

```diff
@@ -0,0 +1,39 @@
+---
+name: Bug report
+about: Report a bug to improve the project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+<!-- READ: Issues are used to receive focused bug reports from users and to track planned future enhancements by the authors. Topics like cluster operation, support, debugging help, advice, and Kubernetes concepts are out of scope and should not use issues -->
+
+**Description**
+
+A clear and concise description of what the bug is.
+
+**Steps to Reproduce**
+
+Provide clear steps to reproduce the bug.
+
+- [ ] Relevant error messages if appropriate (concise, not a dump of everything).
+- [ ] Explored using a vanilla cluster from the [tutorials](https://typhoon.psdn.io/#documentation). Ruled out [customizations](https://typhoon.psdn.io/advanced/customization/).
+
+**Expected behavior**
+
+A clear and concise description of what you expected to happen.
+
+**Environment**
+
+* Platform: aws, azure, bare-metal, google-cloud, digital-ocean
+* OS: fedora-coreos, flatcar-linux (include release version)
+* Release: Typhoon version or Git SHA (reporting latest is **not** helpful)
+* Terraform: `terraform version` (reporting latest is **not** helpful)
+* Plugins: Provider plugin versions (reporting latest is **not** helpful)
+
+**Possible Solution**
+
+<!-- Most bug reports should have some inkling about solutions. Otherwise, your report may be less of a bug and more of a support request (see top). -->
+
+Link to a PR or description.
```
5  .github/ISSUE_TEMPLATE/config.yml (vendored, new file)

```diff
@@ -0,0 +1,5 @@
+blank_issues_enabled: true
+contact_links:
+  - name: Security
+    url: https://typhoon.psdn.io/topics/security/
+    about: Report security vulnerabilities
```
15  .github/issue_template.md (vendored, new file)

```diff
@@ -0,0 +1,15 @@
+<!-- READ: Issues are used to receive focused bug reports from users and to track planned future enhancements by the authors. Topics like cluster operation, support, debugging help, advice, and Kubernetes concepts are out of scope and should not use issues -->
+
+## Enhancement
+
+### Overview
+
+One paragraph explanation of the enhancement.
+
+### Motivation
+
+Describe the motivation and what problem this solves.
+
+### Tradeoffs
+
+What are the pros and cons of this feature? How will it be exercised and maintained?
```
155  CHANGES.md

```diff
@@ -4,6 +4,161 @@ Notable changes between versions.
 
 ## Latest
 
+## v1.18.6
+
+* Kubernetes [v1.18.6](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1186)
+* Update Calico from v3.15.0 to [v3.15.1](https://docs.projectcalico.org/v3.15/release-notes/)
+* Update Cilium from v1.8.0 to [v1.8.1](https://github.com/cilium/cilium/releases/tag/v1.8.1)
+
+#### Addons
+
+* Update nginx-ingress from v0.33.0 to [v0.34.1](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.34.1)
+  * [ingress-nginx](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.34.0) will publish images only to gcr.io
+* Update Prometheus from v2.19.1 to [v2.19.2](https://github.com/prometheus/prometheus/releases/tag/v2.19.2)
+* Update Grafana from v7.0.4 to [v7.0.6](https://github.com/grafana/grafana/releases/tag/v7.0.6)
+
+## v1.18.5
+
+* Kubernetes [v1.18.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1185)
+* Add Cilium v1.8.0 as an (experimental) CNI provider option ([#760](https://github.com/poseidon/typhoon/pull/760))
+  * Set `networking` to "cilium" to enable
+* Update Calico from v3.14.1 to [v3.15.0](https://docs.projectcalico.org/v3.15/release-notes/)
+
+#### DigitalOcean
+
+* Isolate each cluster in an independent DigitalOcean VPC ([#776](https://github.com/poseidon/typhoon/pull/776))
+  * Create droplets in a VPC per cluster (matches Typhoon AWS, Azure, and GCP)
+  * Require `terraform-provider-digitalocean` v1.16.0+ (action required)
+  * Output `vpc_id` for use with an attached DigitalOcean [loadbalancer](https://github.com/poseidon/typhoon/blob/v1.18.5/docs/architecture/digitalocean.md#custom-load-balancer)
+
+### Fedora CoreOS
+
+#### Google Cloud
+
+* Promote Fedora CoreOS to stable
+* Remove `os_image` variable deprecated in v1.18.3 ([#777](https://github.com/poseidon/typhoon/pull/777))
+  * Use `os_stream` to select a Fedora CoreOS image stream
+
+### Flatcar Linux
+
+#### Azure
+
+* Allow using Flatcar Linux Edge by setting `os_image` to "flatcar-edge" ([#778](https://github.com/poseidon/typhoon/pull/778))
+
+#### Addons
+
+* Update Prometheus from v2.19.0 to [v2.19.1](https://github.com/prometheus/prometheus/releases/tag/v2.19.1)
+* Update Grafana from v7.0.3 to [v7.0.4](https://github.com/grafana/grafana/releases/tag/v7.0.4)
+
+## v1.18.4
+
+* Kubernetes [v1.18.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1184)
+* Update Kubelet image publishing ([#749](https://github.com/poseidon/typhoon/pull/749))
+  * Build Kubelet images internally and publish to Quay and Dockerhub
+    * [quay.io/poseidon/kubelet](https://quay.io/repository/poseidon/kubelet) (official)
+    * [docker.io/psdn/kubelet](https://hub.docker.com/r/psdn/kubelet) (fallback)
+  * Continue offering automated image builds with an alternate tag strategy (see [docs](https://typhoon.psdn.io/topics/security/#container-images))
+  * [Document](https://typhoon.psdn.io/advanced/customization/#kubelet) use of alternate Kubelet images during registry incidents
+* Update Calico from v3.14.0 to [v3.14.1](https://docs.projectcalico.org/v3.14/release-notes/)
+  * Fix [CVE-2020-13597](https://github.com/kubernetes/kubernetes/issues/91507)
+* Rename controller NoSchedule taint from `node-role.kubernetes.io/master` to `node-role.kubernetes.io/controller` ([#764](https://github.com/poseidon/typhoon/pull/764))
+  * Tolerate the new taint name for workloads that may run on controller nodes
+* Remove node label `node.kubernetes.io/master` from controller nodes ([#764](https://github.com/poseidon/typhoon/pull/764))
+  * Use `node.kubernetes.io/controller` (present since v1.9.5, [#160](https://github.com/poseidon/typhoon/pull/160)) to node select controllers
+* Remove unused Kubelet `-lock-file` and `-exit-on-lock-contention` ([#758](https://github.com/poseidon/typhoon/pull/758))
+
+### Fedora CoreOS
+
+#### Azure
+
+* Use `strict` Fedora CoreOS Config (FCC) snippet parsing ([#755](https://github.com/poseidon/typhoon/pull/755))
+* Reduce Calico vxlan interface MTU to maintain performance ([#767](https://github.com/poseidon/typhoon/pull/767))
+
+#### AWS
+
+* Fix Kubelet service race with hostname update ([#766](https://github.com/poseidon/typhoon/pull/766))
+  * Wait for a hostname to avoid Kubelet trying to register as `localhost`
+
+### Flatcar Linux
+
+* Use `strict` Container Linux Config (CLC) snippet parsing ([#755](https://github.com/poseidon/typhoon/pull/755))
+  * Require `terraform-provider-ct` v0.4+, recommend v0.5+ (**action required**)
+
+### Addons
+
+* Update nginx-ingress from v0.32.0 to [v0.33.0](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.33.0)
+* Update Prometheus from v2.18.1 to [v2.19.0](https://github.com/prometheus/prometheus/releases/tag/v2.19.0)
+* Update node-exporter from v1.0.0-rc.1 to [v1.0.1](https://github.com/prometheus/node_exporter/releases/tag/v1.0.1)
+* Update kube-state-metrics from v1.9.6 to v1.9.7
+* Update Grafana from v7.0.0 to v7.0.3
+
+## v1.18.3
+
+* Kubernetes [v1.18.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183)
+* Use Kubelet [TLS bootstrap](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) with bootstrap token authentication ([#713](https://github.com/poseidon/typhoon/pull/713))
+  * Enable Node [Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) and [NodeRestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) to reduce authorization scope
+  * Renew Kubelet certificates every 72 hours
+* Update etcd from v3.4.7 to [v3.4.9](https://github.com/etcd-io/etcd/releases/tag/v3.4.9)
+* Update Calico from v3.13.1 to [v3.14.0](https://docs.projectcalico.org/v3.14/release-notes/)
+* Add CoreDNS node affinity preference for controller nodes ([#188](https://github.com/poseidon/terraform-render-bootstrap/pull/188))
+* Deprecate CoreOS Container Linux support (no OS [updates](https://coreos.com/os/eol/) after May 2020)
+  * Use a `fedora-coreos` module for Fedora CoreOS
+  * Use a `container-linux` module for Flatcar Linux
+
+### AWS
+
+* Fix Terraform plan error when `controller_count` exceeds AWS zones (e.g. 5 controllers) ([#714](https://github.com/poseidon/typhoon/pull/714))
+  * Regressed in v1.17.1 ([#605](https://github.com/poseidon/typhoon/pull/605))
+
+### Azure
+
+* Update Azure subnets to set `address_prefixes` list ([#730](https://github.com/poseidon/typhoon/pull/730))
+  * Fix warning that `address_prefix` is deprecated
+  * Require `terraform-provider-azurerm` v2.8.0+ (action required)
+
+### DigitalOcean
+
+* Promote DigitalOcean to beta on both Fedora CoreOS and Flatcar Linux
+
+### Fedora CoreOS
+
+* Fix Calico `install-cni` crashloop on Pod restarts ([#724](https://github.com/poseidon/typhoon/pull/724))
+  * SELinux enforcement requires consistent file context MCS level
+  * Restarting a node resolved the issue as a previous workaround
+
+#### AWS
+
+* Support Fedora CoreOS [image streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) ([#727](https://github.com/poseidon/typhoon/pull/727))
+  * Add `os_stream` variable to set the stream to `stable` (default), `testing`, or `next`
+  * Remove unused `os_image` variable
+
+#### Google
+
+* Support Fedora CoreOS [image streams](https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/) ([#723](https://github.com/poseidon/typhoon/pull/723))
+  * Add `os_stream` variable to set the stream to `stable` (default), `testing`, or `next`
+  * Deprecate `os_image` variable. Manual image uploads are no longer needed
+
+### Flatcar Linux
+
+#### Azure
+
+* Use the Flatcar Linux Azure Marketplace image
+  * Restore [#664](https://github.com/poseidon/typhoon/pull/664) (reverted in [#707](https://github.com/poseidon/typhoon/pull/707)) but use Flatcar Linux's new free offer (not byol)
+* Change `os_image` to use a `flatcar-stable` default
+
+#### Google
+
+* Promote Flatcar Linux to beta
+
+### Addons
+
+* Update nginx-ingress from v0.30.0 to [v0.32.0](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.32.0)
+  * Add support for [IngressClass](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class)
+* Update Prometheus from v2.17.1 to v2.18.1
+* Update kube-state-metrics from v1.9.5 to [v1.9.6](https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.6)
+* Update node-exporter from v1.0.0-rc.0 to [v1.0.0-rc.1](https://github.com/prometheus/node_exporter/releases/tag/v1.0.0-rc.1)
+* Update Grafana from v6.7.2 to [v7.0.0](https://grafana.com/docs/grafana/latest/guides/whats-new-in-v7-0/)
+
 ## v1.18.2
 
 * Kubernetes [v1.18.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1182)
```
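The Cilium option added in v1.18.5 is selected through the cluster module's `networking` variable. A minimal sketch against the Google Cloud Fedora CoreOS module (the cluster name is illustrative and other required platform variables are omitted for brevity):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.6"

  cluster_name = "yavin"

  # CNI provider: "calico" (default), "flannel", or the experimental "cilium"
  networking = "cilium"
}
```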
36  README.md

````diff
@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.18.2 (upstream)
+* Kubernetes v1.18.6 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)
@@ -28,35 +28,25 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
 
 | AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](aws/fedora-coreos/kubernetes) | stable |
 | Azure | Fedora CoreOS | [azure/fedora-coreos/kubernetes](azure/fedora-coreos/kubernetes) | alpha |
 | Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | beta |
-| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](digital-ocean/fedora-coreos/kubernetes) | alpha |
+| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](digital-ocean/fedora-coreos/kubernetes) | beta |
-| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |
+| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | stable |
 
-Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org/releases/).
+Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/).
 
 | Platform | Operating System | Terraform Module | Status |
 |---------------|------------------|------------------|--------|
 | AWS | Flatcar Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
 | Azure | Flatcar Linux | [azure/container-linux/kubernetes](azure/container-linux/kubernetes) | alpha |
 | Bare-Metal | Flatcar Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
-| DigitalOcean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | alpha |
+| DigitalOcean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
-| Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | alpha |
+| Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
 
-Typhoon is available for CoreOS Container Linux ([no updates](https://coreos.com/os/eol/) after May 2020).
-
-| Platform | Operating System | Terraform Module | Status |
-|---------------|------------------|------------------|--------|
-| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
-| Azure | Container Linux | [azure/container-linux/kubernetes](azure/container-linux/kubernetes) | alpha |
-| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
-| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
-| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
-
 ## Documentation
 
 * [Docs](https://typhoon.psdn.io)
 * Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
 * Fedora CoreOS tutorials for [AWS](docs/fedora-coreos/aws.md), [Azure](docs/fedora-coreos/azure.md), [Bare-Metal](docs/fedora-coreos/bare-metal.md), [DigitalOcean](docs/fedora-coreos/digitalocean.md), and [Google Cloud](docs/fedora-coreos/google-cloud.md)
-* Flatcar Linux tutorials for [AWS](docs/cl/aws.md), [Azure](docs/cl/azure.md), [Bare-Metal](docs/cl/bare-metal.md), [DigitalOcean](docs/cl/digital-ocean.md), and [Google Cloud](docs/cl/google-cloud.md)
+* Flatcar Linux tutorials for [AWS](docs/flatcar-linux/aws.md), [Azure](docs/flatcar-linux/azure.md), [Bare-Metal](docs/flatcar-linux/bare-metal.md), [DigitalOcean](docs/flatcar-linux/digitalocean.md), and [Google Cloud](docs/flatcar-linux/google-cloud.md)
 
 ## Usage
 
@@ -64,7 +54,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.18.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.18.6"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -103,9 +93,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.18.2
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.18.6
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.18.2
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.18.6
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.18.2
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.18.6
 ```
 
 List the pods.
````
```diff
@@ -23,7 +23,7 @@ spec:
     spec:
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:6.7.2
+          image: docker.io/grafana/grafana:7.0.6
           env:
             - name: GF_PATHS_CONFIG
               value: "/etc/grafana/custom.ini"
```
6  addons/nginx-ingress/aws/class.yaml (new file)

```diff
@@ -0,0 +1,6 @@
+apiVersion: networking.k8s.io/v1beta1
+kind: IngressClass
+metadata:
+  name: public
+spec:
+  controller: k8s.io/ingress-nginx
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
+          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public
```

```diff
@@ -51,3 +51,12 @@ rules:
       - ingresses/status
     verbs:
       - update
+  - apiGroups:
+      - "networking.k8s.io"
+    resources:
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+
```
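Since the controller keeps `--ingress-class=public` while the new `IngressClass` object is added, an Ingress can select this controller via the `ingressClassName` field introduced in Kubernetes v1.18. A hedged sketch (the host, Ingress name, and backend Service are illustrative, not part of the addon):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  # References the IngressClass named "public" defined in class.yaml
  ingressClassName: public
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 80
```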
6  addons/nginx-ingress/azure/class.yaml (new file)

```diff
@@ -0,0 +1,6 @@
+apiVersion: networking.k8s.io/v1beta1
+kind: IngressClass
+metadata:
+  name: public
+spec:
+  controller: k8s.io/ingress-nginx
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
+          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public
```

```diff
@@ -51,3 +51,12 @@ rules:
       - ingresses/status
     verbs:
      - update
+  - apiGroups:
+      - "networking.k8s.io"
+    resources:
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+
```
6  addons/nginx-ingress/bare-metal/class.yaml (new file)

```diff
@@ -0,0 +1,6 @@
+apiVersion: networking.k8s.io/v1beta1
+kind: IngressClass
+metadata:
+  name: public
+spec:
+  controller: k8s.io/ingress-nginx
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
+          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public
```

```diff
@@ -51,3 +51,12 @@ rules:
       - ingresses/status
     verbs:
      - update
+  - apiGroups:
+      - "networking.k8s.io"
+    resources:
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+
```
6  addons/nginx-ingress/digital-ocean/class.yaml (new file)

```diff
@@ -0,0 +1,6 @@
+apiVersion: networking.k8s.io/v1beta1
+kind: IngressClass
+metadata:
+  name: public
+spec:
+  controller: k8s.io/ingress-nginx
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
+          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public
```

```diff
@@ -51,3 +51,12 @@ rules:
       - ingresses/status
     verbs:
      - update
+  - apiGroups:
+      - "networking.k8s.io"
+    resources:
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+
```
6  addons/nginx-ingress/google-cloud/class.yaml (new file)

```diff
@@ -0,0 +1,6 @@
+apiVersion: networking.k8s.io/v1beta1
+kind: IngressClass
+metadata:
+  name: public
+spec:
+  controller: k8s.io/ingress-nginx
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
+          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - --ingress-class=public
```

```diff
@@ -51,3 +51,12 @@ rules:
       - ingresses/status
     verbs:
      - update
+  - apiGroups:
+      - "networking.k8s.io"
+    resources:
+      - ingressclasses
+    verbs:
+      - get
+      - list
+      - watch
+
```
```diff
@@ -20,7 +20,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.17.1
+          image: quay.io/prometheus/prometheus:v2.19.2
          args:
            - --web.listen-address=0.0.0.0:9090
            - --config.file=/etc/prometheus/prometheus.yaml
```
```diff
@@ -24,7 +24,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
         - name: kube-state-metrics
-          image: quay.io/coreos/kube-state-metrics:v1.9.5
+          image: quay.io/coreos/kube-state-metrics:v1.9.7
          ports:
            - name: metrics
              containerPort: 8080
```
@@ -28,7 +28,7 @@ spec:
       hostPID: true
       containers:
      - name: node-exporter
-        image: quay.io/prometheus/node-exporter:v1.0.0-rc.0
+        image: quay.io/prometheus/node-exporter:v1.0.1
         args:
         - --path.procfs=/host/proc
         - --path.sysfs=/host/sys
@@ -882,10 +882,10 @@ data:
         {
             "alert": "KubeClientCertificateExpiration",
             "annotations": {
-                "message": "A client certificate used to authenticate to the apiserver is expiring in less than 7.0 days.",
+                "message": "A client certificate used to authenticate to the apiserver is expiring in less than 1.0 hours.",
                 "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeclientcertificateexpiration"
             },
-            "expr": "apiserver_client_certificate_expiration_seconds_count{job=\"apiserver\"} > 0 and on(job) histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job=\"apiserver\"}[5m]))) < 604800\n",
+            "expr": "apiserver_client_certificate_expiration_seconds_count{job=\"apiserver\"} > 0 and on(job) histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job=\"apiserver\"}[5m]))) < 3600\n",
             "labels": {
                 "severity": "warning"
             }
@@ -893,10 +893,10 @@ data:
         {
             "alert": "KubeClientCertificateExpiration",
             "annotations": {
-                "message": "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.",
+                "message": "A client certificate used to authenticate to the apiserver is expiring in less than 0.1 hours.",
                 "runbook_url": "https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeclientcertificateexpiration"
             },
-            "expr": "apiserver_client_certificate_expiration_seconds_count{job=\"apiserver\"} > 0 and on(job) histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job=\"apiserver\"}[5m]))) < 86400\n",
+            "expr": "apiserver_client_certificate_expiration_seconds_count{job=\"apiserver\"} > 0 and on(job) histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job=\"apiserver\"}[5m]))) < 300\n",
             "labels": {
                 "severity": "critical"
             }
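The two hunks above tighten the certificate-expiration alerts: the warning threshold drops from 7 days (604800 s) to 1 hour (3600 s), and the critical threshold from 24 hours (86400 s) to 300 s, which the message rounds to "0.1 hours". A quick sketch of the unit conversion behind the message strings (the helper name is ours, not part of the repo):

```python
# Hypothetical helper: convert the PromQL threshold constants above
# into the units used in the alert "message" annotations.
def threshold_in(unit: str, seconds: int) -> float:
    divisor = {"days": 86400, "hours": 3600}[unit]
    return round(seconds / divisor, 1)

print(threshold_in("days", 604800))   # old warning threshold: 7.0 days
print(threshold_in("hours", 3600))    # new warning threshold: 1.0 hours
print(threshold_in("hours", 86400))   # old critical threshold: 24.0 hours
print(threshold_in("hours", 300))     # new critical threshold: 0.1 hours
```

The short windows make sense because these repos rely on kubelet certificate rotation: a cert normally renews well before expiry, so anything within an hour of expiring indicates rotation is broken.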
@@ -11,11 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
+* Kubernetes v1.18.6 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
-* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
+* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,12 +2,12 @@
 systemd:
   units:
     - name: etcd-member.service
-      enable: true
+      enabled: true
       dropins:
         - name: 40-etcd-cluster.conf
           contents: |
             [Service]
-            Environment="ETCD_IMAGE_TAG=v3.4.7"
+            Environment="ETCD_IMAGE_TAG=v3.4.9"
             Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
             Environment="RKT_RUN_ARGS=--insecure-options=image"
             Environment="ETCD_NAME=${etcd_name}"
@@ -28,11 +28,11 @@ systemd:
             Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
             Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
     - name: docker.service
-      enable: true
+      enabled: true
     - name: locksmithd.service
       mask: true
     - name: wait-for-dns.service
-      enable: true
+      enabled: true
       contents: |
         [Unit]
         Description=Wait for DNS entries
@@ -46,12 +46,13 @@ systemd:
         RequiredBy=kubelet.service
         RequiredBy=etcd-member.service
     - name: kubelet.service
-      enable: true
+      enabled: true
       contents: |
         [Unit]
-        Description=Kubelet via Hyperkube
+        Description=Kubelet
         Wants=rpc-statd.service
         [Service]
+        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
        Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -91,25 +92,24 @@ systemd:
           --mount volume=var-log,target=/var/log \
           --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
           --mount volume=opt-cni-bin,target=/opt/cni/bin \
-          docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+          $${KUBELET_IMAGE} -- \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
+          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
           --cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${cluster_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
           --healthz-port=0 \
-          --kubeconfig=/etc/kubernetes/kubeconfig \
+          --kubeconfig=/var/lib/kubelet/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node.kubernetes.io/master \
           --node-labels=node.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
-          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+          --rotate-certificates \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -134,7 +134,7 @@ systemd:
           --volume script,kind=host,source=/opt/bootstrap/apply \
           --mount volume=script,target=/apply \
           --insecure-options=image \
-          docker://quay.io/poseidon/kubelet:v1.18.2 \
+          docker://quay.io/poseidon/kubelet:v1.18.6 \
           --net=host \
           --dns=host \
           --exec=/apply
@@ -165,11 +165,11 @@ storage:
           chmod -R 500 /etc/ssl/etcd
           mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
           mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-          sudo mkdir -p /etc/kubernetes/manifests
+          mkdir -p /etc/kubernetes/manifests
-          sudo mv static-manifests/* /etc/kubernetes/manifests/
+          mv static-manifests/* /etc/kubernetes/manifests/
-          sudo mkdir -p /opt/bootstrap/assets
+          mkdir -p /opt/bootstrap/assets
-          sudo mv manifests /opt/bootstrap/assets/manifests
+          mv manifests /opt/bootstrap/assets/manifests
-          sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+          mv manifests-networking/* /opt/bootstrap/assets/manifests/
           rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
       filesystem: root
@@ -188,6 +188,7 @@ storage:
           done
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
+      mode: 0644
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
@@ -36,7 +36,7 @@ resource "aws_instance" "controllers" {

   # network
   associate_public_ip_address = true
-  subnet_id                   = aws_subnet.public.*.id[count.index]
+  subnet_id                   = element(aws_subnet.public.*.id, count.index)
   vpc_security_group_ids      = [aws_security_group.controller.id]

   lifecycle {
|
|||||||
|
|
||||||
# Controller Ignition configs
|
# Controller Ignition configs
|
||||||
data "ct_config" "controller-ignitions" {
|
data "ct_config" "controller-ignitions" {
|
||||||
count = var.controller_count
|
count = var.controller_count
|
||||||
content = data.template_file.controller-configs.*.rendered[count.index]
|
content = data.template_file.controller-configs.*.rendered[count.index]
|
||||||
pretty_print = false
|
strict = true
|
||||||
snippets = var.controller_snippets
|
snippets = var.controller_snippets
|
||||||
}
|
}
|
||||||
|
|
||||||
# Controller Container Linux configs
|
# Controller Container Linux configs
|
||||||
|
@@ -13,6 +13,30 @@ resource "aws_security_group" "controller" {
   }
 }

+resource "aws_security_group_rule" "controller-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
 resource "aws_security_group_rule" "controller-ssh" {
   security_group_id = aws_security_group.controller.id

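In the ICMP rules added above, AWS overloads the port fields: for `protocol = "icmp"`, `from_port` carries the ICMP type and `to_port` the ICMP code, so `from_port = 8, to_port = 0` admits echo requests (ping), which Cilium uses for connectivity health checks. A small illustrative sketch of that encoding (the mapping and helper are ours, not part of the repo or the AWS provider):

```python
# Illustrative only: for AWS security group rules with protocol "icmp",
# from_port holds the ICMP type and to_port holds the ICMP code.
ICMP_TYPES = {0: "echo-reply", 3: "destination-unreachable", 8: "echo-request"}

def describe_icmp_rule(from_port: int, to_port: int) -> str:
    name = ICMP_TYPES.get(from_port, "unknown")
    return f"{name} (type {from_port}, code {to_port})"

print(describe_icmp_rule(8, 0))  # → echo-request (type 8, code 0)
```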
@@ -44,39 +68,31 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
   source_security_group_id = aws_security_group.worker.id
 }

-# Allow Prometheus to scrape kube-proxy
-resource "aws_security_group_rule" "kube-proxy-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0
+
   security_group_id = aws_security_group.controller.id

   type                     = "ingress"
   protocol                 = "tcp"
-  from_port                = 10249
-  to_port                  = 10249
+  from_port                = 4240
+  to_port                  = 4240
   source_security_group_id = aws_security_group.worker.id
 }

-# Allow Prometheus to scrape kube-scheduler
-resource "aws_security_group_rule" "controller-scheduler-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
   security_group_id = aws_security_group.controller.id

   type      = "ingress"
   protocol  = "tcp"
-  from_port = 10251
-  to_port   = 10251
-  source_security_group_id = aws_security_group.worker.id
-}
-
-# Allow Prometheus to scrape kube-controller-manager
-resource "aws_security_group_rule" "controller-manager-metrics" {
-  security_group_id = aws_security_group.controller.id
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10252
-  to_port                  = 10252
-  source_security_group_id = aws_security_group.worker.id
+  from_port = 4240
+  to_port   = 4240
+  self      = true
 }

+# IANA VXLAN default
 resource "aws_security_group_rule" "controller-vxlan" {
   count = var.networking == "flannel" ? 1 : 0

|
|||||||
cidr_blocks = ["0.0.0.0/0"]
|
cidr_blocks = ["0.0.0.0/0"]
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Linux VXLAN default
|
||||||
|
resource "aws_security_group_rule" "controller-linux-vxlan" {
|
||||||
|
count = var.networking == "cilium" ? 1 : 0
|
||||||
|
|
||||||
|
security_group_id = aws_security_group.controller.id
|
||||||
|
|
||||||
|
type = "ingress"
|
||||||
|
protocol = "udp"
|
||||||
|
from_port = 8472
|
||||||
|
to_port = 8472
|
||||||
|
source_security_group_id = aws_security_group.worker.id
|
||||||
|
}
|
||||||
|
|
||||||
|
resource "aws_security_group_rule" "controller-linux-vxlan-self" {
|
||||||
|
count = var.networking == "cilium" ? 1 : 0
|
||||||
|
|
||||||
|
security_group_id = aws_security_group.controller.id
|
||||||
|
|
||||||
|
type = "ingress"
|
||||||
|
protocol = "udp"
|
||||||
|
from_port = 8472
|
||||||
|
to_port = 8472
|
||||||
|
self = true
|
||||||
|
}
|
||||||
|
|
||||||
# Allow Prometheus to scrape node-exporter daemonset
|
# Allow Prometheus to scrape node-exporter daemonset
|
||||||
resource "aws_security_group_rule" "controller-node-exporter" {
|
resource "aws_security_group_rule" "controller-node-exporter" {
|
||||||
security_group_id = aws_security_group.controller.id
|
security_group_id = aws_security_group.controller.id
|
||||||
@ -122,6 +163,17 @@ resource "aws_security_group_rule" "controller-node-exporter" {
|
|||||||
source_security_group_id = aws_security_group.worker.id
|
source_security_group_id = aws_security_group.worker.id
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Allow Prometheus to scrape kube-proxy
|
||||||
|
resource "aws_security_group_rule" "kube-proxy-metrics" {
|
||||||
|
security_group_id = aws_security_group.controller.id
|
||||||
|
|
||||||
|
type = "ingress"
|
||||||
|
protocol = "tcp"
|
||||||
|
from_port = 10249
|
||||||
|
to_port = 10249
|
||||||
|
source_security_group_id = aws_security_group.worker.id
|
||||||
|
}
|
||||||
|
|
||||||
# Allow apiserver to access kubelets for exec, log, port-forward
|
# Allow apiserver to access kubelets for exec, log, port-forward
|
||||||
resource "aws_security_group_rule" "controller-kubelet" {
|
resource "aws_security_group_rule" "controller-kubelet" {
|
||||||
security_group_id = aws_security_group.controller.id
|
security_group_id = aws_security_group.controller.id
|
||||||
@@ -143,6 +195,28 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
   self = true
 }

+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10251
+  to_port                  = 10251
+  source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10252
+  to_port                  = 10252
+  source_security_group_id = aws_security_group.worker.id
+}
+
 resource "aws_security_group_rule" "controller-bgp" {
   security_group_id = aws_security_group.controller.id

@@ -227,6 +301,30 @@ resource "aws_security_group" "worker" {
   }
 }

+resource "aws_security_group_rule" "worker-icmp" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "icmp"
+  from_port                = 8
+  to_port                  = 0
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-icmp-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "icmp"
+  from_port = 8
+  to_port   = 0
+  self      = true
+}
+
 resource "aws_security_group_rule" "worker-ssh" {
   security_group_id = aws_security_group.worker.id

@@ -257,6 +355,31 @@ resource "aws_security_group_rule" "worker-https" {
   cidr_blocks = ["0.0.0.0/0"]
 }

+resource "aws_security_group_rule" "worker-cilium-health" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 4240
+  to_port                  = 4240
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-cilium-health-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 4240
+  to_port   = 4240
+  self      = true
+}
+
+# IANA VXLAN default
 resource "aws_security_group_rule" "worker-vxlan" {
   count = var.networking == "flannel" ? 1 : 0

|
|||||||
self = true
|
self = true
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# Linux VXLAN default
|
||||||
|
resource "aws_security_group_rule" "worker-linux-vxlan" {
|
||||||
|
count = var.networking == "cilium" ? 1 : 0
|
||||||
|
|
||||||
|
security_group_id = aws_security_group.worker.id
|
||||||
|
|
||||||
|
type = "ingress"
|
||||||
|
protocol = "udp"
|
||||||
|
from_port = 8472
|
||||||
|
to_port = 8472
|
||||||
|
source_security_group_id = aws_security_group.controller.id
|
||||||
|
}
|
||||||
|
|
||||||
|
resource "aws_security_group_rule" "worker-linux-vxlan-self" {
|
||||||
|
count = var.networking == "cilium" ? 1 : 0
|
||||||
|
|
||||||
|
security_group_id = aws_security_group.worker.id
|
||||||
|
|
||||||
|
type = "ingress"
|
||||||
|
protocol = "udp"
|
||||||
|
from_port = 8472
|
||||||
|
to_port = 8472
|
||||||
|
self = true
|
||||||
|
}
|
||||||
|
|
||||||
# Allow Prometheus to scrape node-exporter daemonset
|
# Allow Prometheus to scrape node-exporter daemonset
|
||||||
resource "aws_security_group_rule" "worker-node-exporter" {
|
resource "aws_security_group_rule" "worker-node-exporter" {
|
||||||
security_group_id = aws_security_group.worker.id
|
security_group_id = aws_security_group.worker.id
|
||||||
|
@@ -4,7 +4,7 @@ terraform {
   required_version = "~> 0.12.6"
   required_providers {
     aws      = "~> 2.23"
-    ct       = "~> 0.3"
+    ct       = "~> 0.4"
     template = "~> 2.1"
     null     = "~> 2.1"
   }
@@ -2,11 +2,11 @@
 systemd:
   units:
     - name: docker.service
-      enable: true
+      enabled: true
     - name: locksmithd.service
       mask: true
     - name: wait-for-dns.service
-      enable: true
+      enabled: true
       contents: |
         [Unit]
         Description=Wait for DNS entries
@@ -19,12 +19,13 @@ systemd:
         [Install]
         RequiredBy=kubelet.service
     - name: kubelet.service
-      enable: true
+      enabled: true
       contents: |
         [Unit]
-        Description=Kubelet via Hyperkube
+        Description=Kubelet
         Wants=rpc-statd.service
         [Service]
+        Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
         Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -64,19 +65,18 @@ systemd:
           --mount volume=var-log,target=/var/log \
           --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
           --mount volume=opt-cni-bin,target=/opt/cni/bin \
-          docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+          $${KUBELET_IMAGE} -- \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
+          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
           --cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${cluster_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
           --healthz-port=0 \
-          --kubeconfig=/etc/kubernetes/kubeconfig \
+          --kubeconfig=/var/lib/kubelet/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
           --node-labels=node.kubernetes.io/node \
           %{~ for label in split(",", node_labels) ~}
@@ -84,6 +84,7 @@ systemd:
           %{~ endfor ~}
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
+          --rotate-certificates \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -112,6 +113,7 @@ storage:
           ${kubeconfig}
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
+      mode: 0644
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
@@ -127,7 +129,7 @@ storage:
           --volume config,kind=host,source=/etc/kubernetes \
           --mount volume=config,target=/etc/kubernetes \
           --insecure-options=image \
-          docker://quay.io/poseidon/kubelet:v1.18.2 \
+          docker://quay.io/poseidon/kubelet:v1.18.6 \
           --net=host \
           --dns=host \
           --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -71,9 +71,9 @@ resource "aws_launch_configuration" "worker" {

 # Worker Ignition config
 data "ct_config" "worker-ignition" {
   content      = data.template_file.worker-config.rendered
-  pretty_print = false
+  strict       = true
   snippets     = var.snippets
 }

 # Worker Container Linux config
@@ -11,11 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
+* Kubernetes v1.18.6 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
-* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
+* Ready for Ingress, Prometheus, Grafana, CSI, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

@@ -13,16 +13,8 @@ data "aws_ami" "fedora-coreos" {
 values = ["hvm"]
 }

-filter {
-name = "name"
-values = ["fedora-coreos-31.*.*.*-hvm"]
-}
-
 filter {
 name = "description"
-values = ["Fedora CoreOS stable*"]
+values = ["Fedora CoreOS ${var.os_stream} *"]
 }

-# try to filter out dev images (AWS filters can't)
-name_regex = "^fedora-coreos-31.[0-9]*.[0-9]*.[0-9]*-hvm*"
 }
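The net effect of this hunk is an AMI lookup keyed on the image description alone, with the stream made selectable. A sketch of the resulting data source (the `virtualization-type` filter name is an assumption inferred from the `hvm` context line; the data source's other arguments are omitted):

```hcl
# Fedora CoreOS AMI lookup after the change: var.os_stream
# (stable, testing, or next) selects the image stream.
data "aws_ami" "fedora-coreos" {
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "description"
    values = ["Fedora CoreOS ${var.os_stream} *"]
  }
}
```

Dropping the `name` filter and `name_regex` also removes the hard-coded `31.x` version pin, so newer releases match without a module update.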
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -36,7 +36,7 @@ resource "aws_instance" "controllers" {

 # network
 associate_public_ip_address = true
-subnet_id                   = aws_subnet.public.*.id[count.index]
+subnet_id                   = element(aws_subnet.public.*.id, count.index)
 vpc_security_group_ids      = [aws_security_group.controller.id]

 lifecycle {
@@ -28,7 +28,7 @@ systemd:
 --network host \
 --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
 --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
-quay.io/coreos/etcd:v3.4.7
+quay.io/coreos/etcd:v3.4.9
 ExecStop=/usr/bin/podman stop etcd
 [Install]
 WantedBy=multi-user.target
@@ -38,11 +38,12 @@ systemd:
 enabled: true
 contents: |
 [Unit]
-Description=Wait for DNS entries
+Description=Wait for DNS and hostname
 Before=kubelet.service
 [Service]
 Type=oneshot
 RemainAfterExit=true
+ExecStartPre=/bin/sh -c 'while [ `hostname -s` == "localhost" ]; do sleep 1; done;'
 ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
 [Install]
 RequiredBy=kubelet.service
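The `ExecStart` loop above blocks until /etc/resolv.conf contains a non-comment, non-blank line. The check can be exercised in isolation; this sketch uses a scratch file in place of /etc/resolv.conf:

```shell
#!/bin/sh
# Recreate the wait-for-dns readiness check against a scratch file.
resolv=$(mktemp)

# Comments and blank lines only: the grep matches nothing, so the
# unit's loop would keep sleeping.
printf '# Generated by NetworkManager\n\n' > "$resolv"
grep -q '^[^#[:space:]]' "$resolv" || echo "waiting"

# Once a real entry lands, the same pattern matches and the loop exits.
printf 'nameserver 192.0.2.53\n' >> "$resolv"
grep -q '^[^#[:space:]]' "$resolv" && echo "ready"

rm -f "$resolv"
```

The pattern `^[^#[:space:]]` simply requires the first character of some line to be neither `#` nor whitespace, which any real `nameserver`/`search` entry satisfies.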
@@ -51,9 +52,10 @@ systemd:
 enabled: true
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube (System Container)
+Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -79,10 +81,11 @@ systemd:
 --volume /var/log:/var/log \
 --volume /var/run/lock:/var/run/lock:z \
 --volume /opt/cni/bin:/opt/cni/bin:z \
-quay.io/poseidon/kubelet:v1.18.2 \
+$${KUBELET_IMAGE} \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --cgroup-driver=systemd \
 --cgroups-per-qos=true \
 --enforce-node-allocatable=pods \
@@ -90,16 +93,14 @@ systemd:
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
---node-labels=node.kubernetes.io/master \
 --node-labels=node.kubernetes.io/controller="true" \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
---register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/podman stop kubelet
 Delegate=yes
@@ -123,7 +124,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.18.2
+quay.io/poseidon/kubelet:v1.18.6
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -151,11 +152,11 @@ storage:
 chmod -R 500 /etc/ssl/etcd
 mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
 mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-sudo mkdir -p /etc/kubernetes/manifests
-sudo mv static-manifests/* /etc/kubernetes/manifests/
-sudo mkdir -p /opt/bootstrap/assets
-sudo mv manifests /opt/bootstrap/assets/manifests
-sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+mkdir -p /etc/kubernetes/manifests
+mv static-manifests/* /etc/kubernetes/manifests/
+mkdir -p /opt/bootstrap/assets
+mv manifests /opt/bootstrap/assets/manifests
+mv manifests-networking/* /opt/bootstrap/assets/manifests/
 rm -rf assets auth static-manifests tls manifests-networking
 - path: /opt/bootstrap/apply
 mode: 0544
@@ -175,6 +176,11 @@ storage:
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
+- path: /etc/sysctl.d/reverse-path-filter.conf
+contents:
+inline: |
+net.ipv4.conf.default.rp_filter=0
+net.ipv4.conf.*.rp_filter=0
 - path: /etc/systemd/system.conf.d/accounting.conf
 contents:
 inline: |
@@ -13,6 +13,30 @@ resource "aws_security_group" "controller" {
 }
 }

+resource "aws_security_group_rule" "controller-icmp" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "icmp"
+from_port = 8
+to_port = 0
+source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-icmp-self" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "icmp"
+from_port = 8
+to_port = 0
+self = true
+}
+
 resource "aws_security_group_rule" "controller-ssh" {
 security_group_id = aws_security_group.controller.id
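Each new rule is gated on the CNI choice with Terraform's count-as-conditional idiom. Restating the first rule from the hunk with conventional alignment (note that for ICMP rules, AWS reuses `from_port`/`to_port` as the ICMP type and code):

```hcl
# Created only when Cilium is selected; count = 0 elides the
# resource entirely for Calico or flannel clusters.
resource "aws_security_group_rule" "controller-icmp" {
  count = var.networking == "cilium" ? 1 : 0

  security_group_id        = aws_security_group.controller.id
  type                     = "ingress"
  protocol                 = "icmp"
  from_port                = 8 # ICMP type: echo request
  to_port                  = 0 # ICMP code
  source_security_group_id = aws_security_group.worker.id
}
```

Allowing only echo requests between nodes is presumably for cilium-health's ping-based connectivity probes.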
@@ -44,39 +68,31 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
 source_security_group_id = aws_security_group.worker.id
 }

-# Allow Prometheus to scrape kube-proxy
-resource "aws_security_group_rule" "kube-proxy-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health" {
+count = var.networking == "cilium" ? 1 : 0

 security_group_id = aws_security_group.controller.id

 type = "ingress"
 protocol = "tcp"
-from_port = 10249
-to_port = 10249
+from_port = 4240
+to_port = 4240
 source_security_group_id = aws_security_group.worker.id
 }

-# Allow Prometheus to scrape kube-scheduler
-resource "aws_security_group_rule" "controller-scheduler-metrics" {
+resource "aws_security_group_rule" "controller-cilium-health-self" {
+count = var.networking == "cilium" ? 1 : 0

 security_group_id = aws_security_group.controller.id

 type = "ingress"
 protocol = "tcp"
-from_port = 10251
-to_port = 10251
-source_security_group_id = aws_security_group.worker.id
-}
-
-# Allow Prometheus to scrape kube-controller-manager
-resource "aws_security_group_rule" "controller-manager-metrics" {
-security_group_id = aws_security_group.controller.id
-
-type = "ingress"
-protocol = "tcp"
-from_port = 10252
-to_port = 10252
-source_security_group_id = aws_security_group.worker.id
+from_port = 4240
+to_port = 4240
+self = true
 }

+# IANA VXLAN default
 resource "aws_security_group_rule" "controller-vxlan" {
 count = var.networking == "flannel" ? 1 : 0
@@ -111,6 +127,31 @@ resource "aws_security_group_rule" "controller-apiserver" {
 cidr_blocks = ["0.0.0.0/0"]
 }

+# Linux VXLAN default
+resource "aws_security_group_rule" "controller-linux-vxlan" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "udp"
+from_port = 8472
+to_port = 8472
+source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-linux-vxlan-self" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "udp"
+from_port = 8472
+to_port = 8472
+self = true
+}
+
 # Allow Prometheus to scrape node-exporter daemonset
 resource "aws_security_group_rule" "controller-node-exporter" {
 security_group_id = aws_security_group.controller.id
@@ -122,6 +163,17 @@ resource "aws_security_group_rule" "controller-node-exporter" {
 source_security_group_id = aws_security_group.worker.id
 }

+# Allow Prometheus to scrape kube-proxy
+resource "aws_security_group_rule" "kube-proxy-metrics" {
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "tcp"
+from_port = 10249
+to_port = 10249
+source_security_group_id = aws_security_group.worker.id
+}
+
 # Allow apiserver to access kubelets for exec, log, port-forward
 resource "aws_security_group_rule" "controller-kubelet" {
 security_group_id = aws_security_group.controller.id
@@ -143,6 +195,28 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
 self = true
 }

+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "tcp"
+from_port = 10251
+to_port = 10251
+source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+security_group_id = aws_security_group.controller.id
+
+type = "ingress"
+protocol = "tcp"
+from_port = 10252
+to_port = 10252
+source_security_group_id = aws_security_group.worker.id
+}
+
 resource "aws_security_group_rule" "controller-bgp" {
 security_group_id = aws_security_group.controller.id
@@ -227,6 +301,30 @@ resource "aws_security_group" "worker" {
 }
 }

+resource "aws_security_group_rule" "worker-icmp" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "icmp"
+from_port = 8
+to_port = 0
+source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-icmp-self" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "icmp"
+from_port = 8
+to_port = 0
+self = true
+}
+
 resource "aws_security_group_rule" "worker-ssh" {
 security_group_id = aws_security_group.worker.id
@@ -257,6 +355,31 @@ resource "aws_security_group_rule" "worker-https" {
 cidr_blocks = ["0.0.0.0/0"]
 }

+resource "aws_security_group_rule" "worker-cilium-health" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "tcp"
+from_port = 4240
+to_port = 4240
+source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-cilium-health-self" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "tcp"
+from_port = 4240
+to_port = 4240
+self = true
+}
+
+# IANA VXLAN default
 resource "aws_security_group_rule" "worker-vxlan" {
 count = var.networking == "flannel" ? 1 : 0
@@ -281,6 +404,31 @@ resource "aws_security_group_rule" "worker-vxlan-self" {
 self = true
 }

+# Linux VXLAN default
+resource "aws_security_group_rule" "worker-linux-vxlan" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "udp"
+from_port = 8472
+to_port = 8472
+source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-linux-vxlan-self" {
+count = var.networking == "cilium" ? 1 : 0
+
+security_group_id = aws_security_group.worker.id
+
+type = "ingress"
+protocol = "udp"
+from_port = 8472
+to_port = 8472
+self = true
+}
+
 # Allow Prometheus to scrape node-exporter daemonset
 resource "aws_security_group_rule" "worker-node-exporter" {
 security_group_id = aws_security_group.worker.id
@@ -41,9 +41,9 @@ variable "worker_type" {
 default = "t3.small"
 }

-variable "os_image" {
+variable "os_stream" {
 type = string
-description = "AMI channel for Fedora CoreOS (not yet used)"
+description = "Fedora CoreOs image stream for instances (e.g. stable, testing, next)"
 default = "stable"
 }
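Since `os_stream` now feeds directly into the AMI description filter, a typo would surface only as an empty AMI lookup at plan time. A hypothetical guard, not part of this change (Terraform `validation` blocks require 0.13+):

```hcl
variable "os_stream" {
  type        = string
  description = "Fedora CoreOS image stream for instances (e.g. stable, testing, next)"
  default     = "stable"

  # Hypothetical: fail fast on unknown streams instead of matching no AMI.
  validation {
    condition     = contains(["stable", "testing", "next"], var.os_stream)
    error_message = "os_stream must be one of: stable, testing, next."
  }
}
```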
@@ -8,7 +8,7 @@ module "workers" {
 security_groups = [aws_security_group.worker.id]
 worker_count = var.worker_count
 instance_type = var.worker_type
-os_image = var.os_image
+os_stream = var.os_stream
 disk_size = var.disk_size
 spot_price = var.worker_price
 target_groups = var.worker_target_groups
@@ -13,16 +13,8 @@ data "aws_ami" "fedora-coreos" {
 values = ["hvm"]
 }

-filter {
-name = "name"
-values = ["fedora-coreos-31.*.*.*-hvm"]
-}
-
 filter {
 name = "description"
-values = ["Fedora CoreOS stable*"]
+values = ["Fedora CoreOS ${var.os_stream} *"]
 }

-# try to filter out dev images (AWS filters can't)
-name_regex = "^fedora-coreos-31.[0-9]*.[0-9]*.[0-9]*-hvm*"
 }
@@ -9,11 +9,12 @@ systemd:
 enabled: true
 contents: |
 [Unit]
-Description=Wait for DNS entries
+Description=Wait for DNS and hostname
 Before=kubelet.service
 [Service]
 Type=oneshot
 RemainAfterExit=true
+ExecStartPre=/bin/sh -c 'while [ `hostname -s` == "localhost" ]; do sleep 1; done;'
 ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
 [Install]
 RequiredBy=kubelet.service
@@ -21,9 +22,10 @@ systemd:
 enabled: true
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube (System Container)
+Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -49,10 +51,11 @@ systemd:
 --volume /var/log:/var/log \
 --volume /var/run/lock:/var/run/lock:z \
 --volume /opt/cni/bin:/opt/cni/bin:z \
-quay.io/poseidon/kubelet:v1.18.2 \
+$${KUBELET_IMAGE} \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --cgroup-driver=systemd \
 --cgroups-per-qos=true \
 --enforce-node-allocatable=pods \
@@ -60,10 +63,8 @@ systemd:
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
 --node-labels=node.kubernetes.io/node \
 %{~ for label in split(",", node_labels) ~}
@@ -71,6 +72,7 @@ systemd:
 %{~ endfor ~}
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/podman stop kubelet
 Delegate=yes
@@ -87,7 +89,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 ExecStart=/bin/true
-ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.2 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
+ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
 [Install]
 WantedBy=multi-user.target
 storage:
@@ -103,6 +105,11 @@ storage:
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
+- path: /etc/sysctl.d/reverse-path-filter.conf
+contents:
+inline: |
+net.ipv4.conf.default.rp_filter=0
+net.ipv4.conf.*.rp_filter=0
 - path: /etc/systemd/system.conf.d/accounting.conf
 contents:
 inline: |
@@ -34,9 +34,9 @@ variable "instance_type" {
 default = "t3.small"
 }

-variable "os_image" {
+variable "os_stream" {
 type = string
-description = "AMI channel for Fedora CoreOS (not yet used)"
+description = "Fedora CoreOs image stream for instances (e.g. stable, testing, next)"
 default = "stable"
 }
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,12 +2,12 @@
 systemd:
 units:
 - name: etcd-member.service
-enable: true
+enabled: true
 dropins:
 - name: 40-etcd-cluster.conf
 contents: |
 [Service]
-Environment="ETCD_IMAGE_TAG=v3.4.7"
+Environment="ETCD_IMAGE_TAG=v3.4.9"
 Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
 Environment="RKT_RUN_ARGS=--insecure-options=image"
 Environment="ETCD_NAME=${etcd_name}"
@@ -28,11 +28,11 @@ systemd:
 Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
 Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
 - name: docker.service
-enable: true
+enabled: true
 - name: locksmithd.service
 mask: true
 - name: wait-for-dns.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Wait for DNS entries
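The repeated `enable` → `enabled` edits track the Container Linux Config spec, where `enabled` is the key for turning a systemd unit on. Minimal form of one corrected stanza, as in the hunk:

```yaml
systemd:
  units:
    - name: docker.service
      enabled: true
```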
@@ -46,12 +46,14 @@ systemd:
 RequiredBy=kubelet.service
 RequiredBy=etcd-member.service
 - name: kubelet.service
-enable: true
+enabled: true
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube
+Description=Kubelet
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
+Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -90,24 +92,24 @@ systemd:
 --mount volume=var-log,target=/var/log \
 --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
 --mount volume=opt-cni-bin,target=/opt/cni/bin \
-docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+$${KUBELET_IMAGE} -- \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
+--cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
---node-labels=node.kubernetes.io/master \
 --node-labels=node.kubernetes.io/controller="true" \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
---register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
 Restart=always
@ -132,7 +134,7 @@ systemd:
|
|||||||
--volume script,kind=host,source=/opt/bootstrap/apply \
|
--volume script,kind=host,source=/opt/bootstrap/apply \
|
||||||
--mount volume=script,target=/apply \
|
--mount volume=script,target=/apply \
|
||||||
--insecure-options=image \
|
--insecure-options=image \
|
||||||
docker://quay.io/poseidon/kubelet:v1.18.2 \
|
docker://quay.io/poseidon/kubelet:v1.18.6 \
|
||||||
--net=host \
|
--net=host \
|
||||||
--dns=host \
|
--dns=host \
|
||||||
--exec=/apply
|
--exec=/apply
|
||||||
@ -163,11 +165,11 @@ storage:
|
|||||||
chmod -R 500 /etc/ssl/etcd
|
chmod -R 500 /etc/ssl/etcd
|
||||||
mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
|
mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
|
||||||
mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
|
mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
|
||||||
sudo mkdir -p /etc/kubernetes/manifests
|
mkdir -p /etc/kubernetes/manifests
|
||||||
sudo mv static-manifests/* /etc/kubernetes/manifests/
|
mv static-manifests/* /etc/kubernetes/manifests/
|
||||||
sudo mkdir -p /opt/bootstrap/assets
|
mkdir -p /opt/bootstrap/assets
|
||||||
sudo mv manifests /opt/bootstrap/assets/manifests
|
mv manifests /opt/bootstrap/assets/manifests
|
||||||
sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||||
rm -rf assets auth static-manifests tls manifests-networking
|
rm -rf assets auth static-manifests tls manifests-networking
|
||||||
- path: /opt/bootstrap/apply
|
- path: /opt/bootstrap/apply
|
||||||
filesystem: root
|
filesystem: root
|
||||||
@ -186,6 +188,7 @@ storage:
|
|||||||
done
|
done
|
||||||
- path: /etc/sysctl.d/max-user-watches.conf
|
- path: /etc/sysctl.d/max-user-watches.conf
|
||||||
filesystem: root
|
filesystem: root
|
||||||
|
mode: 0644
|
||||||
contents:
|
contents:
|
||||||
inline: |
|
inline: |
|
||||||
fs.inotify.max_user_watches=16184
|
fs.inotify.max_user_watches=16184
|
||||||
|
@@ -53,18 +53,24 @@ resource "azurerm_linux_virtual_machine" "controllers" {
 storage_account_type = "Premium_LRS"
 }

-// CoreOS Container Linux or Flatcar Container Linux (manual upload)
+# CoreOS Container Linux or Flatcar Container Linux
-dynamic "source_image_reference" {
+source_image_reference {
-for_each = local.flavor == "coreos" ? [1] : []
+publisher = local.flavor == "flatcar" ? "Kinvolk" : "CoreOS"
+offer = local.flavor == "flatcar" ? "flatcar-container-linux-free" : "CoreOS"
+sku = local.channel
+version = "latest"
+}
+
+# Gross hack for Flatcar Linux
+dynamic "plan" {
+for_each = local.flavor == "flatcar" ? [1] : []
+
 content {
-publisher = "CoreOS"
+name = local.channel
-offer = "CoreOS"
+publisher = "kinvolk"
-sku = local.channel
+product = "flatcar-container-linux-free"
-version = "latest"
 }
 }
-source_image_id = local.flavor == "coreos" ? null : var.os_image

 # network
 network_interface_ids = [
@@ -133,10 +139,10 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
 
 # Controller Ignition configs
 data "ct_config" "controller-ignitions" {
 count = var.controller_count
 content = data.template_file.controller-configs.*.rendered[count.index]
-pretty_print = false
+strict = true
 snippets = var.controller_snippets
 }

 # Controller Container Linux configs
@@ -151,6 +157,7 @@ data "template_file" "controller-configs" {
 etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
 # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
 etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
+cgroup_driver = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
 kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
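The controller config above references `local.flavor` and `local.channel` without showing their definitions, which fall outside this diff. A plausible sketch of how such locals are commonly derived from a channel string like `flatcar-stable` (an assumption, not part of this change):

```hcl
# Hypothetical locals block: split an os_image value such as "flatcar-stable"
# into a flavor ("flatcar") and a channel ("stable") for use in the
# source_image_reference and plan blocks.
locals {
  flavor  = split("-", var.os_image)[0]
  channel = split("-", var.os_image)[1]
}
```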
@@ -21,7 +21,7 @@ resource "azurerm_subnet" "controller" {

 name = "controller"
 virtual_network_name = azurerm_virtual_network.network.name
-address_prefix = cidrsubnet(var.host_cidr, 1, 0)
+address_prefixes = [cidrsubnet(var.host_cidr, 1, 0)]
 }

 resource "azurerm_subnet_network_security_group_association" "controller" {
@@ -34,7 +34,7 @@ resource "azurerm_subnet" "worker" {

 name = "worker"
 virtual_network_name = azurerm_virtual_network.network.name
-address_prefix = cidrsubnet(var.host_cidr, 1, 1)
+address_prefixes = [cidrsubnet(var.host_cidr, 1, 1)]
 }

 resource "azurerm_subnet_network_security_group_association" "worker" {
@@ -7,6 +7,21 @@ resource "azurerm_network_security_group" "controller" {
 location = azurerm_resource_group.cluster.location
 }

+resource "azurerm_network_security_rule" "controller-icmp" {
+resource_group_name = azurerm_resource_group.cluster.name
+
+name = "allow-icmp"
+network_security_group_name = azurerm_network_security_group.controller.name
+priority = "1995"
+access = "Allow"
+direction = "Inbound"
+protocol = "Icmp"
+source_port_range = "*"
+destination_port_range = "*"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.controller.address_prefix
+}
+
 resource "azurerm_network_security_rule" "controller-ssh" {
 resource_group_name = azurerm_resource_group.cluster.name

@@ -100,6 +115,22 @@ resource "azurerm_network_security_rule" "controller-apiserver" {
 destination_address_prefix = azurerm_subnet.controller.address_prefix
 }

+resource "azurerm_network_security_rule" "controller-cilium-health" {
+resource_group_name = azurerm_resource_group.cluster.name
+count = var.networking == "cilium" ? 1 : 0
+
+name = "allow-cilium-health"
+network_security_group_name = azurerm_network_security_group.controller.name
+priority = "2019"
+access = "Allow"
+direction = "Inbound"
+protocol = "Tcp"
+source_port_range = "*"
+destination_port_range = "4240"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.controller.address_prefix
+}
+
 resource "azurerm_network_security_rule" "controller-vxlan" {
 resource_group_name = azurerm_resource_group.cluster.name

@@ -115,6 +146,21 @@ resource "azurerm_network_security_rule" "controller-vxlan" {
 destination_address_prefix = azurerm_subnet.controller.address_prefix
 }

+resource "azurerm_network_security_rule" "controller-linux-vxlan" {
+resource_group_name = azurerm_resource_group.cluster.name
+
+name = "allow-linux-vxlan"
+network_security_group_name = azurerm_network_security_group.controller.name
+priority = "2021"
+access = "Allow"
+direction = "Inbound"
+protocol = "Udp"
+source_port_range = "*"
+destination_port_range = "8472"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.controller.address_prefix
+}
+
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
 resource_group_name = azurerm_resource_group.cluster.name
@@ -191,6 +237,21 @@ resource "azurerm_network_security_group" "worker" {
 location = azurerm_resource_group.cluster.location
 }

+resource "azurerm_network_security_rule" "worker-icmp" {
+resource_group_name = azurerm_resource_group.cluster.name
+
+name = "allow-icmp"
+network_security_group_name = azurerm_network_security_group.worker.name
+priority = "1995"
+access = "Allow"
+direction = "Inbound"
+protocol = "Icmp"
+source_port_range = "*"
+destination_port_range = "*"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.worker.address_prefix
+}
+
 resource "azurerm_network_security_rule" "worker-ssh" {
 resource_group_name = azurerm_resource_group.cluster.name

@@ -236,6 +297,22 @@ resource "azurerm_network_security_rule" "worker-https" {
 destination_address_prefix = azurerm_subnet.worker.address_prefix
 }

+resource "azurerm_network_security_rule" "worker-cilium-health" {
+resource_group_name = azurerm_resource_group.cluster.name
+count = var.networking == "cilium" ? 1 : 0
+
+name = "allow-cilium-health"
+network_security_group_name = azurerm_network_security_group.worker.name
+priority = "2014"
+access = "Allow"
+direction = "Inbound"
+protocol = "Tcp"
+source_port_range = "*"
+destination_port_range = "4240"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.worker.address_prefix
+}
+
 resource "azurerm_network_security_rule" "worker-vxlan" {
 resource_group_name = azurerm_resource_group.cluster.name

@@ -251,6 +328,21 @@ resource "azurerm_network_security_rule" "worker-vxlan" {
 destination_address_prefix = azurerm_subnet.worker.address_prefix
 }

+resource "azurerm_network_security_rule" "worker-linux-vxlan" {
+resource_group_name = azurerm_resource_group.cluster.name
+
+name = "allow-linux-vxlan"
+network_security_group_name = azurerm_network_security_group.worker.name
+priority = "2016"
+access = "Allow"
+direction = "Inbound"
+protocol = "Udp"
+source_port_range = "*"
+destination_port_range = "8472"
+source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+destination_address_prefix = azurerm_subnet.worker.address_prefix
+}
+
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
 resource_group_name = azurerm_resource_group.cluster.name
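The `allow-cilium-health` rules above are guarded with Terraform 0.12's conditional `count` idiom: when the expression evaluates to `0`, the resource is simply not created. A minimal self-contained sketch of the pattern (the `null_resource` here is illustrative, not part of this change):

```hcl
variable "networking" {
  type    = string
  default = "calico"
}

# Created exactly once when networking is "cilium"; with any other value,
# count is 0 and Terraform creates nothing.
resource "null_resource" "cilium_only" {
  count = var.networking == "cilium" ? 1 : 0
}
```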
@@ -48,7 +48,8 @@ variable "worker_type" {

 variable "os_image" {
 type = string
-description = "Channel for a Container Linux derivative (/subscriptions/some-flatcar-upload, coreos-stable, coreos-beta, coreos-alpha)"
+description = "Channel for a Container Linux derivative (flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge, coreos-stable, coreos-beta, coreos-alpha)"
+default = "flatcar-stable"
 }

 variable "disk_size" {
@@ -3,8 +3,8 @@
 terraform {
 required_version = "~> 0.12.6"
 required_providers {
-azurerm = "~> 2.0"
+azurerm = "~> 2.8"
-ct = "~> 0.3"
+ct = "~> 0.4"
 template = "~> 2.1"
 null = "~> 2.1"
 }
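The provider bumps above use Terraform's pessimistic version constraint operator. As a reminder of its standard semantics (general Terraform behavior, not specific to this repository):

```hcl
terraform {
  required_providers {
    # "~> 2.8" accepts 2.8 and any later 2.x release (2.9, 2.20, ...),
    # but rejects 3.0: the rightmost given component may increase.
    azurerm = "~> 2.8"
    # "~> 0.4" likewise accepts 0.4 through 0.x, but not 1.0.
    ct = "~> 0.4"
  }
}
```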
@@ -2,11 +2,11 @@
 systemd:
 units:
 - name: docker.service
-enable: true
+enabled: true
 - name: locksmithd.service
 mask: true
 - name: wait-for-dns.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Wait for DNS entries
@@ -19,12 +19,14 @@ systemd:
 [Install]
 RequiredBy=kubelet.service
 - name: kubelet.service
-enable: true
+enabled: true
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube
+Description=Kubelet
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
+Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -63,18 +65,18 @@ systemd:
 --mount volume=var-log,target=/var/log \
 --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
 --mount volume=opt-cni-bin,target=/opt/cni/bin \
-docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+$${KUBELET_IMAGE} -- \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
+--cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
---kubeconfig=/etc/kubernetes/kubeconfig \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
 --network-plugin=cni \
 --node-labels=node.kubernetes.io/node \
 %{~ for label in split(",", node_labels) ~}
@@ -82,6 +84,7 @@ systemd:
 %{~ endfor ~}
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
 Restart=always
@@ -89,7 +92,7 @@ systemd:
 [Install]
 WantedBy=multi-user.target
 - name: delete-node.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Waiting to delete Kubernetes node on shutdown
@@ -110,6 +113,7 @@ storage:
 ${kubeconfig}
 - path: /etc/sysctl.d/max-user-watches.conf
 filesystem: root
+mode: 0644
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
@@ -125,7 +129,7 @@ storage:
 --volume config,kind=host,source=/etc/kubernetes \
 --mount volume=config,target=/etc/kubernetes \
 --insecure-options=image \
-docker://quay.io/poseidon/kubelet:v1.18.2 \
+docker://quay.io/poseidon/kubelet:v1.18.6 \
 --net=host \
 --dns=host \
 --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')
@@ -46,7 +46,7 @@ variable "vm_type" {

 variable "os_image" {
 type = string
-description = "Channel for a Container Linux derivative (flatcar-stable, flatcar-beta, coreos-stable, coreos-beta, coreos-alpha)"
+description = "Channel for a Container Linux derivative (flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge, coreos-stable, coreos-beta, coreos-alpha)"
 default = "flatcar-stable"
 }

@@ -24,18 +24,24 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
 caching = "ReadWrite"
 }

-// CoreOS Container Linux or Flatcar Container Linux (manual upload)
+# CoreOS Container Linux or Flatcar Container Linux
-dynamic "source_image_reference" {
+source_image_reference {
-for_each = local.flavor == "coreos" ? [1] : []
+publisher = local.flavor == "flatcar" ? "Kinvolk" : "CoreOS"
+offer = local.flavor == "flatcar" ? "flatcar-container-linux-free" : "CoreOS"
+sku = local.channel
+version = "latest"
+}
+
+# Gross hack for Flatcar Linux
+dynamic "plan" {
+for_each = local.flavor == "flatcar" ? [1] : []
+
 content {
-publisher = "CoreOS"
+name = local.channel
-offer = "CoreOS"
+publisher = "kinvolk"
-sku = local.channel
+product = "flatcar-container-linux-free"
-version = "latest"
 }
 }
-source_image_id = local.flavor == "coreos" ? null : var.os_image

 # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
 admin_username = "core"
@@ -91,9 +97,9 @@ resource "azurerm_monitor_autoscale_setting" "workers" {

 # Worker Ignition configs
 data "ct_config" "worker-ignition" {
 content = data.template_file.worker-config.rendered
-pretty_print = false
+strict = true
 snippets = var.snippets
 }

 # Worker Container Linux configs
@@ -105,6 +111,7 @@ data "template_file" "worker-config" {
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
 cluster_domain_suffix = var.cluster_domain_suffix
+cgroup_driver = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
 node_labels = join(",", var.node_labels)
 }
 }
@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
+* Kubernetes v1.18.6 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -10,8 +10,9 @@ module "bootstrap" {
 networking = var.networking

 # only effective with Calico networking
+# we should be able to use 1450 MTU, but in practice, 1410 was needed
 network_encapsulation = "vxlan"
-network_mtu = "1450"
+network_mtu = "1410"

 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
@@ -113,10 +113,10 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle

 # Controller Ignition configs
 data "ct_config" "controller-ignitions" {
 count = var.controller_count
 content = data.template_file.controller-configs.*.rendered[count.index]
-pretty_print = false
+strict = true
 snippets = var.controller_snippets
 }

 # Controller Fedora CoreOS configs
@@ -28,7 +28,7 @@ systemd:
 --network host \
 --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
 --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
-quay.io/coreos/etcd:v3.4.7
+quay.io/coreos/etcd:v3.4.9
 ExecStop=/usr/bin/podman stop etcd
 [Install]
 WantedBy=multi-user.target
@@ -51,9 +51,10 @@ systemd:
 enabled: true
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube (System Container)
+Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -79,10 +80,11 @@ systemd:
 --volume /var/log:/var/log \
 --volume /var/run/lock:/var/run/lock:z \
 --volume /opt/cni/bin:/opt/cni/bin:z \
-quay.io/poseidon/kubelet:v1.18.2 \
+$${KUBELET_IMAGE} \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --cgroup-driver=systemd \
 --cgroups-per-qos=true \
 --enforce-node-allocatable=pods \
@@ -90,16 +92,14 @@ systemd:
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
---kubeconfig=/etc/kubernetes/kubeconfig \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
 --network-plugin=cni \
---node-labels=node.kubernetes.io/master \
 --node-labels=node.kubernetes.io/controller="true" \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
---register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/podman stop kubelet
 Delegate=yes
@@ -123,7 +123,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.18.2
+quay.io/poseidon/kubelet:v1.18.6
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -151,11 +151,11 @@ storage:
 chmod -R 500 /etc/ssl/etcd
 mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
 mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-sudo mkdir -p /etc/kubernetes/manifests
+mkdir -p /etc/kubernetes/manifests
-sudo mv static-manifests/* /etc/kubernetes/manifests/
+mv static-manifests/* /etc/kubernetes/manifests/
-sudo mkdir -p /opt/bootstrap/assets
+mkdir -p /opt/bootstrap/assets
-sudo mv manifests /opt/bootstrap/assets/manifests
+mv manifests /opt/bootstrap/assets/manifests
-sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+mv manifests-networking/* /opt/bootstrap/assets/manifests/
 rm -rf assets auth static-manifests tls manifests-networking
 - path: /opt/bootstrap/apply
 mode: 0544
@@ -175,6 +175,11 @@ storage:
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
+- path: /etc/sysctl.d/reverse-path-filter.conf
|
||||||
|
contents:
|
||||||
|
inline: |
|
||||||
|
net.ipv4.conf.default.rp_filter=0
|
||||||
|
net.ipv4.conf.*.rp_filter=0
|
||||||
- path: /etc/systemd/system.conf.d/accounting.conf
|
- path: /etc/systemd/system.conf.d/accounting.conf
|
||||||
contents:
|
contents:
|
||||||
inline: |
|
inline: |
|
||||||
@@ -21,7 +21,7 @@ resource "azurerm_subnet" "controller" {

  name = "controller"
  virtual_network_name = azurerm_virtual_network.network.name
- address_prefix = cidrsubnet(var.host_cidr, 1, 0)
+ address_prefixes = [cidrsubnet(var.host_cidr, 1, 0)]
  }

  resource "azurerm_subnet_network_security_group_association" "controller" {
@@ -34,7 +34,7 @@ resource "azurerm_subnet" "worker" {

  name = "worker"
  virtual_network_name = azurerm_virtual_network.network.name
- address_prefix = cidrsubnet(var.host_cidr, 1, 1)
+ address_prefixes = [cidrsubnet(var.host_cidr, 1, 1)]
  }

  resource "azurerm_subnet_network_security_group_association" "worker" {
@@ -7,6 +7,21 @@ resource "azurerm_network_security_group" "controller" {
  location = azurerm_resource_group.cluster.location
  }
+
+ resource "azurerm_network_security_rule" "controller-icmp" {
+ resource_group_name = azurerm_resource_group.cluster.name
+
+ name = "allow-icmp"
+ network_security_group_name = azurerm_network_security_group.controller.name
+ priority = "1995"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Icmp"
+ source_port_range = "*"
+ destination_port_range = "*"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.controller.address_prefix
+ }
+
  resource "azurerm_network_security_rule" "controller-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -100,6 +115,22 @@ resource "azurerm_network_security_rule" "controller-apiserver" {
  destination_address_prefix = azurerm_subnet.controller.address_prefix
  }
+
+ resource "azurerm_network_security_rule" "controller-cilium-health" {
+ resource_group_name = azurerm_resource_group.cluster.name
+ count = var.networking == "cilium" ? 1 : 0
+
+ name = "allow-cilium-health"
+ network_security_group_name = azurerm_network_security_group.controller.name
+ priority = "2019"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Tcp"
+ source_port_range = "*"
+ destination_port_range = "4240"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.controller.address_prefix
+ }
+
  resource "azurerm_network_security_rule" "controller-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -115,6 +146,21 @@ resource "azurerm_network_security_rule" "controller-vxlan" {
  destination_address_prefix = azurerm_subnet.controller.address_prefix
  }
+
+ resource "azurerm_network_security_rule" "controller-linux-vxlan" {
+ resource_group_name = azurerm_resource_group.cluster.name
+
+ name = "allow-linux-vxlan"
+ network_security_group_name = azurerm_network_security_group.controller.name
+ priority = "2021"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Udp"
+ source_port_range = "*"
+ destination_port_range = "8472"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.controller.address_prefix
+ }
+
  # Allow Prometheus to scrape node-exporter daemonset
  resource "azurerm_network_security_rule" "controller-node-exporter" {
  resource_group_name = azurerm_resource_group.cluster.name
@@ -191,6 +237,21 @@ resource "azurerm_network_security_group" "worker" {
  location = azurerm_resource_group.cluster.location
  }
+
+ resource "azurerm_network_security_rule" "worker-icmp" {
+ resource_group_name = azurerm_resource_group.cluster.name
+
+ name = "allow-icmp"
+ network_security_group_name = azurerm_network_security_group.worker.name
+ priority = "1995"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Icmp"
+ source_port_range = "*"
+ destination_port_range = "*"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.worker.address_prefix
+ }
+
  resource "azurerm_network_security_rule" "worker-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -236,6 +297,22 @@ resource "azurerm_network_security_rule" "worker-https" {
  destination_address_prefix = azurerm_subnet.worker.address_prefix
  }
+
+ resource "azurerm_network_security_rule" "worker-cilium-health" {
+ resource_group_name = azurerm_resource_group.cluster.name
+ count = var.networking == "cilium" ? 1 : 0
+
+ name = "allow-cilium-health"
+ network_security_group_name = azurerm_network_security_group.worker.name
+ priority = "2014"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Tcp"
+ source_port_range = "*"
+ destination_port_range = "4240"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.worker.address_prefix
+ }
+
  resource "azurerm_network_security_rule" "worker-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

@@ -251,6 +328,21 @@ resource "azurerm_network_security_rule" "worker-vxlan" {
  destination_address_prefix = azurerm_subnet.worker.address_prefix
  }
+
+ resource "azurerm_network_security_rule" "worker-linux-vxlan" {
+ resource_group_name = azurerm_resource_group.cluster.name
+
+ name = "allow-linux-vxlan"
+ network_security_group_name = azurerm_network_security_group.worker.name
+ priority = "2016"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Udp"
+ source_port_range = "*"
+ destination_port_range = "8472"
+ source_address_prefixes = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
+ destination_address_prefix = azurerm_subnet.worker.address_prefix
+ }
+
  # Allow Prometheus to scrape node-exporter daemonset
  resource "azurerm_network_security_rule" "worker-node-exporter" {
  resource_group_name = azurerm_resource_group.cluster.name
@@ -3,8 +3,8 @@
  terraform {
  required_version = "~> 0.12.6"
  required_providers {
- azurerm = "~> 2.0"
+ azurerm = "~> 2.8"
- ct = "~> 0.3"
+ ct = "~> 0.4"
  template = "~> 2.1"
  null = "~> 2.1"
  }
@@ -21,9 +21,10 @@ systemd:
  enabled: true
  contents: |
  [Unit]
- Description=Kubelet via Hyperkube (System Container)
+ Description=Kubelet (System Container)
  Wants=rpc-statd.service
  [Service]
+ Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
  ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -49,10 +50,11 @@ systemd:
  --volume /var/log:/var/log \
  --volume /var/run/lock:/var/run/lock:z \
  --volume /opt/cni/bin:/opt/cni/bin:z \
- quay.io/poseidon/kubelet:v1.18.2 \
+ $${KUBELET_IMAGE} \
  --anonymous-auth=false \
  --authentication-token-webhook \
  --authorization-mode=Webhook \
+ --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
  --cgroup-driver=systemd \
  --cgroups-per-qos=true \
  --enforce-node-allocatable=pods \
@@ -60,10 +62,8 @@ systemd:
  --cluster_dns=${cluster_dns_service_ip} \
  --cluster_domain=${cluster_domain_suffix} \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
- --exit-on-lock-contention \
  --healthz-port=0 \
- --kubeconfig=/etc/kubernetes/kubeconfig \
+ --kubeconfig=/var/lib/kubelet/kubeconfig \
- --lock-file=/var/run/lock/kubelet.lock \
  --network-plugin=cni \
  --node-labels=node.kubernetes.io/node \
  %{~ for label in split(",", node_labels) ~}
@@ -71,6 +71,7 @@ systemd:
  %{~ endfor ~}
  --pod-manifest-path=/etc/kubernetes/manifests \
  --read-only-port=0 \
+ --rotate-certificates \
  --volume-plugin-dir=/var/lib/kubelet/volumeplugins
  ExecStop=-/usr/bin/podman stop kubelet
  Delegate=yes
@@ -87,7 +88,7 @@ systemd:
  Type=oneshot
  RemainAfterExit=true
  ExecStart=/bin/true
- ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.2 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
+ ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
  [Install]
  WantedBy=multi-user.target
  storage:
@@ -103,6 +104,11 @@ storage:
  contents:
  inline: |
  fs.inotify.max_user_watches=16184
+ - path: /etc/sysctl.d/reverse-path-filter.conf
+ contents:
+ inline: |
+ net.ipv4.conf.default.rp_filter=0
+ net.ipv4.conf.*.rp_filter=0
  - path: /etc/systemd/system.conf.d/accounting.conf
  contents:
  inline: |
@@ -72,9 +72,9 @@ resource "azurerm_monitor_autoscale_setting" "workers" {

  # Worker Ignition configs
  data "ct_config" "worker-ignition" {
  content = data.template_file.worker-config.rendered
- pretty_print = false
+ strict = true
  snippets = var.snippets
  }

  # Worker Fedora CoreOS configs
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

  ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

- * Kubernetes v1.18.2 (upstream)
+ * Kubernetes v1.18.6 (upstream)
- * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+ * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
  * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
  * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
  * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
  # Kubernetes assets (kubeconfig, manifests)
  module "bootstrap" {
- source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+ source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

  cluster_name = var.cluster_name
  api_servers = [var.k8s_domain_name]
@@ -2,12 +2,12 @@
  systemd:
  units:
  - name: etcd-member.service
- enable: true
+ enabled: true
  dropins:
  - name: 40-etcd-cluster.conf
  contents: |
  [Service]
- Environment="ETCD_IMAGE_TAG=v3.4.7"
+ Environment="ETCD_IMAGE_TAG=v3.4.9"
  Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
  Environment="RKT_RUN_ARGS=--insecure-options=image"
  Environment="ETCD_NAME=${etcd_name}"
@@ -28,11 +28,11 @@ systemd:
  Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
  Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
  - name: docker.service
- enable: true
+ enabled: true
  - name: locksmithd.service
  mask: true
  - name: kubelet.path
- enable: true
+ enabled: true
  contents: |
  [Unit]
  Description=Watch for kubeconfig
@@ -41,7 +41,7 @@ systemd:
  [Install]
  WantedBy=multi-user.target
  - name: wait-for-dns.service
- enable: true
+ enabled: true
  contents: |
  [Unit]
  Description=Wait for DNS entries
@@ -57,9 +57,10 @@ systemd:
  - name: kubelet.service
  contents: |
  [Unit]
- Description=Kubelet via Hyperkube
+ Description=Kubelet
  Wants=rpc-statd.service
  [Service]
+ Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
  Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -103,26 +104,25 @@ systemd:
  --mount volume=etc-iscsi,target=/etc/iscsi \
  --volume usr-sbin-iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
  --mount volume=usr-sbin-iscsiadm,target=/sbin/iscsiadm \
- docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+ $${KUBELET_IMAGE} -- \
  --anonymous-auth=false \
  --authentication-token-webhook \
  --authorization-mode=Webhook \
+ --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
  --cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
  --client-ca-file=/etc/kubernetes/ca.crt \
  --cluster_dns=${cluster_dns_service_ip} \
  --cluster_domain=${cluster_domain_suffix} \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
- --exit-on-lock-contention \
  --healthz-port=0 \
  --hostname-override=${domain_name} \
- --kubeconfig=/etc/kubernetes/kubeconfig \
+ --kubeconfig=/var/lib/kubelet/kubeconfig \
- --lock-file=/var/run/lock/kubelet.lock \
  --network-plugin=cni \
- --node-labels=node.kubernetes.io/master \
  --node-labels=node.kubernetes.io/controller="true" \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --read-only-port=0 \
- --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+ --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+ --rotate-certificates \
  --volume-plugin-dir=/var/lib/kubelet/volumeplugins
  ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
  Restart=always
@@ -147,7 +147,7 @@ systemd:
  --volume script,kind=host,source=/opt/bootstrap/apply \
  --mount volume=script,target=/apply \
  --insecure-options=image \
- docker://quay.io/poseidon/kubelet:v1.18.2 \
+ docker://quay.io/poseidon/kubelet:v1.18.6 \
  --net=host \
  --dns=host \
  --exec=/apply
@@ -158,6 +158,7 @@ storage:
  directories:
  - path: /etc/kubernetes
  filesystem: root
+ mode: 0755
  files:
  - path: /etc/hostname
  filesystem: root
@@ -181,11 +182,11 @@ storage:
  chmod -R 500 /etc/ssl/etcd
  mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
  mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
- sudo mkdir -p /etc/kubernetes/manifests
+ mkdir -p /etc/kubernetes/manifests
- sudo mv static-manifests/* /etc/kubernetes/manifests/
+ mv static-manifests/* /etc/kubernetes/manifests/
- sudo mkdir -p /opt/bootstrap/assets
+ mkdir -p /opt/bootstrap/assets
- sudo mv manifests /opt/bootstrap/assets/manifests
+ mv manifests /opt/bootstrap/assets/manifests
- sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+ mv manifests-networking/* /opt/bootstrap/assets/manifests/
  rm -rf assets auth static-manifests tls manifests-networking
  - path: /opt/bootstrap/apply
  filesystem: root
@@ -204,6 +205,7 @@ storage:
  done
  - path: /etc/sysctl.d/max-user-watches.conf
  filesystem: root
+ mode: 0644
  contents:
  inline: |
  fs.inotify.max_user_watches=16184
@@ -2,7 +2,7 @@
  systemd:
  units:
  - name: installer.service
- enable: true
+ enabled: true
  contents: |
  [Unit]
  Requires=network-online.target
@@ -2,11 +2,11 @@
  systemd:
  units:
  - name: docker.service
- enable: true
+ enabled: true
  - name: locksmithd.service
  mask: true
  - name: kubelet.path
- enable: true
+ enabled: true
  contents: |
  [Unit]
  Description=Watch for kubeconfig
@@ -15,7 +15,7 @@ systemd:
  [Install]
  WantedBy=multi-user.target
  - name: wait-for-dns.service
- enable: true
+ enabled: true
  contents: |
  [Unit]
  Description=Wait for DNS entries
@@ -30,9 +30,10 @@ systemd:
  - name: kubelet.service
  contents: |
  [Unit]
- Description=Kubelet via Hyperkube
+ Description=Kubelet
  Wants=rpc-statd.service
  [Service]
+ Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
  Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
  ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -76,20 +77,19 @@ systemd:
  --mount volume=etc-iscsi,target=/etc/iscsi \
  --volume usr-sbin-iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
  --mount volume=usr-sbin-iscsiadm,target=/sbin/iscsiadm \
- docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+ $${KUBELET_IMAGE} -- \
  --anonymous-auth=false \
  --authentication-token-webhook \
  --authorization-mode=Webhook \
+ --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
  --cgroup-driver=$${KUBELET_CGROUP_DRIVER} \
  --client-ca-file=/etc/kubernetes/ca.crt \
  --cluster_dns=${cluster_dns_service_ip} \
  --cluster_domain=${cluster_domain_suffix} \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
- --exit-on-lock-contention \
  --healthz-port=0 \
  --hostname-override=${domain_name} \
- --kubeconfig=/etc/kubernetes/kubeconfig \
+ --kubeconfig=/var/lib/kubelet/kubeconfig \
- --lock-file=/var/run/lock/kubelet.lock \
  --network-plugin=cni \
  --node-labels=node.kubernetes.io/node \
  %{~ for label in compact(split(",", node_labels)) ~}
@@ -100,6 +100,7 @@ systemd:
  %{~ endfor ~}
  --pod-manifest-path=/etc/kubernetes/manifests \
  --read-only-port=0 \
+ --rotate-certificates \
  --volume-plugin-dir=/var/lib/kubelet/volumeplugins
  ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
  Restart=always
@@ -111,6 +112,7 @@ storage:
  directories:
  - path: /etc/kubernetes
  filesystem: root
+ mode: 0755
  files:
  - path: /etc/hostname
  filesystem: root
@@ -120,6 +122,7 @@ storage:
  ${domain_name}
  - path: /etc/sysctl.d/max-user-watches.conf
  filesystem: root
+ mode: 0644
  contents:
  inline: |
  fs.inotify.max_user_watches=16184
@@ -141,10 +141,10 @@ resource "matchbox_profile" "controllers" {
  }

  data "ct_config" "controller-ignitions" {
  count = length(var.controllers)
  content = data.template_file.controller-configs.*.rendered[count.index]
- pretty_print = false
+ strict = true
  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
  }

  data "template_file" "controller-configs" {
@@ -171,10 +171,10 @@ resource "matchbox_profile" "workers" {
  }

  data "ct_config" "worker-ignitions" {
  count = length(var.workers)
  content = data.template_file.worker-configs.*.rendered[count.index]
- pretty_print = false
+ strict = true
  snippets = lookup(var.snippets, var.workers.*.name[count.index], [])
  }

  data "template_file" "worker-configs" {
|
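For reference, terraform-provider-ct v0.4 removed the `pretty_print` argument in favor of strict validation, which is the swap made in the `ct_config` data sources above. A minimal sketch of the updated data source (names and fields as in this module):

```hcl
# ct v0.4: "pretty_print" was removed; "strict = true" turns an invalid
# Container Linux Config into a plan-time error instead of a silent pass.
data "ct_config" "controller-ignitions" {
  count    = length(var.controllers)
  content  = data.template_file.controller-configs.*.rendered[count.index]
  strict   = true
  snippets = lookup(var.snippets, var.controllers.*.name[count.index], [])
}
```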
@@ -4,7 +4,7 @@ terraform {
 required_version = "~> 0.12.6"
 required_providers {
 matchbox = "~> 0.3.0"
-ct = "~> 0.3"
+ct = "~> 0.4"
 template = "~> 2.1"
 null = "~> 2.1"
 }
@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

 cluster_name = var.cluster_name
 api_servers = [var.k8s_domain_name]
@@ -28,7 +28,7 @@ systemd:
 --network host \
 --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
 --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
-quay.io/coreos/etcd:v3.4.7
+quay.io/coreos/etcd:v3.4.9
 ExecStop=/usr/bin/podman stop etcd
 [Install]
 WantedBy=multi-user.target
@@ -50,9 +50,10 @@ systemd:
 - name: kubelet.service
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube (System Container)
+Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -80,10 +81,11 @@ systemd:
 --volume /opt/cni/bin:/opt/cni/bin:z \
 --volume /etc/iscsi:/etc/iscsi \
 --volume /sbin/iscsiadm:/sbin/iscsiadm \
-quay.io/poseidon/kubelet:v1.18.2 \
+$${KUBELET_IMAGE} \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --cgroup-driver=systemd \
 --cgroups-per-qos=true \
 --enforce-node-allocatable=pods \
@@ -91,17 +93,15 @@ systemd:
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
 --hostname-override=${domain_name} \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
---node-labels=node.kubernetes.io/master \
 --node-labels=node.kubernetes.io/controller="true" \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
---register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/podman stop kubelet
 Delegate=yes
@@ -134,7 +134,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.18.2
+quay.io/poseidon/kubelet:v1.18.6
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -162,11 +162,11 @@ storage:
 chmod -R 500 /etc/ssl/etcd
 mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
 mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-sudo mkdir -p /etc/kubernetes/manifests
-sudo mv static-manifests/* /etc/kubernetes/manifests/
-sudo mkdir -p /opt/bootstrap/assets
-sudo mv manifests /opt/bootstrap/assets/manifests
-sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+mkdir -p /etc/kubernetes/manifests
+mv static-manifests/* /etc/kubernetes/manifests/
+mkdir -p /opt/bootstrap/assets
+mv manifests /opt/bootstrap/assets/manifests
+mv manifests-networking/* /opt/bootstrap/assets/manifests/
 rm -rf assets auth static-manifests tls manifests-networking
 - path: /opt/bootstrap/apply
 mode: 0544
@@ -186,6 +186,11 @@ storage:
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
+- path: /etc/sysctl.d/reverse-path-filter.conf
+contents:
+inline: |
+net.ipv4.conf.default.rp_filter=0
+net.ipv4.conf.*.rp_filter=0
 - path: /etc/systemd/system.conf.d/accounting.conf
 contents:
 inline: |
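The reverse-path-filter drop-in above is baked into the controller template; a comparable per-machine override could also be supplied through a module's `snippets` variable instead. A hypothetical sketch (module name, ref, and the elided required arguments are placeholders, not from this changeset):

```hcl
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.18.6"
  # ...other required cluster arguments elided...

  # snippets are keyed by machine name; each entry is a list of
  # Fedora CoreOS Configs merged into that machine's config
  snippets = {
    "node1" = [
      <<-EOF
      variant: fcos
      version: 1.0.0
      storage:
        files:
          - path: /etc/sysctl.d/reverse-path-filter.conf
            mode: 0644
            contents:
              inline: |
                net.ipv4.conf.default.rp_filter=0
                net.ipv4.conf.*.rp_filter=0
      EOF
    ]
  }
}
```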
@@ -20,9 +20,10 @@ systemd:
 - name: kubelet.service
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube (System Container)
+Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -50,10 +51,11 @@ systemd:
 --volume /opt/cni/bin:/opt/cni/bin:z \
 --volume /etc/iscsi:/etc/iscsi \
 --volume /sbin/iscsiadm:/sbin/iscsiadm \
-quay.io/poseidon/kubelet:v1.18.2 \
+$${KUBELET_IMAGE} \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --cgroup-driver=systemd \
 --cgroups-per-qos=true \
 --enforce-node-allocatable=pods \
@@ -61,11 +63,9 @@ systemd:
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
 --hostname-override=${domain_name} \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
 --node-labels=node.kubernetes.io/node \
 %{~ for label in compact(split(",", node_labels)) ~}
@@ -76,6 +76,7 @@ systemd:
 %{~ endfor ~}
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/podman stop kubelet
 Delegate=yes
@@ -105,6 +106,11 @@ storage:
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
+- path: /etc/sysctl.d/reverse-path-filter.conf
+contents:
+inline: |
+net.ipv4.conf.default.rp_filter=0
+net.ipv4.conf.*.rp_filter=0
 - path: /etc/systemd/system.conf.d/accounting.conf
 contents:
 inline: |
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
-* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.18.6 (upstream)
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -2,12 +2,12 @@
 systemd:
 units:
 - name: etcd-member.service
-enable: true
+enabled: true
 dropins:
 - name: 40-etcd-cluster.conf
 contents: |
 [Service]
-Environment="ETCD_IMAGE_TAG=v3.4.7"
+Environment="ETCD_IMAGE_TAG=v3.4.9"
 Environment="ETCD_IMAGE_URL=docker://quay.io/coreos/etcd"
 Environment="RKT_RUN_ARGS=--insecure-options=image"
 Environment="ETCD_NAME=${etcd_name}"
@@ -28,11 +28,11 @@ systemd:
 Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
 Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
 - name: docker.service
-enable: true
+enabled: true
 - name: locksmithd.service
 mask: true
 - name: kubelet.path
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Watch for kubeconfig
@@ -41,7 +41,7 @@ systemd:
 [Install]
 WantedBy=multi-user.target
 - name: wait-for-dns.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Wait for DNS entries
@@ -57,11 +57,12 @@ systemd:
 - name: kubelet.service
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube
+Description=Kubelet
 Requires=coreos-metadata.service
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -101,25 +102,24 @@ systemd:
 --mount volume=var-log,target=/var/log \
 --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
 --mount volume=opt-cni-bin,target=/opt/cni/bin \
-docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+$${KUBELET_IMAGE} -- \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
 --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
---node-labels=node.kubernetes.io/master \
 --node-labels=node.kubernetes.io/controller="true" \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
---register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+--register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
 Restart=always
@@ -144,7 +144,7 @@ systemd:
 --volume script,kind=host,source=/opt/bootstrap/apply \
 --mount volume=script,target=/apply \
 --insecure-options=image \
-docker://quay.io/poseidon/kubelet:v1.18.2 \
+docker://quay.io/poseidon/kubelet:v1.18.6 \
 --net=host \
 --dns=host \
 --exec=/apply
@@ -155,6 +155,7 @@ storage:
 directories:
 - path: /etc/kubernetes
 filesystem: root
+mode: 0755
 files:
 - path: /opt/bootstrap/layout
 filesystem: root
@@ -172,11 +173,11 @@ storage:
 chmod -R 500 /etc/ssl/etcd
 mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
 mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-sudo mkdir -p /etc/kubernetes/manifests
-sudo mv static-manifests/* /etc/kubernetes/manifests/
-sudo mkdir -p /opt/bootstrap/assets
-sudo mv manifests /opt/bootstrap/assets/manifests
-sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+mkdir -p /etc/kubernetes/manifests
+mv static-manifests/* /etc/kubernetes/manifests/
+mkdir -p /opt/bootstrap/assets
+mv manifests /opt/bootstrap/assets/manifests
+mv manifests-networking/* /opt/bootstrap/assets/manifests/
 rm -rf assets auth static-manifests tls manifests-networking
 - path: /opt/bootstrap/apply
 filesystem: root
@@ -195,6 +196,7 @@ storage:
 done
 - path: /etc/sysctl.d/max-user-watches.conf
 filesystem: root
+mode: 0644
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
@@ -2,11 +2,11 @@
 systemd:
 units:
 - name: docker.service
-enable: true
+enabled: true
 - name: locksmithd.service
 mask: true
 - name: kubelet.path
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Watch for kubeconfig
@@ -15,7 +15,7 @@ systemd:
 [Install]
 WantedBy=multi-user.target
 - name: wait-for-dns.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Wait for DNS entries
@@ -30,11 +30,12 @@ systemd:
 - name: kubelet.service
 contents: |
 [Unit]
-Description=Kubelet via Hyperkube
+Description=Kubelet
 Requires=coreos-metadata.service
 After=coreos-metadata.service
 Wants=rpc-statd.service
 [Service]
+Environment=KUBELET_IMAGE=docker://quay.io/poseidon/kubelet:v1.18.6
 EnvironmentFile=/run/metadata/coreos
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -74,23 +75,23 @@ systemd:
 --mount volume=var-log,target=/var/log \
 --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
 --mount volume=opt-cni-bin,target=/opt/cni/bin \
-docker://quay.io/poseidon/kubelet:v1.18.2 -- \
+$${KUBELET_IMAGE} -- \
 --anonymous-auth=false \
 --authentication-token-webhook \
 --authorization-mode=Webhook \
+--bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${cluster_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
 --cni-conf-dir=/etc/kubernetes/cni/net.d \
---exit-on-lock-contention \
 --healthz-port=0 \
 --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
---kubeconfig=/etc/kubernetes/kubeconfig \
---lock-file=/var/run/lock/kubelet.lock \
+--kubeconfig=/var/lib/kubelet/kubeconfig \
 --network-plugin=cni \
 --node-labels=node.kubernetes.io/node \
 --pod-manifest-path=/etc/kubernetes/manifests \
 --read-only-port=0 \
+--rotate-certificates \
 --volume-plugin-dir=/var/lib/kubelet/volumeplugins
 ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
 Restart=always
@@ -98,7 +99,7 @@ systemd:
 [Install]
 WantedBy=multi-user.target
 - name: delete-node.service
-enable: true
+enabled: true
 contents: |
 [Unit]
 Description=Waiting to delete Kubernetes node on shutdown
@@ -113,9 +114,11 @@ storage:
 directories:
 - path: /etc/kubernetes
 filesystem: root
+mode: 0755
 files:
 - path: /etc/sysctl.d/max-user-watches.conf
 filesystem: root
+mode: 0644
 contents:
 inline: |
 fs.inotify.max_user_watches=16184
@@ -131,7 +134,7 @@ storage:
 --volume config,kind=host,source=/etc/kubernetes \
 --mount volume=config,target=/etc/kubernetes \
 --insecure-options=image \
-docker://quay.io/poseidon/kubelet:v1.18.2 \
+docker://quay.io/poseidon/kubelet:v1.18.6 \
 --net=host \
 --dns=host \
 --exec=/usr/local/bin/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -46,9 +46,10 @@ resource "digitalocean_droplet" "controllers" {
 size = var.controller_type

 # network
-# only official DigitalOcean images support IPv6
-ipv6 = local.is_official_image
 private_networking = true
+vpc_uuid = digitalocean_vpc.network.id
+# TODO: Only official DigitalOcean images support IPv6
+ipv6 = false

 user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
 ssh_keys = var.ssh_fingerprints
@@ -69,10 +70,10 @@ resource "digitalocean_tag" "controllers" {

 # Controller Ignition configs
 data "ct_config" "controller-ignitions" {
 count = var.controller_count
 content = data.template_file.controller-configs.*.rendered[count.index]
-pretty_print = false
+strict = true
 snippets = var.controller_snippets
 }

 # Controller Container Linux configs
@@ -1,3 +1,10 @@
+# Network VPC
+resource "digitalocean_vpc" "network" {
+name = var.cluster_name
+region = var.region
+description = "Network for ${var.cluster_name} cluster"
+}
+
 resource "digitalocean_firewall" "rules" {
 name = var.cluster_name

@@ -6,6 +13,11 @@ resource "digitalocean_firewall" "rules" {
 digitalocean_tag.workers.name
 ]

+inbound_rule {
+protocol = "icmp"
+source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+}
+
 # allow ssh, internal flannel, internal node-exporter, internal kubelet
 inbound_rule {
 protocol = "tcp"
@@ -13,12 +25,27 @@ resource "digitalocean_firewall" "rules" {
 source_addresses = ["0.0.0.0/0", "::/0"]
 }

+# Cilium health
+inbound_rule {
+protocol = "tcp"
+port_range = "4240"
+source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+}
+
+# IANA vxlan (flannel, calico)
 inbound_rule {
 protocol = "udp"
 port_range = "4789"
 source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
 }

+# Linux vxlan (Cilium)
+inbound_rule {
+protocol = "udp"
+port_range = "8472"
+source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+}
+
 # Allow Prometheus to scrape node-exporter
 inbound_rule {
 protocol = "tcp"
@@ -33,6 +60,7 @@ resource "digitalocean_firewall" "rules" {
 source_tags = [digitalocean_tag.workers.name]
 }

+# Kubelet
 inbound_rule {
 protocol = "tcp"
 port_range = "10250"
@@ -2,6 +2,8 @@ output "kubeconfig-admin" {
   value = module.bootstrap.kubeconfig-admin
 }

+# Outputs for Kubernetes Ingress
+
 output "controllers_dns" {
   value = digitalocean_record.controllers[0].fqdn
 }
@@ -45,3 +47,10 @@ output "worker_tag" {
   value = digitalocean_tag.workers.name
 }
+
+# Outputs for custom load balancing
+
+output "vpc_id" {
+  description = "ID of the cluster VPC"
+  value = digitalocean_vpc.network.id
+}
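The `vpc_id` output added above is intended for wiring up custom load balancing. A minimal sketch of a consumer, assuming a cluster module named `nemo` (the module name, region, and ports are illustrative, not from this diff):

```tf
# Sketch: attach a custom DigitalOcean load balancer to the cluster VPC.
# Assumes a Typhoon cluster module named "nemo"; values are illustrative.
resource "digitalocean_loadbalancer" "custom" {
  name        = "custom-ingress"
  region      = "fra1"
  vpc_uuid    = module.nemo.vpc_id
  droplet_tag = module.nemo.worker_tag

  # forward TCP 80 to a NodePort on worker droplets
  forwarding_rule {
    entry_protocol  = "tcp"
    entry_port      = 80
    target_protocol = "tcp"
    target_port     = 30080
  }
}
```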
@@ -3,8 +3,8 @@
 terraform {
   required_version = "~> 0.12.6"
   required_providers {
-    digitalocean = "~> 1.3"
-    ct = "~> 0.3"
+    digitalocean = "~> 1.16"
+    ct = "~> 0.4"
     template = "~> 2.1"
     null = "~> 2.1"
   }
@@ -35,9 +35,10 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
-  # only official DigitalOcean images support IPv6
-  ipv6 = local.is_official_image
   private_networking = true
+  vpc_uuid = digitalocean_vpc.network.id
+  # only official DigitalOcean images support IPv6
+  ipv6 = local.is_official_image

   user_data = data.ct_config.worker-ignition.rendered
   ssh_keys = var.ssh_fingerprints
@@ -58,9 +59,9 @@ resource "digitalocean_tag" "workers" {

 # Worker Ignition config
 data "ct_config" "worker-ignition" {
   content = data.template_file.worker-config.rendered
-  pretty_print = false
+  strict = true
   snippets = var.worker_snippets
 }

 # Worker Container Linux config
@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.18.2 (upstream)
+* Kubernetes v1.18.6 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/) customization
 * Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=14d0b2087962a0f2557c184f3f523548ce19bbdc"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=835890025bfb3499aa0cd5a9704e9e7d3e7e5fe5"

   cluster_name = var.cluster_name
   api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -41,9 +41,10 @@ resource "digitalocean_droplet" "controllers" {
   size = var.controller_type

   # network
-  # TODO: Only official DigitalOcean images support IPv6
-  ipv6 = false
   private_networking = true
+  vpc_uuid = digitalocean_vpc.network.id
+  # TODO: Only official DigitalOcean images support IPv6
+  ipv6 = false

   user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
   ssh_keys = var.ssh_fingerprints
@@ -64,10 +65,10 @@ resource "digitalocean_tag" "controllers" {

 # Controller Ignition configs
 data "ct_config" "controller-ignitions" {
   count = var.controller_count
   content = data.template_file.controller-configs.*.rendered[count.index]
   strict = true
   snippets = var.controller_snippets
 }

 # Controller Fedora CoreOS configs
@@ -28,7 +28,7 @@ systemd:
           --network host \
           --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
           --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
-          quay.io/coreos/etcd:v3.4.7
+          quay.io/coreos/etcd:v3.4.9
         ExecStop=/usr/bin/podman stop etcd
         [Install]
         WantedBy=multi-user.target
@@ -50,11 +50,12 @@ systemd:
     - name: kubelet.service
       contents: |
         [Unit]
-        Description=Kubelet via Hyperkube (System Container)
+        Description=Kubelet (System Container)
         Requires=afterburn.service
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -81,10 +82,11 @@ systemd:
           --volume /var/log:/var/log \
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
-          quay.io/poseidon/kubelet:v1.18.2 \
+          $${KUBELET_IMAGE} \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
+          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
           --cgroup-driver=systemd \
           --cgroups-per-qos=true \
           --enforce-node-allocatable=pods \
@@ -92,17 +94,15 @@ systemd:
           --cluster_dns=${cluster_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
           --healthz-port=0 \
           --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
-          --kubeconfig=/etc/kubernetes/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
+          --kubeconfig=/var/lib/kubelet/kubeconfig \
           --network-plugin=cni \
-          --node-labels=node.kubernetes.io/master \
           --node-labels=node.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
-          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
+          --register-with-taints=node-role.kubernetes.io/controller=:NoSchedule \
+          --rotate-certificates \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/podman stop kubelet
         Delegate=yes
@@ -135,7 +135,7 @@ systemd:
           --volume /opt/bootstrap/assets:/assets:ro,Z \
           --volume /opt/bootstrap/apply:/apply:ro,Z \
           --entrypoint=/apply \
-          quay.io/poseidon/kubelet:v1.18.2
+          quay.io/poseidon/kubelet:v1.18.6
         ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -158,11 +158,11 @@ storage:
         chmod -R 500 /etc/ssl/etcd
         mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
         mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
-        sudo mkdir -p /etc/kubernetes/manifests
-        sudo mv static-manifests/* /etc/kubernetes/manifests/
-        sudo mkdir -p /opt/bootstrap/assets
-        sudo mv manifests /opt/bootstrap/assets/manifests
-        sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
+        mkdir -p /etc/kubernetes/manifests
+        mv static-manifests/* /etc/kubernetes/manifests/
+        mkdir -p /opt/bootstrap/assets
+        mv manifests /opt/bootstrap/assets/manifests
+        mv manifests-networking/* /opt/bootstrap/assets/manifests/
         rm -rf assets auth static-manifests tls manifests-networking
     - path: /opt/bootstrap/apply
       mode: 0544
@@ -182,6 +182,11 @@ storage:
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
+    - path: /etc/sysctl.d/reverse-path-filter.conf
+      contents:
+        inline: |
+          net.ipv4.conf.default.rp_filter=0
+          net.ipv4.conf.*.rp_filter=0
     - path: /etc/systemd/system.conf.d/accounting.conf
       contents:
         inline: |
@@ -21,11 +21,12 @@ systemd:
       enabled: true
       contents: |
         [Unit]
-        Description=Kubelet via Hyperkube (System Container)
+        Description=Kubelet (System Container)
         Requires=afterburn.service
         After=afterburn.service
         Wants=rpc-statd.service
         [Service]
+        Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.18.6
         EnvironmentFile=/run/metadata/afterburn
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -52,10 +53,11 @@ systemd:
           --volume /var/log:/var/log \
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
-          quay.io/poseidon/kubelet:v1.18.2 \
+          $${KUBELET_IMAGE} \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
+          --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
           --cgroup-driver=systemd \
           --cgroups-per-qos=true \
           --enforce-node-allocatable=pods \
@@ -63,15 +65,14 @@ systemd:
           --cluster_dns=${cluster_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
-          --exit-on-lock-contention \
           --healthz-port=0 \
           --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
-          --kubeconfig=/etc/kubernetes/kubeconfig \
-          --lock-file=/var/run/lock/kubelet.lock \
+          --kubeconfig=/var/lib/kubelet/kubeconfig \
           --network-plugin=cni \
           --node-labels=node.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
+          --rotate-certificates \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/podman stop kubelet
         Delegate=yes
@@ -97,7 +98,7 @@ systemd:
         Type=oneshot
         RemainAfterExit=true
         ExecStart=/bin/true
-        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.2 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
+        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.6 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
         [Install]
         WantedBy=multi-user.target
 storage:
@@ -108,6 +109,11 @@ storage:
       contents:
         inline: |
           fs.inotify.max_user_watches=16184
+    - path: /etc/sysctl.d/reverse-path-filter.conf
+      contents:
+        inline: |
+          net.ipv4.conf.default.rp_filter=0
+          net.ipv4.conf.*.rp_filter=0
     - path: /etc/systemd/system.conf.d/accounting.conf
       contents:
         inline: |
@@ -1,3 +1,10 @@
+# Network VPC
+resource "digitalocean_vpc" "network" {
+  name = var.cluster_name
+  region = var.region
+  description = "Network for ${var.cluster_name} cluster"
+}
+
 resource "digitalocean_firewall" "rules" {
   name = var.cluster_name

@@ -6,6 +13,11 @@ resource "digitalocean_firewall" "rules" {
     digitalocean_tag.workers.name
   ]

+  inbound_rule {
+    protocol = "icmp"
+    source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+  }
+
   # allow ssh, internal flannel, internal node-exporter, internal kubelet
   inbound_rule {
     protocol = "tcp"
@@ -13,12 +25,27 @@ resource "digitalocean_firewall" "rules" {
     source_addresses = ["0.0.0.0/0", "::/0"]
   }

+  # Cilium health
+  inbound_rule {
+    protocol = "tcp"
+    port_range = "4240"
+    source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+  }
+
+  # IANA vxlan (flannel, calico)
   inbound_rule {
     protocol = "udp"
     port_range = "4789"
     source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
   }

+  # Linux vxlan (Cilium)
+  inbound_rule {
+    protocol = "udp"
+    port_range = "8472"
+    source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
+  }
+
   # Allow Prometheus to scrape node-exporter
   inbound_rule {
     protocol = "tcp"
@@ -33,6 +60,7 @@ resource "digitalocean_firewall" "rules" {
     source_tags = [digitalocean_tag.workers.name]
   }

+  # Kubelet
   inbound_rule {
     protocol = "tcp"
     port_range = "10250"
@@ -2,6 +2,8 @@ output "kubeconfig-admin" {
   value = module.bootstrap.kubeconfig-admin
 }

+# Outputs for Kubernetes Ingress
+
 output "controllers_dns" {
   value = digitalocean_record.controllers[0].fqdn
 }
@@ -45,3 +47,9 @@ output "worker_tag" {
   value = digitalocean_tag.workers.name
 }
+
+# Outputs for custom load balancing
+output "vpc_id" {
+  description = "ID of the cluster VPC"
+  value = digitalocean_vpc.network.id
+}
@@ -3,8 +3,8 @@
 terraform {
   required_version = "~> 0.12.6"
   required_providers {
-    digitalocean = "~> 1.3"
-    ct = "~> 0.3"
+    digitalocean = "~> 1.16"
+    ct = "~> 0.4"
     template = "~> 2.1"
     null = "~> 2.1"
   }
@@ -37,9 +37,10 @@ resource "digitalocean_droplet" "workers" {
   size = var.worker_type

   # network
-  # TODO: Only official DigitalOcean images support IPv6
-  ipv6 = false
   private_networking = true
+  vpc_uuid = digitalocean_vpc.network.id
+  # TODO: Only official DigitalOcean images support IPv6
+  ipv6 = false

   user_data = data.ct_config.worker-ignition.rendered
   ssh_keys = var.ssh_fingerprints
@@ -60,9 +61,9 @@ resource "digitalocean_tag" "workers" {

 # Worker Ignition config
 data "ct_config" "worker-ignition" {
   content = data.template_file.worker-config.rendered
   strict = true
   snippets = var.worker_snippets
 }

 # Worker Fedora CoreOS config
@@ -174,3 +174,34 @@ module "nemo" {

 To customize low-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootstrap](https://github.com/poseidon/terraform-render-bootstrap) Terraform module.
+
+## Kubelet
+
+Typhoon publishes Kubelet [container images](/topics/security/#container-images) to Quay.io (default) and to Dockerhub (in case of a Quay [outage](https://github.com/poseidon/typhoon/issues/735) or breach). Quay automated builds also provide the option for fully verifiable tagged images (`build-{short_sha}`).
+
+To set an alternative Kubelet image, use a snippet to set a systemd dropin.
+
+```
+# host-image-override.yaml
+variant: fcos          <- remove for Flatcar Linux
+version: 1.0.0         <- remove for Flatcar Linux
+systemd:
+  units:
+    - name: kubelet.service
+      dropins:
+        - name: 10-image-override.conf
+          contents: |
+            [Service]
+            Environment=KUBELET_IMAGE=docker.io/psdn/kubelet:v1.18.3
+```
+
+```
+module "nemo" {
+  ...
+
+  worker_snippets = [
+    file("./snippets/host-image-override.yaml")
+  ]
+  ...
+}
+```
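The `<- remove for Flatcar Linux` notes in the snippet above mean the same override works as a Container Linux Config by dropping the `variant`/`version` keys. A sketch of that Flatcar variant (the image tag is the same illustrative value as in the example):

```
# host-image-override.yaml (Container Linux Config variant, a sketch)
systemd:
  units:
    - name: kubelet.service
      dropins:
        - name: 10-image-override.conf
          contents: |
            [Service]
            Environment=KUBELET_IMAGE=docker.io/psdn/kubelet:v1.18.3
```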
@@ -13,7 +13,7 @@ Internal Terraform Modules:

 ## AWS

-Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
+Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster). Define a worker pool using the AWS internal `workers` module.

 ```tf
 module "tempest-worker-pool" {
@@ -65,7 +65,8 @@ The AWS internal `workers` module supports a number of [variables](https://githu
 |:-----|:------------|:--------|:--------|
 | worker_count | Number of instances | 1 | 3 |
 | instance_type | EC2 instance type | "t3.small" | "t3.medium" |
-| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alph, coreos-stable, coreos-beta, coreos-alpha |
+| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alph, flatcar-edge |
+| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
 | disk_size | Size of the EBS volume in GB | 40 | 100 |
 | disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
 | disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
@@ -78,11 +79,11 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ

 ## Azure

-Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define a worker pool using the Azure internal `workers` module.
+Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluster). Define a worker pool using the Azure internal `workers` module.

 ```tf
 module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.18.2"
+  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.18.6"

   # Azure
   region = module.ramius.region
@@ -134,7 +135,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
 |:-----|:------------|:--------|:--------|
 | worker_count | Number of instances | 1 | 3 |
 | vm_type | Machine type for instances | "Standard_DS1_v2" | See below |
-| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, coreos-stable, coreos-beta, coreos-alpha |
+| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha, flatcar-edge |
 | priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | "Regular" | "Spot" |
 | snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@@ -144,11 +145,11 @@ Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricin

 ## Google Cloud

-Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
+Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.

 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.18.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.18.6"

   # Google Cloud
   region = "europe-west2"
@@ -179,11 +180,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
 ```
 $ kubectl get nodes
 NAME STATUS AGE VERSION
-yavin-controller-0.c.example-com.internal Ready 6m v1.18.2
-yavin-worker-jrbf.c.example-com.internal Ready 5m v1.18.2
-yavin-worker-mzdm.c.example-com.internal Ready 5m v1.18.2
-yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.18.2
-yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.18.2
+yavin-controller-0.c.example-com.internal Ready 6m v1.18.6
+yavin-worker-jrbf.c.example-com.internal Ready 5m v1.18.6
+yavin-worker-mzdm.c.example-com.internal Ready 5m v1.18.6
+yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.18.6
+yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.18.6
 ```

 ### Variables
@@ -199,7 +200,7 @@ The Google Cloud internal `workers` module supports a number of [variables](http
 | region | Region for the worker pool instances. May differ from the cluster's region | "europe-west2" |
 | network | Must be set to `network_name` output by cluster | module.cluster.network_name |
 | kubeconfig | Must be set to `kubeconfig` output by cluster | module.cluster.kubeconfig |
-| os_image | Container Linux image for compute instances | "fedora-coreos-or-flatcar-image", coreos-stable, coreos-beta, coreos-alpha |
+| os_image | Container Linux image for compute instances | "uploaded-flatcar-image" |
 | ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |

 Check the list of regions [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
@@ -210,6 +211,7 @@ Check the list of regions [docs](https://cloud.google.com/compute/docs/regions-z
 |:-----|:------------|:--------|:--------|
 | worker_count | Number of instances | 1 | 3 |
 | machine_type | Compute instance machine type | "n1-standard-1" | See below |
+| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
 | disk_size | Size of the disk in GB | 40 | 100 |
 | preemptible | If true, Compute Engine will terminate instances randomly within 24 hours | false | true |
 | snippets | Container Linux Config snippets | [] | [examples](/advanced/customization/) |
@@ -30,6 +30,7 @@ Add a DigitalOcean load balancer to distribute IPv4 TCP traffic (HTTP/HTTPS Ingr
 resource "digitalocean_loadbalancer" "ingress" {
   name = "ingress"
   region = "fra1"
+  vpc_uuid = module.nemo.vpc_id
   droplet_tag = module.nemo.worker_tag

   healthcheck {
@@ -1,6 +1,6 @@
 # Operating Systems

-Typhoon supports [Fedora CoreOS](https://getfedora.org/coreos/), [Flatcar Linux](https://www.flatcar-linux.org/) and Container Linux (EOL in May 2020). These operating systems were chosen because they offer:
+Typhoon supports [Fedora CoreOS](https://getfedora.org/coreos/) and [Flatcar Linux](https://www.flatcar-linux.org/). These operating systems were chosen because they offer:

 * Minimalism and focus on clustered operation
 * Automated and atomic operating system upgrades
@@ -1,6 +1,6 @@
 # AWS

-In this tutorial, we'll create a Kubernetes v1.18.2 cluster on AWS with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on AWS with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@@ -49,7 +49,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

 ```tf
 provider "aws" {
-  version = "2.53.0"
+  version = "2.70.0"
   region  = "eu-central-1"
   shared_credentials_file = "/home/user/.config/aws/credentials"
 }
@@ -70,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

 ```tf
 module "tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.2"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.6"

   # AWS
   cluster_name = "tempest"
@@ -143,9 +143,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/tempest-config
 $ kubectl get nodes
 NAME           STATUS  ROLES   AGE  VERSION
-ip-10-0-3-155  Ready   <none>  10m  v1.18.2
-ip-10-0-26-65  Ready   <none>  10m  v1.18.2
-ip-10-0-41-21  Ready   <none>  10m  v1.18.2
+ip-10-0-3-155  Ready   <none>  10m  v1.18.6
+ip-10-0-26-65  Ready   <none>  10m  v1.18.6
+ip-10-0-41-21  Ready   <none>  10m  v1.18.6
 ```

 List the pods.
@@ -208,7 +208,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
 | worker_count | Number of workers | 1 | 3 |
 | controller_type | EC2 instance type for controllers | "t3.small" | See below |
 | worker_type | EC2 instance type for workers | "t3.small" | See below |
-| os_image | AMI channel for Fedora CoreOS | not yet used | ? |
+| os_stream | Fedora CoreOS stream for compute instances | "stable" | "testing", "next" |
 | disk_size | Size of the EBS volume in GB | 40 | 100 |
 | disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
 | disk_iops | IOPS of the EBS volume | 0 (i.e. auto) | 400 |
@@ -216,7 +216,7 @@ Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.
 | worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
 | controller_snippets | Controller Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
 | worker_snippets | Worker Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
-| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
 | network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
 | host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
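With "cilium" now accepted by the `networking` variable, opting in is a one-line setting on the cluster module. A sketch against the `tempest` module defined earlier, with the other required variables omitted for brevity:

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.18.6"

  # choose Cilium instead of the default Calico CNI
  networking = "cilium"

  # ...other required variables as shown in the tutorial above...
}
```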
@@ -1,6 +1,6 @@
 # Azure

-In this tutorial, we'll create a Kubernetes v1.18.2 cluster on Azure with Fedora CoreOS.
+In this tutorial, we'll create a Kubernetes v1.18.6 cluster on Azure with Fedora CoreOS.

 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -47,7 +47,7 @@ Configure the Azure provider in a `providers.tf` file.

 ```tf
 provider "azurerm" {
-  version = "2.5.0"
+  version = "2.19.0"
 }

 provider "ct" {
@@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

 ```tf
 module "ramius" {
-  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.2"
+  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.6"

   # Azure
   cluster_name = "ramius"
@@ -158,9 +158,9 @@ List nodes in the cluster.
 $ export KUBECONFIG=/home/user/.kube/configs/ramius-config
 $ kubectl get nodes
 NAME                  STATUS  ROLES   AGE  VERSION
-ramius-controller-0   Ready   <none>  24m  v1.18.2
-ramius-worker-000001  Ready   <none>  25m  v1.18.2
-ramius-worker-000002  Ready   <none>  24m  v1.18.2
+ramius-controller-0   Ready   <none>  24m  v1.18.6
+ramius-worker-000001  Ready   <none>  25m  v1.18.6
+ramius-worker-000002  Ready   <none>  24m  v1.18.6
 ```

 List the pods.
@@ -242,7 +242,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
 | worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
 | controller_snippets | Controller Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
 | worker_snippets | Worker Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
-| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
+| networking | Choice of networking provider | "calico" | "calico" or "cilium" or "flannel" |
 | host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |