Compare commits


45 Commits

0c53ad52e4 Update recommended Terraform versions and providers
* Sync the documented Terraform versions and provider
plugin versions to those that are actively used/tested
by the author
2020-02-13 14:39:48 -08:00
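For context, the recommended versions are pinned in Terraform configuration along these lines; the sketch below is illustrative only, since the concrete version numbers live in the docs rather than in this changeset.

```tf
# Illustrative sketch: pin Terraform and a provider plugin to tested ranges.
# The version numbers below are placeholders, not the documented ones.
terraform {
  required_version = "~> 0.12.6"
}

provider "aws" {
  version = "~> 2.48"
  region  = "us-east-1"
}
```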
008817b0aa Promote Fedora CoreOS AWS/bare-metal to beta
* Remove alpha warnings from docs headers
2020-02-13 14:25:22 -08:00
49d3b9e6b3 Set docker log driver to json-file on Fedora CoreOS
* Fix the last minor issue for Fedora CoreOS clusters to pass CNCF's
Kubernetes conformance tests
* Kubelet supports a seldom-used feature, `kubectl logs --limit-bytes=N`,
to trim a log stream to a desired length. Kubelet handles this in the
CRI driver. The Kubelet docker shim only supports the limit bytes
feature when Docker is configured with the default `json-file` logging
driver
* CNCF conformance tests started requiring limit-bytes be supported,
indirectly forcing the log driver choice until either the Kubelet or
the conformance tests are fixed
* Fedora CoreOS defaults Docker to use `journald` (desired). For now,
as a workaround to offer conformant clusters, the log driver can
be set back to `json-file`. RHEL CoreOS likely won't have noticed the
non-conformance since it's using the CRI-O runtime
* https://github.com/kubernetes/kubernetes/issues/86367

Note: When upstream has a fix, the aim is to drop the docker config
override and use the journald default
2020-02-11 23:00:38 -08:00
1243f395d1 Update Kubernetes from v1.17.2 to v1.17.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1173
2020-02-11 20:22:14 -08:00
846f11097f Update Fedora CoreOS kernel arguments to align with upstream
* Align bare-metal kernel arguments with upstream docs
* Add missing initrd argument which can cause issues if
not present. Fix #638
* Add tty0 and ttyS0 consoles (matches Container Linux)
* Remove unused coreos.inst=yes

Related: https://docs.fedoraproject.org/en-US/fedora-coreos/bare-metal/
2020-02-11 20:11:19 -08:00
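As a rough sketch of the resulting PXE profile (the resource name, image filenames, and URLs below are hypothetical, not copied from the module), the argument list gains the initrd and console entries and drops `coreos.inst=yes`:

```tf
# Hypothetical sketch of a bare-metal PXE profile after this change; names,
# image filenames, and URLs are placeholders.
resource "matchbox_profile" "fedora-coreos-install" {
  name   = "fedora-coreos-install"
  kernel = "fedora-coreos-live-kernel-x86_64"
  initrd = ["fedora-coreos-live-initramfs.x86_64.img"]

  args = [
    "ip=dhcp",
    "rd.neednet=1",
    "initrd=fedora-coreos-live-initramfs.x86_64.img", # previously missing
    "console=tty0",                                   # matches Container Linux
    "console=ttyS0",
    "coreos.inst.install_dev=sda",
    "coreos.inst.ignition_url=http://matchbox.example.com:8080/ignition",
    # coreos.inst=yes removed (unused)
  ]
}
```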
ba84f86dc7 Add guide for Typhoon with Flatcar Linux on Google Cloud
* Add docs on manually uploading a Flatcar Linux GCE/GCP gzipped
tarball image as a Compute Engine image for use with the Typhoon
container-linux module
* Set status of Flatcar Linux on Google Cloud to alpha
2020-02-11 19:38:40 -08:00
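The guide covers a manual upload; as a non-authoritative alternative sketch, the tarball can also be registered as a Compute Engine image with Terraform (bucket and object names below are hypothetical):

```tf
# Sketch only: register an uploaded Flatcar Linux GCE tarball as a Compute
# Engine image for use with the container-linux module. Names are placeholders.
resource "google_compute_image" "flatcar-stable" {
  name = "flatcar-linux-stable"

  raw_disk {
    source = "https://storage.googleapis.com/example-bucket/flatcar_production_gce.tar.gz"
  }
}
```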
b49a1d715d Update docs generation packages
* Update mkdocs-material from v4.6.0 to v4.6.2
2020-02-08 15:12:12 -08:00
34c3d7cc39 Update Grafana from v6.6.0 to v6.6.1
* https://github.com/grafana/grafana/releases/tag/v6.6.1
2020-02-08 14:50:33 -08:00
ca96a1335c Update Calico from v3.11.2 to v3.12.0
* https://docs.projectcalico.org/release-notes/#v3120
* Remove reverse packet filter override, since Calico no
longer relies on the setting
* https://github.com/coreos/fedora-coreos-tracker/issues/219
* https://github.com/projectcalico/felix/pull/2189
2020-02-06 00:43:33 -08:00
e339fbd2b6 Update kube-state-metrics from v1.9.3 to v1.9.4
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.4
2020-02-04 21:33:34 -08:00
8cc303c9ac Add module for Fedora CoreOS on Google Cloud
* Add Typhoon Fedora CoreOS on Google Cloud as alpha
* Add docs on uploading the Fedora CoreOS GCP gzipped tarball to
Google Cloud storage to create a boot disk image
2020-02-01 15:21:40 -08:00
b19ba16afa Update nginx-ingress from v0.27.1 to v0.28.0
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.28.0
2020-01-30 18:00:23 -08:00
d127a7345c Update Grafana from v6.5.3 to v6.6.0
* https://github.com/grafana/grafana/releases/tag/v6.6.0
2020-01-27 20:46:32 -08:00
02a470d2f2 Fix minor typo in announcement date 2020-01-23 08:57:01 -08:00
5643ad525f Promote Fedora CoreOS from preview to alpha in docs
* Add an announcement to the website as well
2020-01-23 08:47:18 -08:00
d5b7ce8f27 Update kube-state-metrics from v1.9.2 to v1.9.3
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.3
2020-01-23 00:03:16 -08:00
1cda5bcd2a Update Kubernetes from v1.17.1 to v1.17.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1172
2020-01-21 18:27:39 -08:00
bda73264f7 Update nginx-ingress from v0.26.1 to v0.27.1
* Change runAsUser from 33 to 101 for new alpine-based image
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.27.0
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.27.1
2020-01-20 15:22:16 -08:00
dd930a2ff9 Update bare-metal Fedora CoreOS image location
* Use Fedora CoreOS production download streams (change)
* Use live PXE kernel and initramfs images
* https://getfedora.org/coreos/download/
* Update docs example to use public images (cache is still
recommended at large scale) and stable stream
2020-01-20 14:44:06 -08:00
03ff3a9cf3 Update kube-state-metrics from v1.9.1 to v1.9.2
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.2
2020-01-18 15:32:10 -08:00
48703f9906 Update Grafana from v6.5.2 to v6.5.3
* https://github.com/grafana/grafana/releases/tag/v6.5.3
2020-01-18 15:30:39 -08:00
7ddd3d096d Fix link in maintenance docs
* Also fix a version mention, since Terraform v0.12 was
added in Typhoon v1.15.0
2020-01-18 15:19:27 -08:00
7daabd28b5 Update Calico from v3.11.1 to v3.11.2
* https://docs.projectcalico.org/v3.11/release-notes/
2020-01-18 13:45:24 -08:00
b642e3b41b Update Kubernetes from v1.17.0 to v1.17.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1171
2020-01-14 20:21:36 -08:00
ac786a2efc Update AWS Fedora CoreOS AMI filter for fedora-coreos-31
* Select the most recent fedora-coreos-31 AMI on AWS, instead
of the most recent fedora-coreos-30 AMI (Nov 27, 2019)
* Evaluated with fedora-coreos-31.20200108.2.0-hvm
2020-01-14 20:06:14 -08:00
073fcb7067 Fix bare-metal instruction for watching install to disk
* Original instructions were to watch install to disk by SSH'ing
via port 2222 following Typhoon v1.10.1. Restore that message,
since the version number in the instruction was incorrectly bumped
on each release
2020-01-12 14:16:00 -08:00
ce0569e03b Remove unneeded Kubelet /var/run mount on Fedora CoreOS
* /var/run symlinks to /run (already mounted)
2020-01-11 15:15:39 -08:00
0e2fc89f78 Update kube-state-metrics from v1.9.0 to v1.9.1
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.1
2020-01-11 14:15:55 -08:00
b1f521fc4a Allow terraform-provider-google v3.x plugin versions
* Typhoon Google Cloud is compatible with `terraform-provider-google`
v3.x releases
* No v3.x specific features are used, so v2.19+ provider versions are
still allowed, to ease migrations
2020-01-11 14:07:18 -08:00
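Concretely, a provider constraint along these lines accepts both series (the upper bound shown is illustrative):

```tf
# Sketch: allow terraform-provider-google v2.19+ or v3.x releases.
provider "google" {
  version = ">= 2.19, < 4.0"
  project = "example-project"
  region  = "us-central1"
}
```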
73588cfad3 Update Prometheus from v2.15.1 to v2.15.2
* https://github.com/prometheus/prometheus/releases/tag/v2.15.2
2020-01-06 22:08:34 -08:00
0223b31e1a Ensure /etc/kubernetes exists following Kubelet inlining
* Inlining the Kubelet service removed the need for the
kubelet.env file declared in Ignition. However, on some
platforms, this removed the guarantee that /etc/kubernetes
exists. Bare-metal and DigitalOcean distribute the kubelet
kubeconfig through the Terraform file provisioner (scp) and
place it in the (now missing) /etc/kubernetes
* https://github.com/poseidon/typhoon/pull/606
* Fix bare-metal and DigitalOcean Ignition to ensure the
desired directory exists following first boot from disk
* Cloud platforms with worker pools distribute the kubeconfig
through Ignition user data (no impact or need)
2020-01-06 21:38:20 -08:00
bb586b60da Reduce Prometheus addon's node-exporter tolerations
* Change node-exporter DaemonSet tolerations from tolerating
all possible NoSchedule taints to tolerating the master taint
and the not ready taint (we'd like metrics regardless)
* Users who add custom node taints must add their custom taints
to the addon node-exporter DaemonSet. As an addon, it's expected
users copy and manipulate manifests out-of-band in their own
systems
2020-01-06 21:24:24 -08:00
43e05b9131 Enable kube-proxy metrics and allow Prometheus scrapes
* Configure kube-proxy --metrics-bind-address=0.0.0.0 (default
127.0.0.1) to serve metrics on 0.0.0.0:10249
* Add firewall rules to allow Prometheus (resides on a worker) to
scrape kube-proxy service endpoints on controllers or workers
* Add a clusterIP: None service for kube-proxy endpoint discovery
2020-01-06 21:11:18 -08:00
b2eb3e05d0 Disable Kubelet 127.0.0.1:10248 healthz endpoint
* Kubelet runs a healthz server listening on 127.0.0.1:10248
by default. It's unused by Typhoon and can be disabled
* https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
2019-12-29 11:23:25 -08:00
f1f4cd6fc0 Inline Container Linux kubelet.service, deprecate kubelet-wrapper
* Change kubelet.service on Container Linux nodes to ExecStart Kubelet
inline to replace the use of the host OS kubelet-wrapper script
* Express rkt run flags and volume mounts in a clear, uniform way to
make the Kubelet service easier to audit, manage, and understand
* Eliminate reliance on a Container Linux kubelet-wrapper script
* Typhoon for Fedora CoreOS developed a kubelet.service that similarly
uses an inline ExecStart (except with podman instead of rkt) and a
more minimal set of volume mounts. Adopt the volume improvements:
  * Change Kubelet /etc/kubernetes volume to read-only
  * Change Kubelet /etc/resolv.conf volume to read-only
  * Remove unneeded /var/lib/cni volume mount

Background:

* kubelet-wrapper was added in CoreOS around the time of Kubernetes v1.0
to simplify running a CoreOS-built hyperkube ACI image via rkt-fly. The
script defaults are no longer ideal (e.g. rkt's notion of trust dates
back to quay.io ACI image serving and signing, which informed the OCI
standard images we use today, though they still lack rkt's signing ideas).
* Shipping kubelet-wrapper was regretted at CoreOS, but remains in the
distro for compatibility. The script is not updated to track hyperkube
changes, but it is stable and kubelet.env overrides bridge most gaps
* Typhoon Container Linux nodes have used kubelet-wrapper to run the Kubelet
via rkt/rkt-fly with the official k8s.gcr.io hyperkube image, using overrides
(new image registry, new image format, restart handling, new mounts, new
entrypoint in v1.17).
* Observation: Most of what it takes to run a Kubelet container is defined
in Typhoon, not in kubelet-wrapper. The wrapper's value is now undermined
by having to work around its dated defaults. Typhoon may be better served
by defining kubelet.service explicitly
* Typhoon for Fedora CoreOS developed a kubelet.service without a host OS
kubelet-wrapper, which is both clearer and eliminates some volume mounts
2019-12-29 11:17:26 -08:00
50db3d0231 Rename CLC files and favor Terraform list index syntax
* Rename Container Linux Config (CLC) files to *.yaml to align
with Fedora CoreOS Config (FCC) files and for syntax highlighting
* Replace common uses of Terraform `element` (which wraps around)
with `list[index]` syntax to surface index errors
2019-12-28 12:14:01 -08:00
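For illustration, the behavioral difference between the two forms (values are made up):

```tf
# element() wraps out-of-range indexes around, which can hide off-by-one
# errors; the list[index] form raises an error instead.
locals {
  zones = ["us-central1-a", "us-central1-b", "us-central1-c"]

  wrapped = element(local.zones, 3) # wraps around to "us-central1-a"
  direct  = local.zones[1]          # "us-central1-b"; local.zones[3] errors
}
```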
11565ffa8a Update Calico from v3.10.2 to v3.11.1
* https://docs.projectcalico.org/v3.11/release-notes/
2019-12-28 11:08:03 -08:00
a4e843693f Update Prometheus from v2.15.0 to v2.15.1
* https://github.com/prometheus/prometheus/releases/tag/v2.15.1
2019-12-26 09:12:55 -05:00
f48e43c0b1 Update Prometheus from v2.14.0 to v2.15.0
* https://github.com/prometheus/prometheus/releases/tag/v2.15.0
2019-12-24 10:52:19 -05:00
daa8d9d9ec Update CoreDNS from v1.6.5 to v1.6.6
* https://coredns.io/2019/12/11/coredns-1.6.6-release/
2019-12-22 10:47:19 -05:00
52d11096dc Update kube-state-metrics from v1.9.0-rc.1 to v1.9.0
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.0
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.0-rc.1
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.0-rc.0
2019-12-20 13:53:37 -08:00
00c431a9d2 Add Kubelet kubeconfig output for DigitalOcean
* Allow the raw kubelet kubeconfig to be consumed via
Terraform output
2019-12-18 23:20:55 -08:00
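A minimal sketch of consuming that output from a cluster definition (the module and output names here are assumptions for illustration, not the module's documented interface):

```tf
# Sketch: surface the kubelet kubeconfig from an existing DigitalOcean cluster
# module declared elsewhere as module "nemo".
output "kubelet-kubeconfig" {
  value     = module.nemo.kubeconfig
  sensitive = true
}
```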
0ecb995890 Update kube-state-metrics from v1.8.0 to v1.9.0-rc.1
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.0-rc.1
* https://github.com/kubernetes/kube-state-metrics/releases/tag/v1.9.0-rc.0
2019-12-14 17:20:49 -08:00
1b9fa2e688 Update Grafana from v6.5.1 to v6.5.2
* https://github.com/grafana/grafana/releases/tag/v6.5.2
2019-12-14 15:25:48 -08:00
2d8e367664 Update mkdocs-material from v4.5.1 to v4.6.0 2019-12-14 15:02:28 -08:00
110 changed files with 2651 additions and 552 deletions

View File

@ -4,6 +4,79 @@ Notable changes between versions.
## Latest
* Kubernetes [v1.17.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v1173)
* Update Calico from v3.11.2 to v3.12.0
* Allow Fedora CoreOS clusters to pass CNCF conformance suite
* Set Docker log driver to `json-file` as a workaround
#### AWS
* Promote Fedora CoreOS to beta
#### Bare-Metal
* Promote Fedora CoreOS to beta
* Add Fedora CoreOS kernel arguments initrd and console ([#640](https://github.com/poseidon/typhoon/pull/640))
#### Google Cloud
* Add initial Terraform module for Fedora CoreOS ([#632](https://github.com/poseidon/typhoon/pull/632))
* Add initial support for Flatcar Container Linux ([#639](https://github.com/poseidon/typhoon/pull/639))
#### Addons
* Update nginx-ingress from v0.27.1 to v0.28.0
* Update kube-state-metrics from v1.9.3 to v1.9.4
* Update Grafana from v6.5.3 to v6.6.1
## v1.17.2
* Kubernetes [v1.17.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1172)
#### AWS
* Promote Fedora CoreOS from preview to alpha
#### Bare-Metal
* Promote Fedora CoreOS from preview to alpha
* Update Fedora CoreOS images location
* Use Fedora CoreOS production [download](https://getfedora.org/coreos/download/) streams
* Use live PXE kernel and initramfs images
#### Addons
* Update nginx-ingress from v0.26.1 to [v0.27.1](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.27.1) ([#625](https://github.com/poseidon/typhoon/pull/625))
* Change runAsUser from 33 to 101 for alpine-based image
* Update kube-state-metrics from v1.9.2 to v1.9.3
## v1.17.1
* Kubernetes [v1.17.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1171)
* Update CoreDNS from v1.6.5 to [v1.6.6](https://coredns.io/2019/12/11/coredns-1.6.6-release/) ([#602](https://github.com/poseidon/typhoon/pull/602))
* Update Calico from v3.10.2 to v3.11.2 ([#604](https://github.com/poseidon/typhoon/pull/604))
* Inline Kubelet service on Container Linux nodes ([#606](https://github.com/poseidon/typhoon/pull/606))
* Disable unused Kubelet `127.0.0.1:10248` healthz listener ([#607](https://github.com/poseidon/typhoon/pull/607))
* Enable kube-proxy metrics and allow Prometheus scrapes
* Allow TCP/10249 traffic with worker node sources
#### AWS
* Update Fedora CoreOS AMI filter for fedora-coreos-31 ([#620](https://github.com/poseidon/typhoon/pull/620))
#### Google
* Allow `terraform-provider-google` v3.0+ ([#617](https://github.com/poseidon/typhoon/pull/617))
* Only enforce `v2.19+` to ease migration, as no v3.x features are used
#### Addons
* Update Prometheus from v2.14.0 to [v2.15.2](https://github.com/prometheus/prometheus/releases/tag/v2.15.2)
* Add discovery for kube-proxy service endpoints
* Update kube-state-metrics from v1.8.0 to v1.9.2
* Reduce node-exporter DaemonSet tolerations ([#614](https://github.com/poseidon/typhoon/pull/614))
* Update Grafana from v6.5.1 to v6.5.3
## v1.17.0
* Kubernetes [v1.17.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.17.md#v1170)
@ -226,7 +299,7 @@ Notable changes between versions.
* Require `terraform-provider-azurerm` v1.27+ to support Terraform v0.12 (action required)
* Avoid unneeded rotations of Regular priority virtual machine scale sets
* Azure only allows `eviction_policy` to be set for Low priority VMs. Supporting Low priority VMs meant when Regular VMs were used, each `terraform apply` rolled workers, to set eviction_policy to null.
* Terraform v0.12 nullable variables fix the issue so plan does not produce a diff.
#### Bare-Metal
@ -281,7 +354,7 @@ Notable changes between versions.
* Update Grafana from v6.1.6 to v6.2.1
## v1.14.2
* Kubernetes [v1.14.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#v1142)
* Update etcd from v3.3.12 to [v3.3.13](https://github.com/etcd-io/etcd/releases/tag/v3.3.13)
* Upgrade Calico from v3.6.1 to [v3.7.2](https://docs.projectcalico.org/v3.7/release-notes/)
@ -352,7 +425,7 @@ Notable changes between versions.
* Add ability to load balance TCP/UDP applications ([#442](https://github.com/poseidon/typhoon/pull/442))
* Add worker instances to a target pool, output as `worker_target_pool`
* Health check for workers with Ingress controllers. Forward rules don't support differing internal/external ports, but some Ingress controllers support TCP/UDP proxy as a workaround
* Remove Haswell minimum CPU platform requirement ([#439](https://github.com/poseidon/typhoon/pull/439))
* Google Cloud API implements `min_cpu_platform` to mean "use exactly this CPU". Revert [#405](https://github.com/poseidon/typhoon/pull/405) added in v1.13.4.
* Fix error creating clusters in new regions without Haswell (e.g. europe-west2) ([#438](https://github.com/poseidon/typhoon/issues/438))
@ -537,7 +610,7 @@ Notable changes between versions.
* Update Calico from v3.3.0 to [v3.3.1](https://docs.projectcalico.org/v3.3/releases/)
* Disable Felix usage reporting by default ([#345](https://github.com/poseidon/typhoon/pull/345))
* Improve flannel manifests
* [Rename](https://github.com/poseidon/terraform-render-bootkube/commit/d045a8e6b8eccfbb9d69bb51953b5a93d23f67f7) `kube-flannel` DaemonSet to `flannel` and `kube-flannel-cfg` ConfigMap to `flannel-config`
* [Drop](https://github.com/poseidon/terraform-render-bootkube/commit/39f9afb3360ec642e5b98457c8bd07eda35b6c96) unused mounts and add a CPU resource request
* Update CoreDNS from v1.2.4 to [v1.2.6](https://coredns.io/2018/11/05/coredns-1.2.6-release/)
* Enable CoreDNS `loop` and `loadbalance` plugins ([#340](https://github.com/poseidon/typhoon/pull/340))
@ -699,7 +772,7 @@ Notable changes between versions.
* Force apiserver to stop listening on `127.0.0.1:8080`
* Replace `kube-dns` with [CoreDNS](https://coredns.io/) ([#261](https://github.com/poseidon/typhoon/pull/261))
* Edit the `coredns` ConfigMap to [customize](https://coredns.io/plugins/)
* CoreDNS doesn't use a resizer. For large clusters, scaling may be required.
#### AWS
@ -744,7 +817,7 @@ Notable changes between versions.
* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Users who exposed kube-apiserver on a WAN via their router/load-balancer will need to adjust its configuration (e.g. DNAT 6443). Most apiservers are on a LAN (internal, VPN-only, etc) so if you didn't specially configure network gear for 443, no change is needed. (possible action required)
* Fix possible deadlock when provisioning clusters larger than 10 nodes ([#244](https://github.com/poseidon/typhoon/pull/244))
#### DigitalOcean
@ -812,7 +885,7 @@ Notable changes between versions.
* Please change values stable, beta, or alpha to coreos-stable, coreos-beta, coreos-alpha (**action required!**)
* Replace `container_linux_version` variable with `os_version`
* Add `network_ip_autodetection_method` variable for Calico host IPv4 address detection
* Use Calico's default "first-found" to support single NIC and bonded NIC nodes
* Allow [alternative](https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods) methods for multi NIC nodes, like can-reach=IP or interface=REGEX
* Deprecate `container_linux_oem` variable
@ -845,7 +918,7 @@ Notable changes between versions.
#### Google Cloud
* Add support for multi-controller clusters (i.e. multi-master) ([#54](https://github.com/poseidon/typhoon/issues/54), [#190](https://github.com/poseidon/typhoon/pull/190))
* Switch from Google Cloud network load balancer to a TCP proxy load balancer. Avoid a [bug](https://issuetracker.google.com/issues/67366622) in Google network load balancers that limited clusters to only bootstrapping one controller node.
* Add TCP health check for apiserver pods on controllers. Replace kubelet check approximation.
#### Addons
@ -1076,7 +1149,7 @@ Notable changes between versions.
* Container Linux stable, beta, and alpha now provide Docker 17.09 (instead
of 1.12)
* Older clusters (with CLUO addon) auto-update Container Linux version to begin using Docker 17.09
* Fix race where `etcd-member.service` could fail to resolve peers ([#69](https://github.com/poseidon/typhoon/pull/69))
* Add optional `cluster_domain_suffix` variable (#74)
* Use kubernetes-incubator/bootkube v0.9.1

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@ -23,18 +23,27 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux / Flatcar Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| Azure | Container Linux | [azure/container-linux/kubernetes](azure/container-linux/kubernetes) | alpha |
| Bare-Metal | Container Linux / Flatcar Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
A preview of Typhoon for [Fedora CoreOS](https://getfedora.org/coreos/) is available for testing.
Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](aws/fedora-coreos/kubernetes) | preview |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | preview |
| AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](aws/fedora-coreos/kubernetes) | beta |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | beta |
| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | alpha |
Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org/releases/).
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| Bare-Metal | Flatcar Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | alpha |
## Documentation
@ -48,7 +57,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.3"
# Google Cloud
cluster_name = "yavin"
@ -58,7 +67,7 @@ module "yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
worker_preemptible = true
@ -87,9 +96,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.3
```
List the pods.

View File

@ -23,7 +23,7 @@ spec:
spec:
containers:
- name: grafana
image: docker.io/grafana/grafana:6.5.1
image: docker.io/grafana/grafana:6.6.1
env:
- name: GF_PATHS_CONFIG
value: "/etc/grafana/custom.ini"

View File

@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
args:
- /nginx-ingress-controller
- --ingress-class=public
@ -76,6 +76,6 @@ spec:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
runAsUser: 101 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 300

View File

@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
args:
- /nginx-ingress-controller
- --ingress-class=public
@ -76,6 +76,6 @@ spec:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
runAsUser: 101 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 300

View File

@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
args:
- /nginx-ingress-controller
- --ingress-class=public
@ -73,7 +73,7 @@ spec:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
runAsUser: 101 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 300

View File

@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
args:
- /nginx-ingress-controller
- --ingress-class=public
@ -76,6 +76,6 @@ spec:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
runAsUser: 101 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 300

View File

@ -22,7 +22,7 @@ spec:
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
args:
- /nginx-ingress-controller
- --ingress-class=public
@ -76,6 +76,6 @@ spec:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
runAsUser: 101 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 300

View File

@ -20,7 +20,7 @@ spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: quay.io/prometheus/prometheus:v2.14.0
image: quay.io/prometheus/prometheus:v2.15.2
args:
- --web.listen-address=0.0.0.0:9090
- --config.file=/etc/prometheus/prometheus.yaml

View File

@ -1,3 +1,4 @@
# Allow Prometheus to scrape service endpoints
apiVersion: v1
kind: Service
metadata:
@ -7,7 +8,6 @@ metadata:
prometheus.io/scrape: 'true'
spec:
type: ClusterIP
# service is created to allow prometheus to scrape endpoints
clusterIP: None
selector:
k8s-app: kube-controller-manager

View File

@ -0,0 +1,19 @@
# Allow Prometheus to scrape service endpoints
apiVersion: v1
kind: Service
metadata:
name: kube-proxy
namespace: kube-system
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10249'
spec:
type: ClusterIP
clusterIP: None
selector:
k8s-app: kube-proxy
ports:
- name: metrics
protocol: TCP
port: 10249
targetPort: 10249

View File

@ -1,3 +1,4 @@
# Allow Prometheus to scrape service endpoints
apiVersion: v1
kind: Service
metadata:
@ -7,7 +8,6 @@ metadata:
prometheus.io/scrape: 'true'
spec:
type: ClusterIP
# service is created to allow prometheus to scrape endpoints
clusterIP: None
selector:
k8s-app: kube-scheduler

View File

@ -74,6 +74,7 @@ rules:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
verbs:
- list
- watch
@ -84,4 +85,19 @@ rules:
verbs:
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- list
- watch

View File

@ -24,7 +24,7 @@ spec:
serviceAccountName: kube-state-metrics
containers:
- name: kube-state-metrics
image: quay.io/coreos/kube-state-metrics:v1.8.0
image: quay.io/coreos/kube-state-metrics:v1.9.4
ports:
- name: metrics
containerPort: 8080

View File

@ -57,7 +57,9 @@ spec:
mountPath: /host/root
readOnly: true
tolerations:
- effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
- key: node.kubernetes.io/not-ready
operator: Exists
volumes:
- name: proc

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -4,8 +4,8 @@ locals {
# flatcar-stable -> Flatcar Linux AMI
ami_id = local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id
flavor = element(split("-", var.os_image), 0)
channel = element(split("-", var.os_image), 1)
flavor = split("-", var.os_image)[0]
channel = split("-", var.os_image)[1]
}
data "aws_ami" "coreos" {

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -50,29 +50,47 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -82,6 +100,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -115,7 +134,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
--exec=/apply
@ -130,14 +149,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544

View File

@ -10,7 +10,7 @@ resource "aws_route53_record" "etcds" {
ttl = 300
# private IPv4 address for etcd
records = [element(aws_instance.controllers.*.private_ip, count.index)]
records = [aws_instance.controllers.*.private_ip[count.index]]
}
# Controller instances
@ -24,7 +24,7 @@ resource "aws_instance" "controllers" {
instance_type = var.controller_type
ami = local.ami_id
user_data = element(data.ct_config.controller-ignitions.*.rendered, count.index)
user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
# storage
root_block_device {
@ -36,7 +36,7 @@ resource "aws_instance" "controllers" {
# network
associate_public_ip_address = true
subnet_id = element(aws_subnet.public.*.id, count.index)
subnet_id = aws_subnet.public.*.id[count.index]
vpc_security_group_ids = [aws_security_group.controller.id]
lifecycle {
@ -49,11 +49,8 @@ resource "aws_instance" "controllers" {
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = element(
data.template_file.controller-configs.*.rendered,
count.index,
)
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_clc_snippets
}
@ -62,7 +59,7 @@ data "ct_config" "controller-ignitions" {
data "template_file" "controller-configs" {
count = var.controller_count
template = file("${path.module}/cl/controller.yaml.tmpl")
template = file("${path.module}/cl/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records

View File

@ -62,6 +62,6 @@ resource "aws_route_table_association" "public" {
count = length(data.aws_availability_zones.all.names)
route_table_id = aws_route_table.default.id
subnet_id = element(aws_subnet.public.*.id, count.index)
subnet_id = aws_subnet.public.*.id[count.index]
}

View File

@ -88,7 +88,7 @@ resource "aws_lb_target_group_attachment" "controllers" {
count = var.controller_count
target_group_arn = aws_lb_target_group.controllers.arn
target_id = element(aws_instance.controllers.*.id, count.index)
target_id = aws_instance.controllers.*.id[count.index]
port = 6443
}

View File

@ -33,6 +33,28 @@ resource "aws_security_group_rule" "controller-etcd" {
self = true
}
# Allow Prometheus to scrape etcd metrics
resource "aws_security_group_rule" "controller-etcd-metrics" {
security_group_id = aws_security_group.controller.id
type = "ingress"
protocol = "tcp"
from_port = 2381
to_port = 2381
source_security_group_id = aws_security_group.worker.id
}
# Allow Prometheus to scrape kube-proxy
resource "aws_security_group_rule" "kube-proxy-metrics" {
security_group_id = aws_security_group.controller.id
type = "ingress"
protocol = "tcp"
from_port = 10249
to_port = 10249
source_security_group_id = aws_security_group.worker.id
}
# Allow Prometheus to scrape kube-scheduler
resource "aws_security_group_rule" "controller-scheduler-metrics" {
security_group_id = aws_security_group.controller.id
@ -55,17 +77,6 @@ resource "aws_security_group_rule" "controller-manager-metrics" {
source_security_group_id = aws_security_group.worker.id
}
# Allow Prometheus to scrape etcd metrics
resource "aws_security_group_rule" "controller-etcd-metrics" {
security_group_id = aws_security_group.controller.id
type = "ingress"
protocol = "tcp"
from_port = 2381
to_port = 2381
source_security_group_id = aws_security_group.worker.id
}
resource "aws_security_group_rule" "controller-vxlan" {
count = var.networking == "flannel" ? 1 : 0
@ -281,14 +292,15 @@ resource "aws_security_group_rule" "worker-node-exporter" {
self = true
}
resource "aws_security_group_rule" "ingress-health" {
# Allow Prometheus to scrape kube-proxy
resource "aws_security_group_rule" "worker-kube-proxy" {
security_group_id = aws_security_group.worker.id
type = "ingress"
protocol = "tcp"
from_port = 10254
to_port = 10254
cidr_blocks = ["0.0.0.0/0"]
type = "ingress"
protocol = "tcp"
from_port = 10249
to_port = 10249
self = true
}
# Allow apiserver to access kubelets for exec, log, port-forward
@ -313,6 +325,16 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
self = true
}
resource "aws_security_group_rule" "ingress-health" {
security_group_id = aws_security_group.worker.id
type = "ingress"
protocol = "tcp"
from_port = 10254
to_port = 10254
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "worker-bgp" {
security_group_id = aws_security_group.worker.id

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}

View File

@ -4,8 +4,8 @@ locals {
# flatcar-stable -> Flatcar Linux AMI
ami_id = local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id
flavor = element(split("-", var.os_image), 0)
channel = element(split("-", var.os_image), 1)
flavor = split("-", var.os_image)[0]
channel = split("-", var.os_image)[1]
}
data "aws_ami" "coreos" {

View File

@ -25,29 +25,47 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -57,6 +75,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -92,14 +111,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -117,7 +128,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
-- \

View File

@ -78,7 +78,7 @@ data "ct_config" "worker-ignition" {
# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml.tmpl")
template = file("${path.module}/cl/worker.yaml")
vars = {
kubeconfig = indent(10, var.kubeconfig)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -15,9 +15,9 @@ data "aws_ami" "fedora-coreos" {
filter {
name = "name"
values = ["fedora-coreos-30.*.*-hvm"]
values = ["fedora-coreos-31.*.*.*-hvm"]
}
# try to filter out dev images (AWS filters can't)
name_regex = "^fedora-coreos-30.[0-9]*.[0-9]*-hvm*"
name_regex = "^fedora-coreos-31.[0-9]*.[0-9]*.[0-9]*-hvm*"
}

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -77,10 +77,9 @@ systemd:
--volume /var/lib/docker:/var/lib/docker \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
--volume /var/log:/var/log \
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
k8s.gcr.io/hyperkube:v1.17.0 kubelet \
k8s.gcr.io/hyperkube:v1.17.3 kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -92,6 +91,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -123,7 +123,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
k8s.gcr.io/hyperkube:v1.17.0
k8s.gcr.io/hyperkube:v1.17.3
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -171,10 +171,6 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.all.rp_filter=1
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -186,6 +182,19 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/sysconfig/docker
mode: 0644
overwrite: true
contents:
inline: |
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled \
--log-driver=json-file \
--live-restore \
--default-ulimit nofile=1024:1024 \
--init-path /usr/libexec/docker/docker-init \
--userland-proxy-path /usr/libexec/docker/docker-proxy \
"
- path: /etc/etcd/etcd.env
mode: 0644
contents:

View File

@ -44,6 +44,17 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
source_security_group_id = aws_security_group.worker.id
}
# Allow Prometheus to scrape kube-proxy
resource "aws_security_group_rule" "kube-proxy-metrics" {
security_group_id = aws_security_group.controller.id
type = "ingress"
protocol = "tcp"
from_port = 10249
to_port = 10249
source_security_group_id = aws_security_group.worker.id
}
# Allow Prometheus to scrape kube-scheduler
resource "aws_security_group_rule" "controller-scheduler-metrics" {
security_group_id = aws_security_group.controller.id
@ -281,14 +292,15 @@ resource "aws_security_group_rule" "worker-node-exporter" {
self = true
}
resource "aws_security_group_rule" "ingress-health" {
# Allow Prometheus to scrape kube-proxy
resource "aws_security_group_rule" "worker-kube-proxy" {
security_group_id = aws_security_group.worker.id
type = "ingress"
protocol = "tcp"
from_port = 10254
to_port = 10254
cidr_blocks = ["0.0.0.0/0"]
type = "ingress"
protocol = "tcp"
from_port = 10249
to_port = 10249
self = true
}
# Allow apiserver to access kubelets for exec, log, port-forward
@ -313,6 +325,16 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
self = true
}
resource "aws_security_group_rule" "ingress-health" {
security_group_id = aws_security_group.worker.id
type = "ingress"
protocol = "tcp"
from_port = 10254
to_port = 10254
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "worker-bgp" {
security_group_id = aws_security_group.worker.id

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}
@ -21,7 +21,7 @@ resource "null_resource" "copy-controller-secrets" {
user = "core"
timeout = "15m"
}
provisioner "file" {
content = join("\n", local.assets_bundle)
destination = "$HOME/assets"

View File

@ -15,9 +15,9 @@ data "aws_ami" "fedora-coreos" {
filter {
name = "name"
values = ["fedora-coreos-30.*.*-hvm"]
values = ["fedora-coreos-31.*.*.*-hvm"]
}
# try to filter out dev images (AWS filters can't)
name_regex = "^fedora-coreos-30.[0-9]*.[0-9]*-hvm*"
name_regex = "^fedora-coreos-31.[0-9]*.[0-9]*.[0-9]*-hvm*"
}

View File

@ -47,10 +47,9 @@ systemd:
--volume /var/lib/docker:/var/lib/docker \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
--volume /var/log:/var/log \
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
k8s.gcr.io/hyperkube:v1.17.0 kubelet \
k8s.gcr.io/hyperkube:v1.17.3 kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -62,6 +61,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -87,10 +87,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.all.rp_filter=1
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -102,6 +98,19 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/sysconfig/docker
mode: 0644
overwrite: true
contents:
inline: |
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled \
--log-driver=json-file \
--live-restore \
--default-ulimit nofile=1024:1024 \
--init-path /usr/libexec/docker/docker-init \
--userland-proxy-path /usr/libexec/docker/docker-proxy \
"
passwd:
users:
- name: core

View File

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -50,29 +50,46 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--hosts-entry=host \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -81,14 +98,15 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
Restart=always
@ -114,7 +132,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
--exec=/apply
@ -129,14 +147,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544

View File

@ -11,16 +11,13 @@ resource "azurerm_dns_a_record" "etcds" {
ttl = 300
# private IPv4 address for etcd
records = [element(
azurerm_network_interface.controllers.*.private_ip_address,
count.index,
)]
records = [azurerm_network_interface.controllers.*.private_ip_address[count.index]]
}
locals {
# Channel for a Container Linux derivative
# coreos-stable -> Container Linux Stable
channel = element(split("-", var.os_image), 1)
channel = split("-", var.os_image)[1]
}
# Controller availability set to spread controllers
@ -63,12 +60,12 @@ resource "azurerm_virtual_machine" "controllers" {
}
# network
network_interface_ids = [element(azurerm_network_interface.controllers.*.id, count.index)]
network_interface_ids = [azurerm_network_interface.controllers.*.id[count.index]]
os_profile {
computer_name = "${var.cluster_name}-controller-${count.index}"
admin_username = "core"
custom_data = element(data.ct_config.controller-ignitions.*.rendered, count.index)
custom_data = data.ct_config.controller-ignitions.*.rendered[count.index]
}
# Azure mandates setting an ssh_key, even though Ignition custom_data handles it too
@ -108,7 +105,7 @@ resource "azurerm_network_interface" "controllers" {
private_ip_address_allocation = "dynamic"
# public IPv4
public_ip_address_id = element(azurerm_public_ip.controllers.*.id, count.index)
public_ip_address_id = azurerm_public_ip.controllers.*.id[count.index]
}
}
@ -134,11 +131,8 @@ resource "azurerm_public_ip" "controllers" {
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = element(
data.template_file.controller-configs.*.rendered,
count.index,
)
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_clc_snippets
}
@ -147,7 +141,7 @@ data "ct_config" "controller-ignitions" {
data "template_file" "controller-configs" {
count = var.controller_count
template = file("${path.module}/cl/controller.yaml.tmpl")
template = file("${path.module}/cl/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records

View File

@ -53,13 +53,29 @@ resource "azurerm_network_security_rule" "controller-etcd-metrics" {
destination_address_prefix = azurerm_subnet.controller.address_prefix
}
# Allow Prometheus to scrape kube-proxy metrics
resource "azurerm_network_security_rule" "controller-kube-proxy" {
resource_group_name = azurerm_resource_group.cluster.name
name = "allow-kube-proxy-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2011"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.controller.address_prefix
}
# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
resource "azurerm_network_security_rule" "controller-kube-metrics" {
resource_group_name = azurerm_resource_group.cluster.name
name = "allow-kube-metrics"
network_security_group_name = azurerm_network_security_group.controller.name
priority = "2011"
priority = "2012"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
@ -251,6 +267,22 @@ resource "azurerm_network_security_rule" "worker-node-exporter" {
destination_address_prefix = azurerm_subnet.worker.address_prefix
}
# Allow Prometheus to scrape kube-proxy
resource "azurerm_network_security_rule" "worker-kube-proxy" {
resource_group_name = azurerm_resource_group.cluster.name
name = "allow-kube-proxy"
network_security_group_name = azurerm_network_security_group.worker.name
priority = "2024"
access = "Allow"
direction = "Inbound"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "10249"
source_address_prefix = azurerm_subnet.worker.address_prefix
destination_address_prefix = azurerm_subnet.worker.address_prefix
}
# Allow apiserver to access kubelet's for exec, log, port-forward
resource "azurerm_network_security_rule" "worker-kubelet" {
resource_group_name = azurerm_resource_group.cluster.name

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}
@ -22,7 +22,7 @@ resource "null_resource" "copy-controller-secrets" {
user = "core"
timeout = "15m"
}
provisioner "file" {
content = join("\n", local.assets_bundle)
destination = "$HOME/assets"
@ -45,7 +45,7 @@ resource "null_resource" "bootstrap" {
connection {
type = "ssh"
host = element(azurerm_public_ip.controllers.*.ip_address, 0)
host = azurerm_public_ip.controllers.*.ip_address[0]
user = "core"
timeout = "15m"
}

View File

@ -25,28 +25,46 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -55,6 +73,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -90,14 +109,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -115,7 +126,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
-- \

View File

@ -1,7 +1,7 @@
locals {
# Channel for a Container Linux derivative
# coreos-stable -> Container Linux Stable
channel = element(split("-", var.os_image), 1)
channel = split("-", var.os_image)[1]
}
# Workers scale set
@ -104,7 +104,7 @@ data "ct_config" "worker-ignition" {
# Worker Container Linux configs
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml.tmpl")
template = file("${path.module}/cl/worker.yaml")
vars = {
kubeconfig = indent(10, var.kubeconfig)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

View File

@ -58,33 +58,51 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume iscsiconf,kind=host,source=/etc/iscsi/ \
--mount volume=iscsiconf,target=/etc/iscsi/ \
--volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume etc-iscsi,kind=host,source=/etc/iscsi \
--mount volume=etc-iscsi,target=/etc/iscsi \
--volume usr-sbin-iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=usr-sbin-iscsiadm,target=/sbin/iscsiadm \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -94,6 +112,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -128,7 +147,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
--exec=/apply
@ -136,15 +155,10 @@ systemd:
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubelet.env
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
files:
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -33,33 +33,51 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume iscsiconf,kind=host,source=/etc/iscsi/ \
--mount volume=iscsiconf,target=/etc/iscsi/ \
--volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume etc-iscsi,kind=host,source=/etc/iscsi \
--mount volume=etc-iscsi,target=/etc/iscsi \
--volume usr-sbin-iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=usr-sbin-iscsiadm,target=/sbin/iscsiadm \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -69,6 +87,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -84,15 +103,10 @@ systemd:
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubelet.env
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
files:
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -31,7 +31,7 @@ resource "matchbox_profile" "container-linux-install" {
data "template_file" "container-linux-install-configs" {
count = length(var.controllers) + length(var.workers)
template = file("${path.module}/cl/install.yaml.tmpl")
template = file("${path.module}/cl/install.yaml")
vars = {
os_flavor = local.flavor
@ -72,7 +72,7 @@ resource "matchbox_profile" "cached-container-linux-install" {
data "template_file" "cached-container-linux-install-configs" {
count = length(var.controllers) + length(var.workers)
template = file("${path.module}/cl/install.yaml.tmpl")
template = file("${path.module}/cl/install.yaml")
vars = {
os_flavor = local.flavor
@ -150,7 +150,7 @@ data "ct_config" "controller-ignitions" {
data "template_file" "controller-configs" {
count = length(var.controllers)
template = file("${path.module}/cl/controller.yaml.tmpl")
template = file("${path.module}/cl/controller.yaml")
vars = {
domain_name = var.controllers.*.domain[count.index]
@ -180,7 +180,7 @@ data "ct_config" "worker-ignitions" {
data "template_file" "worker-configs" {
count = length(var.workers)
template = file("${path.module}/cl/worker.yaml.tmpl")
template = file("${path.module}/cl/worker.yaml")
vars = {
domain_name = var.workers.*.domain[count.index]

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

View File

@ -76,12 +76,11 @@ systemd:
--volume /var/lib/docker:/var/lib/docker \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
--volume /var/log:/var/log \
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.17.0 kubelet \
k8s.gcr.io/hyperkube:v1.17.3 kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -93,6 +92,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -134,7 +134,7 @@ systemd:
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
k8s.gcr.io/hyperkube:v1.17.0
k8s.gcr.io/hyperkube:v1.17.3
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
@ -182,10 +182,6 @@ storage:
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.all.rp_filter=1
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -197,6 +193,19 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/sysconfig/docker
mode: 0644
overwrite: true
contents:
inline: |
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled \
--log-driver=json-file \
--live-restore \
--default-ulimit nofile=1024:1024 \
--init-path /usr/libexec/docker/docker-init \
--userland-proxy-path /usr/libexec/docker/docker-proxy \
"
- path: /etc/etcd/etcd.env
mode: 0644
contents:

View File

@ -46,12 +46,11 @@ systemd:
--volume /var/lib/docker:/var/lib/docker \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
--volume /var/log:/var/log \
--volume /var/run:/var/run \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.17.0 kubelet \
k8s.gcr.io/hyperkube:v1.17.3 kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -63,6 +62,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=${domain_name} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -95,10 +95,6 @@ storage:
contents:
inline:
${domain_name}
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |
net.ipv4.conf.all.rp_filter=1
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
@ -110,6 +106,19 @@ storage:
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/sysconfig/docker
mode: 0644
overwrite: true
contents:
inline: |
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled \
--log-driver=json-file \
--live-restore \
--default-ulimit nofile=1024:1024 \
--init-path /usr/libexec/docker/docker-init \
--userland-proxy-path /usr/libexec/docker/docker-proxy \
"
passwd:
users:
- name: core

View File

@ -1,24 +1,28 @@
locals {
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-installer-kernel-x86_64"
remote_initrd = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-installer-initramfs.x86_64.img"
remote_kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
remote_initrd = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img"
remote_args = [
"ip=dhcp",
"rd.neednet=1",
"coreos.inst=yes",
"initrd=fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"coreos.inst.image_url=https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.install_dev=${var.install_disk}"
"coreos.inst.install_dev=${var.install_disk}",
"console=tty0",
"console=ttyS0",
]
cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-installer-kernel-x86_64"
cached_initrd = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-installer-initramfs.x86_64.img"
cached_kernel = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-kernel-x86_64"
cached_initrd = "/assets/fedora-coreos/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img"
cached_args = [
"ip=dhcp",
"rd.neednet=1",
"coreos.inst=yes",
"initrd=fedora-coreos-${var.os_version}-live-initramfs.x86_64.img",
"coreos.inst.image_url=${var.matchbox_http_endpoint}/assets/fedora-coreos/fedora-coreos-${var.os_version}-metal.x86_64.raw.xz",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.inst.install_dev=${var.install_disk}"
"coreos.inst.install_dev=${var.install_disk}",
"console=tty0",
"console=ttyS0",
]
kernel = var.cached_install ? local.cached_kernel : local.remote_kernel

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -60,29 +60,47 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -91,6 +109,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -125,7 +144,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
--exec=/apply
@ -133,15 +152,10 @@ systemd:
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubelet.env
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
files:
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544

View File

@ -35,29 +35,47 @@ systemd:
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -66,6 +84,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
@ -92,15 +111,10 @@ systemd:
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubelet.env
directories:
- path: /etc/kubernetes
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
files:
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -118,7 +132,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
-- \

View File

@ -11,7 +11,7 @@ resource "digitalocean_record" "controllers" {
ttl = 300
# IPv4 addresses of controllers
value = element(digitalocean_droplet.controllers.*.ipv4_address, count.index)
value = digitalocean_droplet.controllers.*.ipv4_address[count.index]
}
# Discrete DNS records for each controller's private IPv4 for etcd usage
@ -27,7 +27,7 @@ resource "digitalocean_record" "etcds" {
ttl = 300
# private IPv4 address for etcd
value = element(digitalocean_droplet.controllers.*.ipv4_address_private, count.index)
value = digitalocean_droplet.controllers.*.ipv4_address_private[count.index]
}
# Controller droplet instances
@ -44,7 +44,7 @@ resource "digitalocean_droplet" "controllers" {
ipv6 = true
private_networking = true
user_data = element(data.ct_config.controller-ignitions.*.rendered, count.index)
user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
ssh_keys = var.ssh_fingerprints
tags = [
@ -64,7 +64,7 @@ resource "digitalocean_tag" "controllers" {
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = element(data.template_file.controller-configs.*.rendered, count.index)
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_clc_snippets
}
@ -73,7 +73,7 @@ data "ct_config" "controller-ignitions" {
data "template_file" "controller-configs" {
count = var.controller_count
template = file("${path.module}/cl/controller.yaml.tmpl")
template = file("${path.module}/cl/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records

View File

@ -16,12 +16,20 @@ resource "digitalocean_firewall" "rules" {
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
}
# Allow Prometheus to scrape node-exporter
inbound_rule {
protocol = "tcp"
port_range = "9100"
source_tags = [digitalocean_tag.workers.name]
}
# Allow Prometheus to scrape kube-proxy
inbound_rule {
protocol = "tcp"
port_range = "10249"
source_tags = [digitalocean_tag.workers.name]
}
inbound_rule {
protocol = "tcp"
port_range = "10250"

View File

@ -27,6 +27,12 @@ output "workers_ipv6" {
value = digitalocean_droplet.workers.*.ipv6_address
}
# Outputs for worker pools
output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
}
# Outputs for custom firewalls
output "controller_tag" {

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}

View File

@ -8,7 +8,7 @@ resource "digitalocean_record" "workers-record-a" {
name = "${var.cluster_name}-workers"
type = "A"
ttl = 300
value = element(digitalocean_droplet.workers.*.ipv4_address, count.index)
value = digitalocean_droplet.workers.*.ipv4_address[count.index]
}
resource "digitalocean_record" "workers-record-aaaa" {
@ -20,7 +20,7 @@ resource "digitalocean_record" "workers-record-aaaa" {
name = "${var.cluster_name}-workers"
type = "AAAA"
ttl = 300
value = element(digitalocean_droplet.workers.*.ipv6_address, count.index)
value = digitalocean_droplet.workers.*.ipv6_address[count.index]
}
# Worker droplet instances
@ -63,7 +63,7 @@ data "ct_config" "worker-ignition" {
# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml.tmpl")
template = file("${path.module}/cl/worker.yaml")
vars = {
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)

View File

@ -79,7 +79,7 @@ Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define
```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.17.3"
# Azure
region = module.ramius.region
@ -145,7 +145,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.17.3"
# Google Cloud
region = "europe-west2"
@ -176,11 +176,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.17.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.17.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.17.0
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.17.0
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.17.0
yavin-controller-0.c.example-com.internal Ready 6m v1.17.3
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.17.3
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.17.3
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.17.3
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.17.3
```
### Variables

View File

@ -1,5 +1,15 @@
# Announce <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo-small.png">
## Jan 23, 2020
Typhoon for Fedora CoreOS promoted to alpha!
Last summer, Typhoon released the first preview of Kubernetes on Fedora CoreOS for bare-metal and AWS, drawing on many ideas and patterns from Typhoon for Container Linux and Fedora Atomic. Since then, Typhoon for Fedora CoreOS has gained features alongside Typhoon, while Fedora CoreOS itself has continued to evolve and improve.
Fedora recently [announced](https://fedoramagazine.org/fedora-coreos-out-of-preview/) that Fedora CoreOS is available for general use. To align with that change and to better reflect its maturing status, Typhoon for Fedora CoreOS has been promoted to alpha. Many thanks to the folks who have worked to make this possible!
About: For newcomers, Typhoon is a minimal and free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via Terraform, and support for AWS, Azure, Google Cloud, DigitalOcean, and bare-metal. It is run by former CoreOS engineer [@dghubble](https://twitter.com/dghubble) to power his clusters, with freedom [motivations](https://typhoon.psdn.io/#motivation).
## Jul 18, 2019
Introducing a preview of Typhoon Kubernetes clusters with Fedora CoreOS!
@ -8,8 +18,6 @@ Fedora recently [announced](https://lists.fedoraproject.org/archives/list/coreos
While Typhoon uses Container Linux (or Flatcar Linux) for stable modules, the project hasn't been a stranger to Fedora ideas, once developing a [Fedora Atomic](https://typhoon.psdn.io/announce/#april-26-2018) variant in 2018. That makes the Fedora CoreOS fusion both exciting and familiar. Typhoon with Fedora CoreOS uses Ignition v3 for provisioning, uses rpm-ostree for layering and updates, tries swapping system containers for podman, and brings SELinux enforcement ([table](https://typhoon.psdn.io/architecture/operating-systems/)). This is an early preview (don't go to prod), but do try it out and help identify and solve issues (getting started links above).
About: For newcomers, Typhoon is a minimal and free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via Terraform, and support for AWS, Azure, Google Cloud, DigitalOcean, and bare-metal. It is run by former CoreOS engineer [@dghubble](https://twitter.com/dghubble) to power his clusters with freedom [motivations](https://typhoon.psdn.io/#motivation).
## March 27, 2019
Last April, Typhoon [introduced](#april-26-2018) alpha support for creating Kubernetes clusters with Fedora Atomic on AWS, Google Cloud, DigitalOcean, and bare-metal. Fedora Atomic shared many of Container Linux's aims for a container-optimized operating system, introduced novel ideas, and provided technical diversification for an uncertain future. However, Project Atomic efforts were merged into Fedora CoreOS and future Fedora Atomic releases are [not expected](http://www.projectatomic.io/blog/2018/06/welcome-to-fedora-coreos/). *Typhoon modules for Fedora Atomic will not be updated much beyond Kubernetes v1.13*. They may later be removed.
@ -46,7 +54,7 @@ Typhoon for Fedora Atomic reflects many of the same principles that created Typh
Meanwhile, Fedora Atomic adds some promising new low-level technologies:
* [ostree](https://github.com/ostreedev/ostree) & [rpm-ostree](https://github.com/projectatomic/rpm-ostree) - a hybrid, layered, image and package system that lets you perform atomic updates and rollbacks, layer on packages, "rebase" your system, or manage a remote tree repo. See Dusty Mabe's great [intro](https://dustymabe.com/2017/09/01/atomic-host-101-lab-part-3-rebase-upgrade-rollback/).
* [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/) - OCI container images that embed systemd and runc metadata for starting low-level host services before container runtimes are ready. Typhoon uses system containers under runc for `etcd`, `kubelet`, and `bootkube` on Fedora Atomic (instead of rkt-fly).

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.17.0 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on AWS with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
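In practice, provisioning follows the standard Terraform workflow (a sketch; each step is covered in detail below):

```sh
# initialize providers, review the plan, then create the resources
terraform init
terraform plan
terraform apply
```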
@ -18,7 +18,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
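For example, on Linux the download and placement might look like the following (the version and asset name are illustrative; pick the release matching your platform):

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.5.0/terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.5.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.5.0
```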
@ -49,7 +49,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf
```tf
provider "aws" {
version = "2.41.0"
version = "2.48.0"
region = "eu-central-1"
shared_credentials_file = "/home/user/.config/aws/credentials"
}
@ -70,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.17.3"
# AWS
cluster_name = "tempest"
@ -143,27 +143,27 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.17.0
ip-10-0-26-65 Ready <none> 10m v1.17.0
ip-10-0-41-21 Ready <none> 10m v1.17.0
ip-10-0-3-155 Ready <none> 10m v1.17.3
ip-10-0-26-65 Ready <none> 10m v1.17.3
ip-10-0-41-21 Ready <none> 10m v1.17.3
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-controller-manager-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
kube-system kube-scheduler-ip-10-0-3-155 1/1 Running 1 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-controller-manager-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
kube-system kube-scheduler-ip-10-0-3-155 1/1 Running 1 34m
```
## Going Further

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.
In this tutorial, we'll create a Kubernetes v1.17.0 cluster on Azure with Container Linux.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on Azure with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -21,7 +21,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -50,7 +50,7 @@ Configure the Azure provider in a `providers.tf` file.
```tf
provider "azurerm" {
version = "1.38.0"
version = "1.43.0"
}
provider "ct" {
@ -66,7 +66,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.
```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.17.3"
# Azure
cluster_name = "ramius"
@ -140,9 +140,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.17.0
ramius-worker-000001 Ready <none> 25m v1.17.0
ramius-worker-000002 Ready <none> 24m v1.17.0
ramius-controller-0 Ready <none> 24m v1.17.3
ramius-worker-000001 Ready <none> 25m v1.17.3
ramius-worker-000002 Ready <none> 24m v1.17.3
```
List the pods.
@ -152,9 +152,9 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m
kube-system calico-node-1m5bf 2/2 Running 0 26m
kube-system calico-node-7jmr1 2/2 Running 0 26m
kube-system calico-node-bknc8 2/2 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.17.0 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.17.3 cluster on bare-metal with CoreOS Container Linux or Flatcar Linux.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -27,7 +27,7 @@ Configure each machine to boot from the disk through IPMI or the BIOS menu.
```
ipmitool -H node1 -U USER -P PASS chassis bootdev disk options=persistent
```
During provisioning, you'll explicitly set the boot device to `pxe` for the next boot only. Machines will install (overwrite) the operating system to disk on PXE boot and reboot into the disk install.
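For example, a one-time PXE boot can be requested through IPMI (host and credentials are placeholders):

```
ipmitool -H node1 -U USER -P PASS chassis bootdev pxe
```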
!!! tip ""
@ -111,7 +111,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -160,8 +160,8 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.17.3"
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
@ -264,9 +264,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
```
# before v1.17.0
# before v1.10.1
$ ssh debug@node1.example.com
# after v1.17.0
# after v1.10.1
$ ssh -p 2222 core@node1.example.com
```
@ -299,9 +299,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.17.0
node2.example.com Ready <none> 10m v1.17.0
node3.example.com Ready <none> 10m v1.17.0
node1.example.com Ready <none> 10m v1.17.3
node2.example.com Ready <none> 10m v1.17.3
node3.example.com Ready <none> 10m v1.17.3
```
List the pods.
@ -355,7 +355,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux or Flatcar images into the cache | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| clc_snippets | Map from machine names to lists of Container Linux Config snippets | {} | [example](/advanced/customization/#usage) |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | "first-found" | "can-reach=10.0.0.1" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |

View File

@ -1,6 +1,6 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.17.0 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on DigitalOcean with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -18,7 +18,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -50,7 +50,7 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.
```tf
provider "digitalocean" {
version = "1.11.0"
version = "1.14.0"
token = "${chomp(file("~/.config/digital-ocean/token"))}"
}
@ -65,7 +65,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.17.3"
# Digital Ocean
cluster_name = "nemo"
@ -74,7 +74,7 @@ module "nemo" {
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
worker_count = 2
}
@ -138,9 +138,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.17.0
10.132.115.81 Ready <none> 10m v1.17.0
10.132.124.107 Ready <none> 10m v1.17.0
10.132.110.130 Ready <none> 10m v1.17.3
10.132.115.81 Ready <none> 10m v1.17.3
10.132.124.107 Ready <none> 10m v1.17.3
```
List the pods.
@ -149,9 +149,9 @@ List the pods.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.17.0 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on Google Compute Engine with CoreOS Container Linux or Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -18,7 +18,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -49,7 +49,7 @@ Configure the Google Cloud provider to use your service account key, project-id,
```tf
provider "google" {
version = "2.20.0"
version = "3.7.0"
project = "project-id"
region = "us-central1"
credentials = file("~/.config/google-cloud/terraform.json")
@ -71,7 +71,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.3"
# Google Cloud
cluster_name = "yavin"
@ -81,7 +81,7 @@ module "yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
}
@ -89,6 +89,31 @@ module "yavin" {
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
### Flatcar Linux Images
!!! success
Skip this section when using CoreOS Container Linux (default). CoreOS Container Linux publishes official images to Google Cloud.
!!! danger
Typhoon for Flatcar Linux on Google Cloud is alpha.
Flatcar Linux publishes images for Google Cloud, but does not yet upload them to Google Cloud as public Compute Engine images. Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.
[Download](https://www.flatcar-linux.org/releases/) the Flatcar Linux GCE gzipped tarball and upload it to a Google Cloud storage bucket.
```
gsutil list
gsutil cp flatcar_production_gce.tar.gz gs://BUCKET
```
Create a Compute Engine image from the file.
```
gcloud compute images create flatcar-linux-2303-4-0 --source-uri gs://BUCKET_NAME/flatcar_production_gce.tar.gz
```
Set the [os_image](#variables) to the image name (e.g. `flatcar-linux-2303-4-0`).
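As a minimal sketch (other module fields are elided; the module source and image name follow the examples above):
```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.3"
  # ...
  # custom Flatcar Linux image uploaded above
  os_image = "flatcar-linux-2303-4-0"
}
```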
## ssh-agent
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
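For example (the same commands shown in the other tutorials in these docs):
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```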
@ -145,9 +170,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.3
```
List the pods.
@ -217,7 +242,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_image | Container Linux image for compute instances | "coreos-stable" | "coreos-stable-1632-3-0-v20180215" |
| os_image | Container Linux image for compute instances | "coreos-stable" | "flatcar-linux-2303-4-0" |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |

View File

@ -1,9 +1,6 @@
# AWS
!!! danger
Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.
In this tutorial, we'll create a Kubernetes v1.17.0 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on AWS with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -21,7 +18,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -52,7 +49,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf
```tf
provider "aws" {
version = "2.41.0"
version = "2.48.0"
region = "eu-central-1"
shared_credentials_file = "/home/user/.config/aws/credentials"
}
@ -73,7 +70,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.
```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.17.3"
# AWS
cluster_name = "tempest"
@ -146,9 +143,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.17.0
ip-10-0-26-65 Ready <none> 10m v1.17.0
ip-10-0-41-21 Ready <none> 10m v1.17.0
ip-10-0-3-155 Ready <none> 10m v1.17.3
ip-10-0-26-65 Ready <none> 10m v1.17.3
ip-10-0-41-21 Ready <none> 10m v1.17.3
```
List the pods.

View File

@ -1,9 +1,6 @@
# Bare-Metal
!!! danger
Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.
In this tutorial, we'll network boot and provision a Kubernetes v1.17.0 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.17.3 cluster on bare-metal with Fedora CoreOS.
First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -30,7 +27,7 @@ Configure each machine to boot from the disk through IPMI or the BIOS menu.
```
ipmitool -H node1 -U USER -P PASS chassis bootdev disk options=persistent
```
During provisioning, you'll explicitly set the boot device to `pxe` for the next boot only. Machines will install (overwrite) the operating system to disk on PXE boot and reboot into the disk install.
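For example, using IPMI as above (a sketch; host and credentials are placeholders, and `bootdev pxe` is temporary for the next boot, unlike the persistent disk setting):
```
ipmitool -H node1 -U USER -P PASS chassis bootdev pxe
```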
!!! tip ""
@ -106,7 +103,7 @@ Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup
TFTP chainloading to modern boot firmware, like iPXE, avoids issues with old NICs and allows faster transfer protocols like HTTP to be used.
!!! warning
Compile iPXE from [source](https://github.com/ipxe/ipxe) with support for [HTTPS downloads](https://ipxe.org/crypto). iPXE's pre-built firmware binaries do not enable this. Fedora does not provide images over HTTP.
Compile iPXE from [source](https://github.com/ipxe/ipxe) with support for [HTTPS downloads](https://ipxe.org/crypto). iPXE's pre-built firmware binaries do not enable this. Fedora CoreOS downloads are HTTPS-only.
## Terraform Setup
@ -114,7 +111,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your sy
```sh
$ terraform version
Terraform v0.12.16
Terraform v0.12.20
```
Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -163,14 +160,13 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete
```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.17.3"
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
os_stream = "testing"
os_version = "30.20191002.0"
cached_install = true
os_stream = "stable"
os_version = "31.20200113.3.1"
# configuration
k8s_domain_name = "node1.example.com"
@ -293,9 +289,9 @@ List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready <none> 10m v1.17.0
node2.example.com Ready <none> 10m v1.17.0
node3.example.com Ready <none> 10m v1.17.0
node1.example.com Ready <none> 10m v1.17.3
node2.example.com Ready <none> 10m v1.17.3
node3.example.com Ready <none> 10m v1.17.3
```
List the pods.
@ -330,8 +326,8 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | "mercury" |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
| os_stream | Fedora CoreOS release stream | "testing" |
| os_version | Fedora CoreOS version to PXE and install | "30.20190716.1" |
| os_stream | Fedora CoreOS release stream | "stable" |
| os_version | Fedora CoreOS version to PXE and install | "31.20200113.3.1" |
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| controllers | List of controller machine detail objects (unique name, identifying MAC address, FQDN) | `[{name="node1", mac="52:54:00:a1:9c:ae", domain="node1.example.com"}]` |
@ -345,7 +341,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Fedora CoreOS images into the cache | false | true |
| install_disk | Disk device where Fedora CoreOS should be installed | "sda" (not "/dev/sda" like Container Linux) | "sdb" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| snippets | Map from machine names to lists of Fedora CoreOS Config snippets | {} | UNSUPPORTED |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | "first-found" | "can-reach=10.0.0.1" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |

View File

@ -0,0 +1,255 @@
# Google Cloud
!!! danger
Typhoon for Fedora CoreOS is an alpha. Please report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues) and Typhoon issues to Typhoon.
In this tutorial, we'll create a Kubernetes v1.17.3 cluster on Google Compute Engine with Fedora CoreOS.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
* Google Cloud Account and Service Account
* Google Cloud DNS Zone (registered Domain Name or delegated subdomain)
* Terraform v0.12.6+ and [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) installed locally
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your system.
```sh
$ terraform version
Terraform v0.12.20
```
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
```
## Provider
Log in to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [sign up](https://cloud.google.com/free/) if you don't have an account.
Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" and "DNS Administrator" roles and save the JSON private key to a file that can be referenced in configs.
```sh
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
```
Configure the Google Cloud provider to use your service account key, project-id, and region in a `providers.tf` file.
```tf
provider "google" {
version = "3.7.0"
project = "project-id"
region = "us-central1"
credentials = file("~/.config/google-cloud/terraform.json")
}
provider "ct" {
version = "0.4.0"
}
```
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
!!! tip
Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may contain multiple clusters across different regions.
## Fedora CoreOS Images
Fedora CoreOS publishes images for Google Cloud, but does not yet upload them to Google Cloud as public Compute Engine images. Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.
[Download](https://getfedora.org/coreos/download/) a Fedora CoreOS GCP gzipped tarball and upload it to a Google Cloud storage bucket.
```
gsutil list
gsutil cp fedora-coreos-31.20200113.3.1-gcp.x86_64.tar.gz gs://BUCKET
```
Create a Compute Engine image from the file.
```
gcloud compute images create fedora-coreos-31-20200113-3-1 --source-uri gs://BUCKET/fedora-coreos-31.20200113.3.1-gcp.x86_64.tar.gz
```
## Cluster
Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kubernetes`.
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=development-sha"
# Google Cloud
cluster_name = "yavin"
region = "us-central1"
dns_zone = "example.com"
dns_zone_name = "example-zone"
# custom image name from above
os_image = "fedora-coreos-31-20200113-3-1"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
}
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-coreos/kubernetes/variables.tf) source.
## ssh-agent
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
## Apply
Initialize the config directory if this is the first use with Terraform.
```sh
terraform init
```
Plan the resources to be created.
```sh
$ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy.
```
Apply the changes to create the cluster.
```sh
$ terraform apply
module.yavin.null_resource.bootstrap: Still creating... (10s elapsed)
...
module.yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)
Apply complete! Resources: 62 added, 0 changed, 0 destroyed.
```
In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
```
resource "local_file" "kubeconfig-yavin" {
content = module.yavin.kubeconfig-admin
filename = "/home/user/.kube/configs/yavin-config"
}
```
List nodes in the cluster.
```
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.3
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m
kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-controller-0 1/1 Running 0 6m
kube-system kube-controller-manager-controller-0 1/1 Running 0 6m
kube-system kube-proxy-117v6 1/1 Running 0 6m
kube-system kube-proxy-9886n 1/1 Running 0 6m
kube-system kube-proxy-njn47 1/1 Running 0 6m
kube-system kube-scheduler-controller-0 1/1 Running 0 6m
```
## Going Further
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).
## Variables
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-coreos/kubernetes/variables.tf) source.
### Required
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "yavin" |
| region | Google Cloud region | "us-central1" |
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Fedora CoreOS [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep fedora-coreos`.
#### DNS Zone
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a TCP proxy load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
You'll need a registered domain name or delegated subdomain on Google Cloud DNS. You can set this up once and create many clusters with unique names.
```tf
resource "google_dns_managed_zone" "zone-for-clusters" {
dns_name = "google-cloud.example.com."
name = "example-zone"
description = "Production DNS zone"
}
```
!!! tip ""
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "" (disabled) | "/home/user/.secrets/clusters/yavin" |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_image | Fedora CoreOS image for compute instances | "" | "fedora-coreos-31-20200113-3-1" |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | UNSUPPORTED |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | UNSUPPORTED |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
#### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
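A minimal sketch of enabling this in the `yavin` module definition above (other fields elided):
```tf
module "yavin" {
  # ...
  # optional: run workers on preemptible instances
  worker_preemptible = true
}
```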

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](cl/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
@ -23,18 +23,27 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux / Flatcar Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| AWS | Container Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| Azure | Container Linux | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
| Bare-Metal | Container Linux / Flatcar Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | stable |
A preview of Typhoon for [Fedora CoreOS](https://getfedora.org/coreos/) is available for testing.
Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](fedora-coreos/aws.md) | preview |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](fedora-coreos/bare-metal.md) | preview |
| AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](fedora-coreos/aws.md) | beta |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](fedora-coreos/bare-metal.md) | beta |
| Google Cloud  | Fedora CoreOS    | [google-cloud/fedora-coreos/kubernetes](fedora-coreos/google-cloud.md) | alpha |
Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org/releases/).
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Flatcar Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| Bare-Metal | Flatcar Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | alpha |
## Documentation
@ -47,7 +56,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.17.3"
# Google Cloud
cluster_name = "yavin"
@ -57,7 +66,7 @@ module "yavin" {
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
# optional
worker_count = 2
}
@ -85,9 +94,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.0
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.0
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.0
yavin-controller-0.c.example-com.internal <none> Ready 6m v1.17.3
yavin-worker-jrbf.c.example-com.internal <none> Ready 5m v1.17.3
yavin-worker-mzdm.c.example-com.internal <none> Ready 5m v1.17.3
```
List the pods.

View File

@ -18,7 +18,7 @@ module "yavin" {
}
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.17.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.17.3"
...
}
```
@ -279,15 +279,15 @@ Typhoon modules have been adapted for Terraform v0.12. Provider plugins requirem
| Typhoon Release | Terraform version |
|-------------------|---------------------|
| v1.17.0 - ? | v0.12.x |
| v1.10.3 - v1.17.0 | v0.11.x |
| v1.17.3 - ? | v0.12.x |
| v1.10.3 - v1.17.3 | v0.11.x |
| v1.9.2 - v1.10.2 | v0.10.4+ or v0.11.x |
| v1.7.3 - v1.9.1 | v0.10.x |
| v1.6.4 - v1.7.2 | v0.9.x |
### New users
New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.17.0+ without issue.
New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.17.3+ without issue.
### Existing users
@ -395,7 +395,7 @@ Verify no changes are proposed and commit changes to version control. You've mig
Alternately, continue maintaining existing clusters using Terraform v0.11.x and existing Terraform configuration directory(ies). Create new Terraform directory(ies) and move resources there to be managed with Terraform v0.12. This approach allows resources to be migrated incrementally and ensures existing resources can always be managed (e.g. emergency patches).
Create a new Terraform [config directory](/architecture/concepts#organize) for *new* resources.
Create a new Terraform [config directory](/architecture/concepts/#organize) for *new* resources.
```shell
mkdir infra2
@ -404,7 +404,7 @@ tree .
└── infraB <- new Terraform v0.12.x configs
```
Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.17.0+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud)) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.
Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.15.0+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud)) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.
```shell
terraform12 init

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.0 (upstream)
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

View File

@ -1,6 +1,6 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=24e5513ee6da4888005aac02101f73fc9e5e829f"
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

View File

@ -50,28 +50,46 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -80,14 +98,15 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
Restart=always
@ -113,7 +132,7 @@ systemd:
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
--exec=/apply
@ -128,14 +147,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /opt/bootstrap/layout
filesystem: root
mode: 0544

View File

@ -11,7 +11,7 @@ resource "google_dns_record_set" "etcds" {
ttl = 300
# private IPv4 address for etcd
rrdatas = [element(google_compute_instance.controllers.*.network_interface.0.network_ip, count.index)]
rrdatas = [google_compute_instance.controllers.*.network_interface.0.network_ip[count.index]]
}
# Zones in the region
@ -29,12 +29,13 @@ locals {
resource "google_compute_instance" "controllers" {
count = var.controller_count
name = "${var.cluster_name}-controller-${count.index}"
name = "${var.cluster_name}-controller-${count.index}"
# use a zone in the region and wrap around (e.g. controllers > zones)
zone = element(local.zones, count.index)
machine_type = var.controller_type
metadata = {
user-data = element(data.ct_config.controller-ignitions.*.rendered, count.index)
user-data = data.ct_config.controller-ignitions.*.rendered[count.index]
}
boot_disk {
@ -64,11 +65,8 @@ resource "google_compute_instance" "controllers" {
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = element(
data.template_file.controller-configs.*.rendered,
count.index,
)
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
pretty_print = false
snippets = var.controller_clc_snippets
}
@ -77,7 +75,7 @@ data "ct_config" "controller-ignitions" {
data "template_file" "controller-configs" {
count = var.controller_count
template = file("${path.module}/cl/controller.yaml.tmpl")
template = file("${path.module}/cl/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records

View File

@ -126,6 +126,20 @@ resource "google_compute_firewall" "internal-node-exporter" {
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Allow Prometheus to scrape kube-proxy metrics
resource "google_compute_firewall" "internal-kube-proxy" {
name = "${var.cluster_name}-internal-kube-proxy"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [10249]
}
source_tags = ["${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Allow apiserver to access kubelets for exec, log, port-forward
resource "google_compute_firewall" "internal-kubelet" {
name = "${var.cluster_name}-internal-kubelet"

View File

@ -2,7 +2,7 @@ locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist:
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}
@ -21,7 +21,7 @@ resource "null_resource" "copy-controller-secrets" {
user = "core"
timeout = "15m"
}
provisioner "file" {
content = join("\n", local.assets_bundle)
destination = "$HOME/assets"

View File

@ -3,7 +3,7 @@
terraform {
required_version = "~> 0.12.6"
required_providers {
google = "~> 2.19"
google = ">= 2.19, < 4.0"
ct = "~> 0.3"
template = "~> 2.1"
null = "~> 2.1"

View File

@ -25,29 +25,46 @@ systemd:
Description=Kubelet via Hyperkube
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--hosts-entry=host \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
ExecStart=/usr/bin/rkt run \
--uuid-file-save=/var/cache/kubelet-pod.uuid \
--stage1-from-dir=stage1-fly.aci \
--hosts-entry host \
--insecure-options=image \
--volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=true \
--mount volume=etc-kubernetes,target=/etc/kubernetes \
--volume etc-machine-id,kind=host,source=/etc/machine-id,readOnly=true \
--mount volume=etc-machine-id,target=/etc/machine-id \
--volume etc-os-release,kind=host,source=/usr/lib/os-release,readOnly=true \
--mount volume=etc-os-release,target=/etc/os-release \
--volume=etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true \
--mount volume=etc-resolv,target=/etc/resolv.conf \
--volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \
--mount volume=etc-ssl-certs,target=/etc/ssl/certs \
--volume lib-modules,kind=host,source=/lib/modules,readOnly=true \
--mount volume=lib-modules,target=/lib/modules \
--volume run,kind=host,source=/run \
--mount volume=run,target=/run \
--volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true \
--mount volume=usr-share-certs,target=/usr/share/ca-certificates \
--volume var-lib-calico,kind=host,source=/var/lib/calico \
--mount volume=var-lib-calico,target=/var/lib/calico \
--volume var-lib-docker,kind=host,source=/var/lib/docker \
--mount volume=var-lib-docker,target=/var/lib/docker \
--volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,recursive=true \
--mount volume=var-lib-kubelet,target=/var/lib/kubelet \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
--mount volume=opt-cni-bin,target=/opt/cni/bin \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--exec=/usr/local/bin/kubelet -- \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@ -56,6 +73,7 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
@ -91,14 +109,6 @@ storage:
contents:
inline: |
${kubeconfig}
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.17.0
KUBELET_IMAGE_ARGS="--exec=/usr/local/bin/kubelet"
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -116,7 +126,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.17.0 \
docker://k8s.gcr.io/hyperkube:v1.17.3 \
--net=host \
--dns=host \
-- \

View File

@ -78,7 +78,7 @@ data "ct_config" "worker-ignition" {
# Worker Container Linux config
data "template_file" "worker-config" {
template = file("${path.module}/cl/worker.yaml.tmpl")
template = file("${path.module}/cl/worker.yaml")
vars = {
kubeconfig = indent(10, var.kubeconfig)

View File

@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2020 Typhoon Authors
Copyright (c) 2020 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -0,0 +1,23 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
Typhoon is a minimal and free Kubernetes distribution.
* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Free (freedom and cost) and privacy-respecting
* Practical for labs, datacenters, and clouds
Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.17.3 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/cl/google-cloud/).

View File

@ -0,0 +1,93 @@
# TCP Proxy load balancer DNS record
resource "google_dns_record_set" "apiserver" {
# DNS Zone name where record should be created
managed_zone = var.dns_zone_name
# DNS record
name = format("%s.%s.", var.cluster_name, var.dns_zone)
type = "A"
ttl = 300
# IPv4 address of apiserver TCP Proxy load balancer
rrdatas = [google_compute_global_address.apiserver-ipv4.address]
}
# Static IPv4 address for the TCP Proxy Load Balancer
resource "google_compute_global_address" "apiserver-ipv4" {
name = "${var.cluster_name}-apiserver-ip"
ip_version = "IPV4"
}
# Forward IPv4 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "apiserver" {
name = "${var.cluster_name}-apiserver"
ip_address = google_compute_global_address.apiserver-ipv4.address
ip_protocol = "TCP"
port_range = "443"
target = google_compute_target_tcp_proxy.apiserver.self_link
}
# Global TCP Proxy Load Balancer for apiservers
resource "google_compute_target_tcp_proxy" "apiserver" {
name = "${var.cluster_name}-apiserver"
description = "Distribute TCP load across ${var.cluster_name} controllers"
backend_service = google_compute_backend_service.apiserver.self_link
}
# Global backend service backed by unmanaged instance groups
resource "google_compute_backend_service" "apiserver" {
name = "${var.cluster_name}-apiserver"
description = "${var.cluster_name} apiserver service"
protocol = "TCP"
port_name = "apiserver"
session_affinity = "NONE"
timeout_sec = "300"
# controller(s) spread across zonal instance groups
dynamic "backend" {
for_each = google_compute_instance_group.controllers
content {
group = backend.value.self_link
}
}
health_checks = [google_compute_health_check.apiserver.self_link]
}
# Instance group of heterogeneous (unmanaged) controller instances
resource "google_compute_instance_group" "controllers" {
count = min(var.controller_count, length(local.zones))
name = format("%s-controllers-%s", var.cluster_name, element(local.zones, count.index))
zone = element(local.zones, count.index)
named_port {
name = "apiserver"
port = "6443"
}
# add instances in the zone into the instance group
instances = matchkeys(
google_compute_instance.controllers.*.self_link,
google_compute_instance.controllers.*.zone,
[element(local.zones, count.index)],
)
}
# TCP health check for apiserver
resource "google_compute_health_check" "apiserver" {
name = "${var.cluster_name}-apiserver-tcp-health"
description = "TCP health check for kube-apiserver"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 1
unhealthy_threshold = 3
tcp_health_check {
port = "6443"
}
}

View File

@ -0,0 +1,22 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=796194583426593d1a62b6f1bf3f7ffed8fca140"
cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
etcd_servers = google_dns_record_set.etcds.*.name
asset_dir = var.asset_dir
networking = var.networking
network_mtu = 1440
pod_cidr = var.pod_cidr
service_cidr = var.service_cidr
cluster_domain_suffix = var.cluster_domain_suffix
enable_reporting = var.enable_reporting
enable_aggregation = var.enable_aggregation
trusted_certs_dir = "/etc/pki/tls/certs"
// temporary
external_apiserver_port = 443
}

View File

@ -0,0 +1,103 @@
# Discrete DNS records for each controller's private IPv4 for etcd usage
resource "google_dns_record_set" "etcds" {
count = var.controller_count
# DNS Zone name where record should be created
managed_zone = var.dns_zone_name
# DNS record
name = format("%s-etcd%d.%s.", var.cluster_name, count.index, var.dns_zone)
type = "A"
ttl = 300
# private IPv4 address for etcd
rrdatas = [google_compute_instance.controllers.*.network_interface.0.network_ip[count.index]]
}
# Zones in the region
data "google_compute_zones" "all" {
region = var.region
}
locals {
zones = data.google_compute_zones.all.names
controllers_ipv4_public = google_compute_instance.controllers.*.network_interface.0.access_config.0.nat_ip
}
# Controller instances
resource "google_compute_instance" "controllers" {
count = var.controller_count
name = "${var.cluster_name}-controller-${count.index}"
# use a zone in the region and wrap around (e.g. controllers > zones)
zone = element(local.zones, count.index)
machine_type = var.controller_type
metadata = {
user-data = data.ct_config.controller-ignitions.*.rendered[count.index]
}
boot_disk {
auto_delete = true
initialize_params {
image = var.os_image
size = var.disk_size
}
}
network_interface {
network = google_compute_network.network.name
# Ephemeral external IP
access_config {
}
}
can_ip_forward = true
tags = ["${var.cluster_name}-controller"]
lifecycle {
ignore_changes = [metadata]
}
}
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = var.controller_count
content = data.template_file.controller-configs.*.rendered[count.index]
strict = true
snippets = var.controller_snippets
}
# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
count = var.controller_count
template = file("${path.module}/fcc/controller.yaml")
vars = {
# Cannot use cyclic dependencies on controllers or their DNS records
etcd_name = "etcd${count.index}"
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
ssh_authorized_key = var.ssh_authorized_key
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
cluster_domain_suffix = var.cluster_domain_suffix
}
}
data "template_file" "etcds" {
count = var.controller_count
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
vars = {
index = count.index
cluster_name = var.cluster_name
dns_zone = var.dns_zone
}
}

View File

@ -0,0 +1,225 @@
---
variant: fcos
version: 1.0.0
systemd:
units:
- name: etcd-member.service
enabled: true
contents: |
[Unit]
Description=etcd (System Container)
Documentation=https://github.com/coreos/etcd
Wants=network-online.target network.target
After=network-online.target
[Service]
# https://github.com/opencontainers/runc/pull/1807
# Type=notify
# NotifyAccess=exec
Type=exec
Restart=on-failure
RestartSec=10s
TimeoutStartSec=0
LimitNOFILE=40000
ExecStartPre=/bin/mkdir -p /var/lib/etcd
ExecStartPre=-/usr/bin/podman rm etcd
#--volume $${NOTIFY_SOCKET}:/run/systemd/notify \
ExecStart=/usr/bin/podman run --name etcd \
--env-file /etc/etcd/etcd.env \
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.4.3
ExecStop=/usr/bin/podman stop etcd
[Install]
WantedBy=multi-user.target
- name: docker.service
enabled: true
- name: wait-for-dns.service
enabled: true
contents: |
[Unit]
Description=Wait for DNS entries
Before=kubelet.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
[Install]
RequiredBy=kubelet.service
RequiredBy=etcd-member.service
- name: kubelet.service
enabled: true
contents: |
[Unit]
Description=Kubelet via Hyperkube (System Container)
Wants=rpc-statd.service
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
--privileged \
--pid host \
--network host \
--volume /etc/kubernetes:/etc/kubernetes:ro,z \
--volume /usr/lib/os-release:/etc/os-release:ro \
--volume /etc/ssl/certs:/etc/ssl/certs:ro \
--volume /lib/modules:/lib/modules:ro \
--volume /run:/run \
--volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
--volume /etc/pki/tls/certs:/usr/share/ca-certificates:ro \
--volume /var/lib/calico:/var/lib/calico \
--volume /var/lib/docker:/var/lib/docker \
--volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
--volume /var/log:/var/log \
--volume /var/run/lock:/var/run/lock:z \
--volume /opt/cni/bin:/opt/cni/bin:z \
k8s.gcr.io/hyperkube:v1.17.3 kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--cgroup-driver=systemd \
--cgroups-per-qos=true \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--healthz-port=0 \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/podman stop kubelet
Delegate=yes
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: bootstrap.service
contents: |
[Unit]
Description=Kubernetes control plane
ConditionPathExists=!/opt/bootstrap/bootstrap.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootstrap
ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
--volume /etc/kubernetes/bootstrap-secrets:/etc/kubernetes/secrets:ro,Z \
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
--entrypoint=/apply \
k8s.gcr.io/hyperkube:v1.17.3
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
directories:
- path: /etc/kubernetes
- path: /opt/bootstrap
files:
- path: /etc/kubernetes/kubeconfig
mode: 0644
contents:
inline: |
${kubeconfig}
- path: /opt/bootstrap/layout
mode: 0544
contents:
inline: |
#!/bin/bash -e
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
awk '/#####/ {filename=$2; next} {print > filename}' assets
mkdir -p /etc/ssl/etcd/etcd
mkdir -p /etc/kubernetes/bootstrap-secrets
mv tls/etcd/{peer*,server*} /etc/ssl/etcd/etcd/
mv tls/etcd/etcd-client* /etc/kubernetes/bootstrap-secrets/
chown -R etcd:etcd /etc/ssl/etcd
chmod -R 500 /etc/ssl/etcd
mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
sudo mkdir -p /etc/kubernetes/manifests
sudo mv static-manifests/* /etc/kubernetes/manifests/
sudo mkdir -p /opt/bootstrap/assets
sudo mv manifests /opt/bootstrap/assets/manifests
sudo mv manifests-networking /opt/bootstrap/assets/manifests-networking
rm -rf assets auth static-manifests tls
- path: /opt/bootstrap/apply
mode: 0544
contents:
inline: |
#!/bin/bash -e
export KUBECONFIG=/etc/kubernetes/secrets/kubeconfig
until kubectl version; do
echo "Waiting for static pod control plane"
sleep 5
done
until kubectl apply -f /assets/manifests -R; do
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/max-user-watches.conf
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/systemd/system.conf.d/accounting.conf
contents:
inline: |
[Manager]
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultBlockIOAccounting=yes
- path: /etc/sysconfig/docker
mode: 0644
overwrite: true
contents:
inline: |
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled \
--log-driver=json-file \
--live-restore \
--default-ulimit nofile=1024:1024 \
--init-path /usr/libexec/docker/docker-init \
--userland-proxy-path /usr/libexec/docker/docker-proxy \
"
- path: /etc/etcd/etcd.env
mode: 0644
contents:
inline: |
# TODO: Use a systemd dropin once podman v1.4.5 is avail.
NOTIFY_SOCKET=/run/systemd/notify
ETCD_NAME=${etcd_name}
ETCD_DATA_DIR=/var/lib/etcd
ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
ETCD_STRICT_RECONFIG_CHECK=true
ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
passwd:
users:
- name: core
ssh_authorized_keys:
- ${ssh_authorized_key}
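As a minimal sketch, a Fedora CoreOS Config snippet like the following could be passed via the controller_snippets variable (e.g. controller_snippets = [file("snippet.yaml")]) and merged into the rendered config by the ct provider. The path and sysctl value are illustrative only, not part of the module:

variant: fcos
version: 1.0.0
storage:
  files:
    - path: /etc/sysctl.d/example.conf
      mode: 0644
      contents:
        inline: |
          # hypothetical tuning value, shown only to illustrate snippet merging
          vm.max_map_count=262144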


@@ -0,0 +1,123 @@
# Static IPv4 address for Ingress Load Balancing
resource "google_compute_global_address" "ingress-ipv4" {
name = "${var.cluster_name}-ingress-ipv4"
ip_version = "IPV4"
}
# Static IPv6 address for Ingress Load Balancing
resource "google_compute_global_address" "ingress-ipv6" {
name = "${var.cluster_name}-ingress-ipv6"
ip_version = "IPV6"
}
# Forward IPv4 TCP traffic to the HTTP proxy load balancer
# Google Cloud does not allow TCP proxies for port 80. Must use HTTP proxy.
resource "google_compute_global_forwarding_rule" "ingress-http-ipv4" {
name = "${var.cluster_name}-ingress-http-ipv4"
ip_address = google_compute_global_address.ingress-ipv4.address
ip_protocol = "TCP"
port_range = "80"
target = google_compute_target_http_proxy.ingress-http.self_link
}
# Forward IPv4 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "ingress-https-ipv4" {
name = "${var.cluster_name}-ingress-https-ipv4"
ip_address = google_compute_global_address.ingress-ipv4.address
ip_protocol = "TCP"
port_range = "443"
target = google_compute_target_tcp_proxy.ingress-https.self_link
}
# Forward IPv6 TCP traffic to the HTTP proxy load balancer
# Google Cloud does not allow TCP proxies for port 80. Must use HTTP proxy.
resource "google_compute_global_forwarding_rule" "ingress-http-ipv6" {
name = "${var.cluster_name}-ingress-http-ipv6"
ip_address = google_compute_global_address.ingress-ipv6.address
ip_protocol = "TCP"
port_range = "80"
target = google_compute_target_http_proxy.ingress-http.self_link
}
# Forward IPv6 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "ingress-https-ipv6" {
name = "${var.cluster_name}-ingress-https-ipv6"
ip_address = google_compute_global_address.ingress-ipv6.address
ip_protocol = "TCP"
port_range = "443"
target = google_compute_target_tcp_proxy.ingress-https.self_link
}
# HTTP proxy load balancer for ingress controllers
resource "google_compute_target_http_proxy" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "Distribute HTTP load across ${var.cluster_name} workers"
url_map = google_compute_url_map.ingress-http.self_link
}
# TCP proxy load balancer for ingress controllers
resource "google_compute_target_tcp_proxy" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "Distribute HTTPS load across ${var.cluster_name} workers"
backend_service = google_compute_backend_service.ingress-https.self_link
}
# HTTP URL Map (required)
resource "google_compute_url_map" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
# Do not add host/path rules for applications here. Use Ingress resources.
default_service = google_compute_backend_service.ingress-http.self_link
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "${var.cluster_name} ingress service"
protocol = "HTTP"
port_name = "http"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = module.workers.instance_group
}
health_checks = [google_compute_health_check.ingress.self_link]
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "${var.cluster_name} ingress service"
protocol = "TCP"
port_name = "https"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = module.workers.instance_group
}
health_checks = [google_compute_health_check.ingress.self_link]
}
# Ingress HTTP Health Check
resource "google_compute_health_check" "ingress" {
name = "${var.cluster_name}-ingress-health"
description = "Health check for Ingress controller"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 4
http_health_check {
port = 10254
request_path = "/healthz"
}
}
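The global addresses above are exposed as the ingress_static_ipv4 and ingress_static_ipv6 outputs, so application DNS records can point at them. A minimal sketch, assuming a hypothetical module named "yavin" and an existing managed zone called "example-zone":

resource "google_dns_record_set" "app" {
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = [module.yavin.ingress_static_ipv4]
}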


@@ -0,0 +1,193 @@
resource "google_compute_network" "network" {
name = var.cluster_name
description = "Network for the ${var.cluster_name} cluster"
auto_create_subnetworks = true
timeouts {
delete = "6m"
}
}
resource "google_compute_firewall" "allow-ssh" {
name = "${var.cluster_name}-allow-ssh"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [22]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
resource "google_compute_firewall" "internal-etcd" {
name = "${var.cluster_name}-internal-etcd"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [2379, 2380]
}
source_tags = ["${var.cluster_name}-controller"]
target_tags = ["${var.cluster_name}-controller"]
}
# Allow Prometheus to scrape etcd metrics
resource "google_compute_firewall" "internal-etcd-metrics" {
name = "${var.cluster_name}-internal-etcd-metrics"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [2381]
}
source_tags = ["${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller"]
}
# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
resource "google_compute_firewall" "internal-kube-metrics" {
name = "${var.cluster_name}-internal-kube-metrics"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [10251, 10252]
}
source_tags = ["${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller"]
}
resource "google_compute_firewall" "allow-apiserver" {
name = "${var.cluster_name}-allow-apiserver"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [6443]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["${var.cluster_name}-controller"]
}
# BGP and IPIP
# https://docs.projectcalico.org/latest/reference/public-cloud/gce
resource "google_compute_firewall" "internal-bgp" {
count = var.networking != "flannel" ? 1 : 0
name = "${var.cluster_name}-internal-bgp"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = ["179"]
}
allow {
protocol = "ipip"
}
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# flannel VXLAN
resource "google_compute_firewall" "internal-vxlan" {
count = var.networking == "flannel" ? 1 : 0
name = "${var.cluster_name}-internal-vxlan"
network = google_compute_network.network.name
allow {
protocol = "udp"
ports = [4789]
}
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Allow Prometheus to scrape node-exporter daemonset
resource "google_compute_firewall" "internal-node-exporter" {
name = "${var.cluster_name}-internal-node-exporter"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [9100]
}
source_tags = ["${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Allow Prometheus to scrape kube-proxy metrics
resource "google_compute_firewall" "internal-kube-proxy" {
name = "${var.cluster_name}-internal-kube-proxy"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [10249]
}
source_tags = ["${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Allow apiserver to access kubelets for exec, log, port-forward
resource "google_compute_firewall" "internal-kubelet" {
name = "${var.cluster_name}-internal-kubelet"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [10250]
}
# allow Prometheus to scrape kubelet metrics too
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
}
# Workers
resource "google_compute_firewall" "allow-ingress" {
name = "${var.cluster_name}-allow-ingress"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [80, 443]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["${var.cluster_name}-worker"]
}
resource "google_compute_firewall" "google-ingress-health-checks" {
name = "${var.cluster_name}-ingress-health"
network = google_compute_network.network.name
allow {
protocol = "tcp"
ports = [10254]
}
# https://cloud.google.com/load-balancing/docs/health-check-concepts#method
source_ranges = [
"35.191.0.0/16",
"130.211.0.0/22",
"35.191.0.0/16",
"209.85.152.0/22",
"209.85.204.0/22",
]
target_tags = ["${var.cluster_name}-worker"]
}


@@ -0,0 +1,44 @@
output "kubeconfig-admin" {
value = module.bootstrap.kubeconfig-admin
}
# Outputs for Kubernetes Ingress
output "ingress_static_ipv4" {
description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
value = google_compute_global_address.ingress-ipv4.address
}
output "ingress_static_ipv6" {
description = "Global IPv6 address for proxy load balancing to the nearest Ingress controller"
value = google_compute_global_address.ingress-ipv6.address
}
# Outputs for worker pools
output "network_name" {
value = google_compute_network.network.name
}
output "kubeconfig" {
value = module.bootstrap.kubeconfig-kubelet
}
# Outputs for custom firewalling
output "network_self_link" {
value = google_compute_network.network.self_link
}
# Outputs for custom load balancing
output "worker_instance_group" {
description = "Worker managed instance group full URL"
value = module.workers.instance_group
}
output "worker_target_pool" {
description = "Worker target pool self link"
value = module.workers.target_pool
}
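The network_name and network_self_link outputs above are intended for custom firewalling. A minimal sketch of an extra rule, assuming a hypothetical module named "yavin" and an illustrative port:

resource "google_compute_firewall" "custom" {
  name    = "yavin-custom"
  network = module.yavin.network_name

  allow {
    protocol = "tcp"
    ports    = [9091]
  }

  source_tags = ["yavin-worker"]
  target_tags = ["yavin-worker"]
}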


@@ -0,0 +1,58 @@
locals {
# format assets for distribution
assets_bundle = [
# header with the unpack location
for key, value in module.bootstrap.assets_dist :
format("##### %s\n%s", key, value)
]
}
# Secure copy assets to controllers.
resource "null_resource" "copy-controller-secrets" {
count = var.controller_count
depends_on = [
module.bootstrap,
]
connection {
type = "ssh"
host = local.controllers_ipv4_public[count.index]
user = "core"
timeout = "15m"
}
provisioner "file" {
content = join("\n", local.assets_bundle)
destination = "$HOME/assets"
}
provisioner "remote-exec" {
inline = [
"sudo /opt/bootstrap/layout",
]
}
}
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
depends_on = [
null_resource.copy-controller-secrets,
module.workers,
google_dns_record_set.apiserver,
]
connection {
type = "ssh"
host = local.controllers_ipv4_public[0]
user = "core"
timeout = "15m"
}
provisioner "remote-exec" {
inline = [
"sudo systemctl start bootstrap",
]
}
}
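The assets bundle uploaded to $HOME/assets by copy-controller-secrets is a single file in which each asset is prefixed by a "##### <path>" header, per the format() call in the locals block above; /opt/bootstrap/layout then splits it back into files with awk. Roughly, the uploaded bundle looks like the following, with paths and placeholder contents shown only for illustration:

##### auth/kubeconfig
<admin kubeconfig contents>
##### tls/k8s/ca.crt
<certificate PEM>
##### manifests/coredns/deployment.yaml
<manifest YAML>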


@@ -0,0 +1,138 @@
variable "cluster_name" {
type = string
description = "Unique cluster name (prepended to dns_zone)"
}
# Google Cloud
variable "region" {
type = string
description = "Google Cloud Region (e.g. us-central1, see `gcloud compute regions list`)"
}
variable "dns_zone" {
type = string
description = "Google Cloud DNS Zone (e.g. google-cloud.example.com)"
}
variable "dns_zone_name" {
type = string
description = "Google Cloud DNS Zone name (e.g. example-zone)"
}
# instances
variable "controller_count" {
type = number
description = "Number of controllers (i.e. masters)"
default = 1
}
variable "worker_count" {
type = number
description = "Number of workers"
default = 1
}
variable "controller_type" {
type = string
description = "Machine type for controllers (see `gcloud compute machine-types list`)"
default = "n1-standard-1"
}
variable "worker_type" {
type = string
description = "Machine type for controllers (see `gcloud compute machine-types list`)"
default = "n1-standard-1"
}
variable "os_image" {
type = string
description = "Fedora CoreOS image for compute instances (e.g. fedora-coreos)"
}
variable "disk_size" {
type = number
description = "Size of the disk in GB"
default = 40
}
variable "worker_preemptible" {
type = bool
description = "If enabled, Compute Engine will terminate workers randomly within 24 hours"
default = false
}
variable "controller_snippets" {
type = list(string)
description = "Controller Fedora CoreOS Config snippets"
default = []
}
variable "worker_snippets" {
type = list(string)
description = "Worker Fedora CoreOS Config snippets"
default = []
}
# configuration
variable "ssh_authorized_key" {
type = string
description = "SSH public key for user 'core'"
}
variable "asset_dir" {
type = string
description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
default = ""
}
variable "networking" {
type = string
description = "Choice of networking provider (flannel or calico)"
default = "calico"
}
variable "pod_cidr" {
type = string
description = "CIDR IPv4 range to assign Kubernetes pods"
default = "10.2.0.0/16"
}
variable "service_cidr" {
type = string
description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
default = "10.3.0.0/16"
}
variable "enable_reporting" {
type = bool
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = false
}
variable "enable_aggregation" {
type = bool
description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
default = false
}
variable "worker_node_labels" {
type = list(string)
description = "List of initial worker node labels"
default = []
}
# unofficial, undocumented, unsupported
variable "cluster_domain_suffix" {
type = string
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
default = "cluster.local"
}
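For context, a hedged sketch of how this module might be instantiated with the required variables above; the module source ref, image name, and SSH key are placeholders rather than values from these files:

module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=<tag>"

  # Google Cloud
  cluster_name  = "yavin"
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # configuration
  os_image           = "fedora-coreos"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional
  worker_count = 2
}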


@@ -0,0 +1,11 @@
# Terraform version and plugin versions
terraform {
required_version = "~> 0.12.6"
required_providers {
google = ">= 2.19, < 4.0"
ct = "~> 0.3"
template = "~> 2.1"
null = "~> 2.1"
}
}
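Beyond the version constraints above, a google provider configuration is still needed. A minimal sketch, with the project ID as a placeholder and credentials assumed to come from a service account key or the GOOGLE_APPLICATION_CREDENTIALS environment variable:

provider "google" {
  project = "project-id"
  region  = "us-central1"
}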

Some files were not shown because too many files have changed in this diff.