Mirror of https://github.com/puppetmaster/typhoon.git, synced 2025-08-10 07:26:03 +02:00.
Compare commits: 79 commits, dbdc3fc850 through cc29530ba0.
**CHANGES.md** (176 changed lines)

Notable changes between versions.
## v1.11.2

* Kubernetes [v1.11.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112)
* Update etcd from v3.3.8 to [v3.3.9](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.3.md#v339-2018-07-24)
* Use kubernetes-incubator/bootkube v0.13.0

#### Bare-Metal

* Introduce [Container Linux Config snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) on bare-metal
  * Validate and additively merge custom Container Linux Configs during `terraform plan`
  * Define files, systemd units, dropins, networkd configs, mounts, users, and more
  * [Require](https://typhoon.psdn.io/cl/bare-metal/#terraform-setup) `terraform-provider-ct` plugin v0.2.1 (**action required!**)

#### Fedora Atomic

* Fix Fedora Atomic modules' Kubelet version ([#270](https://github.com/poseidon/typhoon/issues/270))
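The snippet mechanism described above can be sketched in Terraform. The `clc_snippets` map (keyed by machine name) follows the customization docs linked above; the cluster name, machine key, and snippet contents here are illustrative placeholders, not part of any real cluster:

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"

  # ...required cluster inputs elided...

  # Map from machine name to a list of Container Linux Config snippets,
  # validated and merged into the rendered config during terraform plan
  clc_snippets = {
    "node1" = [<<EOF
storage:
  files:
    - path: /opt/hello
      filesystem: root
      mode: 0644
      contents:
        inline: Hello from a CLC snippet
EOF
    ]
  }
}
```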
#### Addons

* Update nginx-ingress from 0.16.2 to 0.17.1
* Add nginx-ingress manifests for bare-metal
* Update Grafana from 5.2.1 to 5.2.2
* Update heapster from v1.5.3 to v1.5.4

## v1.11.1

* Kubernetes [v1.11.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1111)

#### Addons

* Update Prometheus from v2.3.1 to v2.3.2

#### Errata

* Fedora Atomic modules shipped with Kubelet v1.11.0, instead of v1.11.1. Fixed in [#270](https://github.com/poseidon/typhoon/issues/270).

## v1.11.0

* Kubernetes [v1.11.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1110)
* Force apiserver to stop listening on `127.0.0.1:8080`
* Replace `kube-dns` with [CoreDNS](https://coredns.io/) ([#261](https://github.com/poseidon/typhoon/pull/261))
  * Edit the `coredns` ConfigMap to [customize](https://coredns.io/plugins/)
  * CoreDNS doesn't use a resizer. For large clusters, scaling may be required.
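As an illustration of the customization bullet, the Corefile inside the `coredns` ConfigMap selects and configures CoreDNS plugins. This is a generic sketch of a CoreDNS-1.x-era Corefile, not the exact ConfigMap bootkube renders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
```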
#### AWS

* Update from Fedora Atomic 27 to 28 ([#258](https://github.com/poseidon/typhoon/pull/258))

#### Bare-Metal

* Update from Fedora Atomic 27 to 28 ([#263](https://github.com/poseidon/typhoon/pull/263))

#### Google Cloud

* Promote Google Cloud to stable
* Update from Fedora Atomic 27 to 28 ([#259](https://github.com/poseidon/typhoon/pull/259))
* Remove `ingress_static_ip` module output. Use `ingress_static_ipv4`.
* Remove `controllers_ipv4_public` module output.

#### Addons

* Update nginx-ingress from 0.15.0 to 0.16.2
* Update Grafana from 5.1.4 to [5.2.1](http://docs.grafana.org/guides/whats-new-in-v5-2/)
* Update heapster from v1.5.2 to v1.5.3
## v1.10.5

* Kubernetes [v1.10.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105)
* Update etcd from v3.3.6 to v3.3.8 ([#243](https://github.com/poseidon/typhoon/pull/243), [#247](https://github.com/poseidon/typhoon/pull/247))

#### AWS

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Combine apiserver and ingress NLBs ([#249](https://github.com/poseidon/typhoon/pull/249))
  * Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer.
  * Ingress addon users may keep using CNAME records to the `ingress_dns_name` module output (few million RPS)
  * Ingress users with heavy traffic (many million RPS) should create a separate NLB(s)
  * Worker pools no longer include an extraneous load balancer. Remove the worker module's `ingress_dns_name` output.
* Disable detailed (paid) monitoring on worker nodes ([#251](https://github.com/poseidon/typhoon/pull/251))
  * Favor Prometheus for cloud-agnostic metrics, aggregation, and alerting
* Add `worker_target_group_http` and `worker_target_group_https` module outputs to allow custom load balancing
* Add `target_group_http` and `target_group_https` worker module outputs to allow custom load balancing
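A hedged sketch of the "custom load balancing" these outputs enable: attach the cluster's worker target group (assumed here to be an ARN) to an extra, user-managed NLB. The `module.yavin` cluster name and the subnets variable are placeholders, not part of Typhoon itself:

```tf
# Hypothetical additional network load balancer for ingress traffic
resource "aws_lb" "custom_ingress" {
  name               = "custom-ingress"
  load_balancer_type = "network"
  internal           = false
  subnets            = ["${var.subnet_ids}"]
}

# Forward TCP/80 to the Typhoon cluster's worker target group output
resource "aws_lb_listener" "custom_http" {
  load_balancer_arn = "${aws_lb.custom_ingress.arn}"
  protocol          = "TCP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = "${module.yavin.worker_target_group_http}"
  }
}
```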
#### Bare-Metal

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
  * Users who exposed kube-apiserver on a WAN via their router/load-balancer will need to adjust its configuration (e.g. DNAT 6443). Most apiservers are on a LAN (internal, VPN-only, etc), so if you didn't specially configure network gear for 443, no change is needed. (possible action required)
* Fix possible deadlock when provisioning clusters larger than 10 nodes ([#244](https://github.com/poseidon/typhoon/pull/244))

#### DigitalOcean

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
  * Update firewall rules and generated kubeconfigs

#### Google Cloud

* Use global HTTP and TCP proxy load balancing for Kubernetes Ingress ([#252](https://github.com/poseidon/typhoon/pull/252))
  * Switch Ingress from regional network load balancers to global HTTP/TCP proxy load balancing
  * Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
  * Worker pools no longer include an extraneous load balancer. Remove the worker module's `ingress_static_ip` output.
* Allow using the nginx-ingress addon on Fedora Atomic clusters ([#200](https://github.com/poseidon/typhoon/issues/200))
* Add `worker_instance_group` module output to allow custom global load balancing
* Add `instance_group` worker module output to allow custom global load balancing
* Deprecate `ingress_static_ip` module output. Add `ingress_static_ipv4` module output instead.
* Deprecate `controllers_ipv4_public` module output
#### Addons

* Update CLUO from v0.6.0 to v0.7.0 ([#242](https://github.com/poseidon/typhoon/pull/242))
* Update Prometheus from v2.3.0 to v2.3.1
* Update Grafana from 5.1.3 to 5.1.4
* Drop `hostNetwork` from the nginx-ingress addon
  * Both flannel and Calico support host ports via `portmap`
  * Allows writing NetworkPolicies that reference ingress pods in `from` or `to`. HostNetwork pods were difficult to write network policy for, since they could circumvent the CNI network to communicate with pods on the same node.
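The NetworkPolicy point can be illustrated with a sketch. It assumes a workload labeled `app: my-app` (hypothetical) and relies on the `name: ingress` label that the addon manifests place on the ingress namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: default
spec:
  # Select the pods to protect (hypothetical app label)
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    # Admit traffic from pods in namespaces labeled name=ingress;
    # with hostNetwork dropped, nginx-ingress pods are now on the
    # CNI network and can be matched by such selectors
    - namespaceSelector:
        matchLabels:
          name: ingress
```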
## v1.10.4

* Kubernetes [v1.10.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1104)
* Update etcd from v3.3.5 to v3.3.6
* Update Calico from v3.1.2 to v3.1.3

#### Addons

* Update Prometheus from v2.2.1 to v2.3.0
* Add Prometheus liveness and readiness probes
* Annotate Grafana service so Prometheus scrapes metrics
* Label namespaces to ease writing Network Policies
## v1.10.3

* Kubernetes [v1.10.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103)
* Add [Flatcar Linux](https://docs.flatcar-linux.org/) (Container Linux derivative) as an option for AWS and bare-metal (thanks @kinvolk folks)
* Allow bearer token authentication to the Kubelet ([#216](https://github.com/poseidon/typhoon/issues/216))
  * Require Webhook authorization to the Kubelet
  * Switch apiserver X509 client cert org to satisfy the new authorization requirement
* Require Terraform v0.11.x and drop support for v0.10.x ([migration guide](https://typhoon.psdn.io/topics/maintenance/#terraform-v011x))
* Update etcd from v3.3.4 to v3.3.5 ([#213](https://github.com/poseidon/typhoon/pull/213))
* Update Calico from v3.1.1 to v3.1.2

#### AWS

* Allow Flatcar Linux by setting `os_image` to flatcar-stable (default), flatcar-beta, or flatcar-alpha ([#211](https://github.com/poseidon/typhoon/pull/211))
  * Replace `os_channel` variable with `os_image` to align naming across clouds
  * Please change values stable, beta, or alpha to coreos-stable, coreos-beta, or coreos-alpha (**action required!**)
* Allow preemptible workers via spot instances ([#202](https://github.com/poseidon/typhoon/pull/202))
  * Add `worker_price` to allow worker spot instances. Defaults to an empty string, so the worker autoscaling group uses regular on-demand instances.
  * Add `spot_price` to the internal `workers` module for spot [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
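A sketch of a spot worker pool using the internal workers module. The `spot_price` variable comes from the bullet above; the module path follows the repository layout, and the remaining required pool inputs are elided as placeholders:

```tf
module "yavin-spot-workers" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.3"

  # ...required pool inputs (name, VPC/subnets, SSH key, etc.) elided...

  # Bid up to $0.10/hour for spot instances; an empty string (the
  # default) would request regular on-demand instances instead
  spot_price = "0.10"
}
```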
#### Bare-Metal

* Allow Flatcar Linux by setting `os_channel` to flatcar-stable, flatcar-beta, or flatcar-alpha ([#220](https://github.com/poseidon/typhoon/pull/220))
  * Replace `container_linux_channel` variable with `os_channel`
  * Please change values stable, beta, or alpha to coreos-stable, coreos-beta, or coreos-alpha (**action required!**)
  * Replace `container_linux_version` variable with `os_version`
* Add `network_ip_autodetection_method` variable for Calico host IPv4 address detection
  * Use Calico's default "first-found" to support single-NIC and bonded-NIC nodes
  * Allow [alternative](https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods) methods for multi-NIC nodes, like can-reach=IP or interface=REGEX
* Deprecate `container_linux_oem` variable
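A sketch of pinning Calico's host IPv4 autodetection on multi-NIC bare-metal nodes. The variable name comes from the bullet above; the module block, cluster name, and gateway address are illustrative assumptions:

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.3"

  # ...required cluster inputs elided...

  # Pick the NIC that can reach the gateway instead of "first-found"
  network_ip_autodetection_method = "can-reach=192.168.1.1"
}
```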
#### DigitalOcean

* Update the Fedora Atomic module to use Fedora Atomic 28 ([#225](https://github.com/poseidon/typhoon/pull/225))
  * Fedora Atomic 27 images disappeared from DigitalOcean and forced this early update

#### Addons

* Fix Prometheus data directory location ([#203](https://github.com/poseidon/typhoon/pull/203))
* Configure Prometheus to scrape Kubelets directly with bearer token auth instead of proxying through the apiserver ([#217](https://github.com/poseidon/typhoon/pull/217))
  * Security improvement: drop RBAC permission from `nodes/proxy` to `nodes/metrics`
  * Scale: remove per-node proxied scrape load from the apiserver
* Update Grafana from v5.0.4 to v5.1.3 ([#208](https://github.com/poseidon/typhoon/pull/208))
* Disable Grafana Google Analytics by default ([#214](https://github.com/poseidon/typhoon/issues/214))
* Update nginx-ingress from 0.14.0 to 0.15.0
* Annotate nginx-ingress service so Prometheus auto-discovers and scrapes service endpoints ([#222](https://github.com/poseidon/typhoon/pull/222))

## v1.10.2

* Kubernetes [v1.10.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1102)
* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))
* Update Calico from v3.0.4 to v3.1.1 ([#197](https://github.com/poseidon/typhoon/pull/197))
  * https://www.projectcalico.org/announcing-calico-v3-1/
  * https://github.com/projectcalico/calico/releases/tag/v3.1.0
**README.md** (18 changed lines)

```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@@ -29,8 +29,10 @@ Typhoon provides a Terraform Module for each supported operating system and platform
 | Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
 | Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
 | Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
-| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
+| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
-| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | very alpha |
+| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
 
+The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
+
 ## Documentation
```

````diff
@@ -44,7 +46,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     google = "google.default"
````

````diff
@@ -86,9 +88,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME                                       STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  Ready   6m   v1.10.2
+yavin-controller-0.c.example-com.internal  Ready   6m   v1.11.2
-yavin-worker-jrbf.c.example-com.internal   Ready   5m   v1.10.2
+yavin-worker-jrbf.c.example-com.internal   Ready   5m   v1.11.2
-yavin-worker-mzdm.c.example-com.internal   Ready   5m   v1.10.2
+yavin-worker-mzdm.c.example-com.internal   Ready   5m   v1.11.2
 ```
 
 List the pods.
````

```diff
@@ -99,10 +101,10 @@ NAMESPACE NAME READY STATUS RESTARTS
 kube-system   calico-node-1cs8z                         2/2  Running  0  6m
 kube-system   calico-node-d1l5b                         2/2  Running  0  6m
 kube-system   calico-node-sp9ps                         2/2  Running  0  6m
+kube-system   coredns-1187388186-zj5dl                  1/1  Running  0  6m
 kube-system   kube-apiserver-zppls                      1/1  Running  0  6m
 kube-system   kube-controller-manager-3271970485-gh9kt  1/1  Running  0  6m
 kube-system   kube-controller-manager-3271970485-h90v8  1/1  Running  1  6m
-kube-system   kube-dns-1187388186-zj5dl                 3/3  Running  0  6m
 kube-system   kube-proxy-117v6                          1/1  Running  0  6m
 kube-system   kube-proxy-9886n                          1/1  Running  0  6m
 kube-system   kube-proxy-njn47                          1/1  Running  0  6m
```
**CLUO update-agent manifest** (image bump)

```diff
@@ -18,7 +18,7 @@ spec:
 spec:
   containers:
   - name: update-agent
-    image: quay.io/coreos/container-linux-update-operator:v0.6.0
+    image: quay.io/coreos/container-linux-update-operator:v0.7.0
     command:
     - "/bin/update-agent"
     volumeMounts:
```
**CLUO update-operator manifest** (image bump)

```diff
@@ -15,7 +15,7 @@ spec:
 spec:
   containers:
   - name: update-operator
-    image: quay.io/coreos/container-linux-update-operator:v0.6.0
+    image: quay.io/coreos/container-linux-update-operator:v0.7.0
     command:
     - "/bin/update-operator"
     env:
```
**Grafana deployment manifest** (image bump, disable analytics)

```diff
@@ -21,7 +21,7 @@ spec:
 spec:
   containers:
   - name: grafana
-    image: grafana/grafana:5.0.4
+    image: grafana/grafana:5.2.2
     env:
     - name: GF_SERVER_HTTP_PORT
       value: "8080"
@@ -31,6 +31,8 @@ spec:
       value: "true"
     - name: GF_AUTH_ANONYMOUS_ORG_ROLE
       value: Viewer
+    - name: GF_ANALYTICS_REPORTING_ENABLED
+      value: "false"
     ports:
     - name: http
       containerPort: 8080
```
**Grafana service manifest** (add Prometheus scrape annotations)

```diff
@@ -3,6 +3,9 @@ kind: Service
 kind: Service
 metadata:
   name: grafana
   namespace: monitoring
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/port: '8080'
 spec:
   type: ClusterIP
   selector:
```
**heapster deployment manifest** (image bump, seccomp annotation)

```diff
@@ -14,11 +14,13 @@ spec:
     labels:
       name: heapster
       phase: prod
+    annotations:
+      seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
   spec:
     serviceAccountName: heapster
     containers:
     - name: heapster
-      image: k8s.gcr.io/heapster-amd64:v1.5.2
+      image: k8s.gcr.io/heapster-amd64:v1.5.4
       command:
       - /heapster
       - --source=kubernetes.summary_api:''
```
**nginx-ingress namespace manifest** (label the namespace)

```diff
@@ -2,3 +2,5 @@ apiVersion: v1
 kind: Namespace
 metadata:
   name: ingress
+  labels:
+    name: ingress
```
**nginx-ingress deployment manifest** (drop hostNetwork, image bump, add securityContext)

```diff
@@ -20,10 +20,9 @@ spec:
 spec:
   nodeSelector:
     node-role.kubernetes.io/node: ""
-  hostNetwork: true
   containers:
   - name: nginx-ingress-controller
-    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
+    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
     args:
     - /nginx-ingress-controller
     - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,5 +66,12 @@ spec:
         periodSeconds: 10
         successThreshold: 1
         timeoutSeconds: 1
+    securityContext:
+      capabilities:
+        add:
+        - NET_BIND_SERVICE
+        drop:
+        - ALL
+      runAsUser: 33 # www-data
   restartPolicy: Always
   terminationGracePeriodSeconds: 60
```
**nginx-ingress service manifest** (add Prometheus scrape annotations)

```diff
@@ -3,6 +3,9 @@ kind: Service
 kind: Service
 metadata:
   name: nginx-ingress-controller
   namespace: ingress
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/port: '10254'
 spec:
   type: ClusterIP
   selector:
```
**addons/nginx-ingress/bare-metal/0-namespace.yaml** (new file, 6 lines)

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
  labels:
    name: ingress
```
**addons/nginx-ingress/bare-metal/default-backend/deployment.yaml** (new file, 40 lines)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      name: default-backend
      phase: prod
  template:
    metadata:
      labels:
        name: default-backend
        phase: prod
    spec:
      containers:
      - name: default-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: k8s.gcr.io/defaultbackend:1.4
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
      terminationGracePeriodSeconds: 60
```
**addons/nginx-ingress/bare-metal/default-backend/service.yaml** (new file, 15 lines)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: ingress
spec:
  type: ClusterIP
  selector:
    name: default-backend
    phase: prod
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
```
**addons/nginx-ingress/bare-metal/deployment.yaml** (new file, 73 lines)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller-public
  namespace: ingress
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      name: ingress-controller-public
      phase: prod
  template:
    metadata:
      labels:
        name: ingress-controller-public
        phase: prod
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
        - --ingress-class=public
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: health
          containerPort: 10254
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
          timeoutSeconds: 1
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
```
**addons/nginx-ingress/bare-metal/rbac/cluster-role-binding.yaml** (new file, 12 lines)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default
```
**addons/nginx-ingress/bare-metal/rbac/cluster-role.yaml** (new file, 51 lines)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
```
**addons/nginx-ingress/bare-metal/rbac/role-binding.yaml** (new file, 13 lines)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default
```
**addons/nginx-ingress/bare-metal/rbac/role.yaml** (new file, 41 lines)

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress
  namespace: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-public"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
```
23
addons/nginx-ingress/bare-metal/service.yaml
Normal file
23
addons/nginx-ingress/bare-metal/service.yaml
Normal file
@ -0,0 +1,23 @@
|
|||||||
|
apiVersion: v1
|
||||||
|
kind: Service
|
||||||
|
metadata:
|
||||||
|
name: ingress-controller-public
|
||||||
|
namespace: ingress
|
||||||
|
annotations:
|
||||||
|
prometheus.io/scrape: 'true'
|
||||||
|
prometheus.io/port: '10254'
|
||||||
|
spec:
|
||||||
|
type: ClusterIP
|
||||||
|
clusterIP: 10.3.0.12
|
||||||
|
selector:
|
||||||
|
name: ingress-controller-public
|
||||||
|
phase: prod
|
||||||
|
ports:
|
||||||
|
- name: http
|
||||||
|
protocol: TCP
|
||||||
|
port: 80
|
||||||
|
targetPort: 80
|
||||||
|
- name: https
|
||||||
|
protocol: TCP
|
||||||
|
port: 443
|
||||||
|
targetPort: 443
|
@@ -2,3 +2,5 @@ apiVersion: v1
 kind: Namespace
 metadata:
   name: ingress
+  labels:
+    name: ingress

@@ -20,10 +20,9 @@ spec:
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
-      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,5 +66,12 @@ spec:
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
+        securityContext:
+          capabilities:
+            add:
+            - NET_BIND_SERVICE
+            drop:
+            - ALL
+          runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

@@ -3,6 +3,9 @@ kind: Service
 metadata:
   name: nginx-ingress-controller
   namespace: ingress
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/port: '10254'
 spec:
   type: ClusterIP
   selector:
@@ -2,3 +2,5 @@ apiVersion: v1
 kind: Namespace
 metadata:
   name: ingress
+  labels:
+    name: ingress

@@ -20,10 +20,9 @@ spec:
     spec:
      nodeSelector:
        node-role.kubernetes.io/node: ""
-      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,5 +66,12 @@ spec:
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
+        securityContext:
+          capabilities:
+            add:
+            - NET_BIND_SERVICE
+            drop:
+            - ALL
+          runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

@@ -3,6 +3,9 @@ kind: Service
 metadata:
   name: nginx-ingress-controller
   namespace: ingress
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/port: '10254'
 spec:
   type: ClusterIP
   selector:
@@ -2,3 +2,5 @@ apiVersion: v1
 kind: Namespace
 metadata:
   name: monitoring
+  labels:
+    name: monitoring
@@ -56,12 +56,7 @@ data:
            target_label: job

        # Scrape config for node (i.e. kubelet) /metrics (e.g. 'kubelet_'). Explore
-       # metrics from a node by scraping kubelet (127.0.0.1:10255/metrics).
-       #
-       # Rather than connecting directly to the node, the scrape is proxied though the
-       # Kubernetes apiserver. This means it will work if Prometheus is running out of
-       # cluster, or can't connect to nodes for some other reason (e.g. because of
-       # firewalling).
+       # metrics from a node by scraping kubelet (127.0.0.1:10250/metrics).
        - job_name: 'kubelet'
          kubernetes_sd_configs:
          - role: node
@@ -69,50 +64,34 @@ data:
          scheme: https
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+           # Kubelet certs don't have any fixed IP SANs
+           insecure_skip_verify: true
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

          relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
-         - target_label: __address__
-           replacement: kubernetes.default.svc:443
-         - source_labels: [__meta_kubernetes_node_name]
-           regex: (.+)
-           target_label: __metrics_path__
-           replacement: /api/v1/nodes/${1}/proxy/metrics

        # Scrape config for Kubelet cAdvisor. Explore metrics from a node by
-       # scraping kubelet (127.0.0.1:10255/metrics/cadvisor).
-       #
-       # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
-       # (those whose names begin with 'container_') have been removed from the
-       # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
-       # retrieve those metrics.
-       #
-       # Rather than connecting directly to the node, the scrape is proxied though the
-       # Kubernetes apiserver. This means it will work if Prometheus is running out of
-       # cluster, or can't connect to nodes for some other reason (e.g. because of
-       # firewalling).
+       # scraping kubelet (127.0.0.1:10250/metrics/cadvisor).
        - job_name: 'kubernetes-cadvisor'
          kubernetes_sd_configs:
          - role: node

          scheme: https
+         metrics_path: /metrics/cadvisor
          tls_config:
+           # Kubelet certs don't have any fixed IP SANs
+           insecure_skip_verify: true
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

          relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
-         - target_label: __address__
-           replacement: kubernetes.default.svc:443
-         - source_labels: [__meta_kubernetes_node_name]
-           regex: (.+)
-           target_label: __metrics_path__
-           replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

-       # Scrap etcd metrics from controllers
+       # Scrap etcd metrics from controllers via listen-metrics-urls
        - job_name: 'etcd'
          kubernetes_sd_configs:
          - role: node
@@ -17,29 +17,41 @@ spec:
     spec:
       serviceAccountName: prometheus
       containers:
       - name: prometheus
-        image: quay.io/prometheus/prometheus:v2.2.1
+        image: quay.io/prometheus/prometheus:v2.3.2
        args:
-        - '--config.file=/etc/prometheus/prometheus.yaml'
+        - --web.listen-address=0.0.0.0:9090
+        - --config.file=/etc/prometheus/prometheus.yaml
+        - --storage.tsdb.path=/var/lib/prometheus
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: rules
          mountPath: /etc/prometheus/rules
        - name: data
          mountPath: /var/lib/prometheus
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
+        readinessProbe:
+          httpGet:
+            path: /-/ready
+            port: 9090
+          initialDelaySeconds: 10
+          timeoutSeconds: 10
+        livenessProbe:
+          httpGet:
+            path: /-/healthy
+            port: 9090
+          initialDelaySeconds: 10
+          timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config
        configMap:
          name: prometheus-config
      - name: rules
        configMap:
          name: prometheus-rules
      - name: data
        emptyDir: {}
@@ -6,7 +6,7 @@ rules:
 - apiGroups: [""]
   resources:
   - nodes
-  - nodes/proxy
+  - nodes/metrics
   - services
   - endpoints
   - pods
@@ -496,6 +496,13 @@ data:
        annotations:
          description: device {{$labels.device}} on node {{$labels.instance}} is running
            full within the next 2 hours (mounted at {{$labels.mountpoint}})
+      - alert: InactiveRAIDDisk
+        expr: node_md_disks - node_md_disks_active > 0
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          description: '{{$value}} RAID disk(s) on node {{$labels.instance}} are inactive'
  prometheus.rules.yaml: |
    groups:
    - name: prometheus.rules
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@@ -19,5 +19,5 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Docs

-Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
+Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/cl/aws/).
@@ -1,3 +1,13 @@
+locals {
+  # Pick a CoreOS Container Linux derivative
+  # coreos-stable -> Container Linux AMI
+  # flatcar-stable -> Flatcar Linux AMI
+  ami_id = "${local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id}"
+
+  flavor  = "${element(split("-", var.os_image), 0)}"
+  channel = "${element(split("-", var.os_image), 1)}"
+}
+
 data "aws_ami" "coreos" {
   most_recent = true
   owners      = ["595879546273"]
@@ -14,6 +24,26 @@ data "aws_ami" "coreos" {

   filter {
     name   = "name"
-    values = ["CoreOS-${var.os_channel}-*"]
+    values = ["CoreOS-${local.channel}-*"]
   }
 }
+
+data "aws_ami" "flatcar" {
+  most_recent = true
+  owners      = ["075585003325"]
+
+  filter {
+    name   = "architecture"
+    values = ["x86_64"]
+  }
+
+  filter {
+    name   = "virtualization-type"
+    values = ["hvm"]
+  }
+
+  filter {
+    name   = "name"
+    values = ["Flatcar-${local.channel}-*"]
+  }
+}
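The `locals` block above derives the AMI owner and name pattern from a single `os_image` variable such as `coreos-stable` or `flatcar-beta`, using `split` and `element`. As a rough illustration only (not the module's code), the same selection logic can be sketched in Python; the owner IDs are the ones the Terraform data sources above query:

```python
def pick_ami(os_image: str) -> tuple[str, str]:
    """Mirror the Terraform locals: flavor = element(split("-", os_image), 0),
    channel = element(split("-", os_image), 1)."""
    flavor, channel = os_image.split("-", 1)
    # flavor "flatcar" selects the Flatcar AMI owner/name pattern; anything
    # else falls back to the CoreOS Container Linux AMI query.
    owner = "075585003325" if flavor == "flatcar" else "595879546273"
    prefix = "Flatcar" if flavor == "flatcar" else "CoreOS"
    return owner, f"{prefix}-{channel}-*"

print(pick_ami("coreos-stable"))   # ('595879546273', 'CoreOS-stable-*')
print(pick_ami("flatcar-beta"))    # ('075585003325', 'Flatcar-beta-*')
```

Note that Terraform's ternary in `ami_id` only branches on the flavor; the channel feeds the `name` filter of whichever `aws_ami` data source is chosen.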
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -7,7 +7,7 @@ systemd:
    - name: 40-etcd-cluster.conf
      contents: |
        [Service]
-        Environment="ETCD_IMAGE_TAG=v3.3.4"
+        Environment="ETCD_IMAGE_TAG=v3.3.9"
        Environment="ETCD_NAME=${etcd_name}"
        Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
        Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
|||||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||||
ExecStart=/usr/lib/coreos/kubelet-wrapper \
|
ExecStart=/usr/lib/coreos/kubelet-wrapper \
|
||||||
--allow-privileged \
|
|
||||||
--anonymous-auth=false \
|
--anonymous-auth=false \
|
||||||
|
--authentication-token-webhook \
|
||||||
|
--authorization-mode=Webhook \
|
||||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||||
--cluster_dns=${k8s_dns_service_ip} \
|
--cluster_dns=${k8s_dns_service_ip} \
|
||||||
--cluster_domain=${cluster_domain_suffix} \
|
--cluster_domain=${cluster_domain_suffix} \
|
||||||
@ -121,7 +122,7 @@ storage:
|
|||||||
contents:
|
contents:
|
||||||
inline: |
|
inline: |
|
||||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||||
KUBELET_IMAGE_TAG=v1.10.2
|
KUBELET_IMAGE_TAG=v1.11.2
|
||||||
- path: /etc/sysctl.d/max-user-watches.conf
|
- path: /etc/sysctl.d/max-user-watches.conf
|
||||||
filesystem: root
|
filesystem: root
|
||||||
contents:
|
contents:
|
||||||
@ -142,7 +143,7 @@ storage:
|
|||||||
# Move experimental manifests
|
# Move experimental manifests
|
||||||
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
|
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
|
||||||
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
|
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
|
||||||
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
|
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
|
||||||
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
|
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
|
||||||
exec /usr/bin/rkt run \
|
exec /usr/bin/rkt run \
|
||||||
--trust-keys-from-https \
|
--trust-keys-from-https \
|
||||||
|
@@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {

   instance_type = "${var.controller_type}"

-  ami       = "${data.aws_ami.coreos.image_id}"
+  ami       = "${local.ami_id}"
   user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"

   # storage
@@ -54,7 +54,7 @@ data "template_file" "controller_config" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

    kubeconfig         = "${indent(10, module.bootkube.kubeconfig)}"
    ssh_authorized_key = "${var.ssh_authorized_key}"
@@ -63,14 +63,14 @@ data "template_file" "controller_config" {
   }
 }

-# Horrible hack to generate a Terraform list of a desired length without dependencies.
-# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
-resource null_resource "repeat" {
-  count = "${var.controller_count}"
-
-  triggers {
-    name   = "etcd${count.index}"
-    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
-  }
-}
+data "template_file" "etcds" {
+  count    = "${var.controller_count}"
+  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
+
+  vars {
+    index        = "${count.index}"
+    cluster_name = "${var.cluster_name}"
+    dns_zone     = "${var.dns_zone}"
+  }
+}
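The `template_file.etcds` list above renders one `etcdN=https://...:2380` entry per controller, which `join(",")` then collapses into the etcd initial-cluster string. A minimal Python sketch of the rendered result (cluster name and DNS zone below are example inputs, not values from the module):

```python
def etcd_initial_cluster(cluster_name: str, dns_zone: str, controller_count: int) -> str:
    """Sketch of what data.template_file.etcds.*.rendered joins into."""
    rendered = [
        # One template instance per controller: etcd${index}=https://${cluster_name}-etcd${index}.${dns_zone}:2380
        f"etcd{i}=https://{cluster_name}-etcd{i}.{dns_zone}:2380"
        for i in range(controller_count)
    ]
    return ",".join(rendered)

print(etcd_initial_cluster("cluster", "example.com", 2))
# etcd0=https://cluster-etcd0.example.com:2380,etcd1=https://cluster-etcd1.example.com:2380
```

This replaces the earlier `null_resource "repeat"` trick, which abused resource triggers just to materialize a list of a desired length.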
@@ -7,15 +7,15 @@ resource "aws_route53_record" "apiserver" {

   # AWS recommends their special "alias" records for ELBs
   alias {
-    name                   = "${aws_lb.apiserver.dns_name}"
-    zone_id                = "${aws_lb.apiserver.zone_id}"
+    name                   = "${aws_lb.nlb.dns_name}"
+    zone_id                = "${aws_lb.nlb.zone_id}"
     evaluate_target_health = true
   }
 }

-# Network Load Balancer for apiservers
-resource "aws_lb" "apiserver" {
-  name               = "${var.cluster_name}-apiserver"
+# Network Load Balancer for apiservers and ingress
+resource "aws_lb" "nlb" {
+  name               = "${var.cluster_name}-nlb"
   load_balancer_type = "network"
   internal           = false

@@ -24,11 +24,11 @@ resource "aws_lb" "apiserver" {
   enable_cross_zone_load_balancing = true
 }

-# Forward TCP traffic to controllers
+# Forward TCP apiserver traffic to controllers
 resource "aws_lb_listener" "apiserver-https" {
-  load_balancer_arn = "${aws_lb.apiserver.arn}"
+  load_balancer_arn = "${aws_lb.nlb.arn}"
   protocol          = "TCP"
-  port              = "443"
+  port              = "6443"

   default_action {
     type = "forward"
@@ -36,6 +36,30 @@ resource "aws_lb_listener" "apiserver-https" {
   }
 }

+# Forward HTTP ingress traffic to workers
+resource "aws_lb_listener" "ingress-http" {
+  load_balancer_arn = "${aws_lb.nlb.arn}"
+  protocol          = "TCP"
+  port              = 80
+
+  default_action {
+    type             = "forward"
+    target_group_arn = "${module.workers.target_group_http}"
+  }
+}
+
+# Forward HTTPS ingress traffic to workers
+resource "aws_lb_listener" "ingress-https" {
+  load_balancer_arn = "${aws_lb.nlb.arn}"
+  protocol          = "TCP"
+  port              = 443
+
+  default_action {
+    type             = "forward"
+    target_group_arn = "${module.workers.target_group_https}"
+  }
+}
+
 # Target group of controllers
 resource "aws_lb_target_group" "controllers" {
   name = "${var.cluster_name}-controllers"
@@ -43,12 +67,12 @@ resource "aws_lb_target_group" "controllers" {
   target_type = "instance"

   protocol = "TCP"
-  port     = 443
+  port     = 6443

   # TCP health check for apiserver
   health_check {
     protocol = "TCP"
-    port     = 443
+    port     = 6443

     # NLBs required to use same healthy and unhealthy thresholds
     healthy_threshold = 3
@@ -65,5 +89,5 @@ resource "aws_lb_target_group_attachment" "controllers" {

   target_group_arn = "${aws_lb_target_group.controllers.arn}"
   target_id        = "${element(aws_instance.controllers.*.id, count.index)}"
-  port             = 443
+  port             = 6443
 }
@@ -1,5 +1,7 @@
+# Outputs for Kubernetes Ingress
+
 output "ingress_dns_name" {
-  value       = "${module.workers.ingress_dns_name}"
+  value       = "${aws_lb.nlb.dns_name}"
   description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
 }

@@ -23,3 +25,15 @@ output "worker_security_groups" {
 output "kubeconfig" {
   value = "${module.bootkube.kubeconfig}"
 }
+
+# Outputs for custom load balancing
+
+output "worker_target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value       = "${module.workers.target_group_http}"
+}
+
+output "worker_target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value       = "${module.workers.target_group_https}"
+}
@@ -1,11 +1,11 @@
 # Terraform version and plugin versions

 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }

 provider "aws" {
-  version = "~> 1.11"
+  version = "~> 1.13"
 }

 provider "local" {
@@ -36,8 +36,8 @@ resource "aws_security_group_rule" "controller-apiserver" {

   type        = "ingress"
   protocol    = "tcp"
-  from_port   = 443
-  to_port     = 443
+  from_port   = 6443
+  to_port     = 6443
   cidr_blocks = ["0.0.0.0/0"]
 }

@@ -91,6 +91,16 @@ resource "aws_security_group_rule" "controller-node-exporter" {
   source_security_group_id = "${aws_security_group.worker.id}"
 }

+resource "aws_security_group_rule" "controller-kubelet" {
+  security_group_id = "${aws_security_group.controller.id}"
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10250
+  to_port                  = 10250
+  source_security_group_id = "${aws_security_group.worker.id}"
+}
+
 resource "aws_security_group_rule" "controller-kubelet-self" {
   security_group_id = "${aws_security_group.controller.id}"
@@ -41,10 +41,10 @@ variable "worker_type" {
   description = "EC2 instance type for workers"
 }

-variable "os_channel" {
+variable "os_image" {
   type        = "string"
-  default     = "stable"
-  description = "Container Linux AMI channel (stable, beta, alpha)"
+  default     = "coreos-stable"
+  description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }

 variable "disk_size" {
@@ -59,6 +59,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+variable "worker_price" {
+  type        = "string"
+  default     = ""
+  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
+}
+
 variable "controller_clc_snippets" {
   type        = "list"
   description = "Controller Container Linux Config snippets"
@@ -110,7 +116,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

   type = "string"
@@ -118,7 +124,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type        = "string"
   default     = "cluster.local"
 }
@@ -8,8 +8,9 @@ module "workers" {
   security_groups = ["${aws_security_group.worker.id}"]
   count           = "${var.worker_count}"
   instance_type   = "${var.worker_type}"
-  os_channel      = "${var.os_channel}"
+  os_image        = "${var.os_image}"
   disk_size       = "${var.disk_size}"
+  spot_price      = "${var.worker_price}"

   # configuration
   kubeconfig = "${module.bootkube.kubeconfig}"
@ -1,3 +1,13 @@
|
|||||||
|
locals {
|
||||||
|
# Pick a CoreOS Container Linux derivative
|
||||||
|
# coreos-stable -> Container Linux AMI
|
||||||
|
# flatcar-stable -> Flatcar Linux AMI
|
||||||
|
ami_id = "${local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id}"
|
||||||
|
|
||||||
|
flavor = "${element(split("-", var.os_image), 0)}"
|
||||||
|
channel = "${element(split("-", var.os_image), 1)}"
|
||||||
|
}
|
||||||
|
|
||||||
data "aws_ami" "coreos" {
|
data "aws_ami" "coreos" {
|
||||||
most_recent = true
|
most_recent = true
|
||||||
owners = ["595879546273"]
|
owners = ["595879546273"]
|
||||||
@ -14,6 +24,26 @@ data "aws_ami" "coreos" {
|
|||||||
|
|
||||||
filter {
|
filter {
|
||||||
name = "name"
|
name = "name"
|
||||||
values = ["CoreOS-${var.os_channel}-*"]
|
values = ["CoreOS-${local.channel}-*"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
data "aws_ami" "flatcar" {
|
||||||
|
most_recent = true
|
||||||
|
owners = ["075585003325"]
|
||||||
|
|
||||||
|
filter {
|
||||||
|
name = "architecture"
|
||||||
|
values = ["x86_64"]
|
||||||
|
}
|
||||||
|
|
||||||
|
filter {
|
||||||
|
name = "virtualization-type"
|
||||||
|
values = ["hvm"]
|
||||||
|
}
|
||||||
|
|
||||||
|
filter {
|
||||||
|
name = "name"
|
||||||
|
values = ["Flatcar-${local.channel}-*"]
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -47,8 +47,9 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
+          --authentication-token-webhook \
+          --authorization-mode=Webhook \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${k8s_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
@@ -91,7 +92,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.2
+          KUBELET_IMAGE_TAG=v1.11.2
   - path: /etc/sysctl.d/max-user-watches.conf
     filesystem: root
     contents:
@@ -109,7 +110,7 @@ storage:
         --volume config,kind=host,source=/etc/kubernetes \
         --mount volume=config,target=/etc/kubernetes \
         --insecure-options=image \
-        docker://k8s.gcr.io/hyperkube:v1.10.2 \
+        docker://k8s.gcr.io/hyperkube:v1.11.2 \
         --net=host \
         --dns=host \
         --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -1,39 +1,4 @@
-# Network Load Balancer for Ingress
-resource "aws_lb" "ingress" {
-  name               = "${var.name}-ingress"
-  load_balancer_type = "network"
-  internal           = false
-
-  subnets = ["${var.subnet_ids}"]
-
-  enable_cross_zone_load_balancing = true
-}
-
-# Forward HTTP traffic to workers
-resource "aws_lb_listener" "ingress-http" {
-  load_balancer_arn = "${aws_lb.ingress.arn}"
-  protocol          = "TCP"
-  port              = 80
-
-  default_action {
-    type             = "forward"
-    target_group_arn = "${aws_lb_target_group.workers-http.arn}"
-  }
-}
-
-# Forward HTTPS traffic to workers
-resource "aws_lb_listener" "ingress-https" {
-  load_balancer_arn = "${aws_lb.ingress.arn}"
-  protocol          = "TCP"
-  port              = 443
-
-  default_action {
-    type             = "forward"
-    target_group_arn = "${aws_lb_target_group.workers-https.arn}"
-  }
-}
-
-# Network Load Balancer target groups of instances
+# Target groups of instances for use with load balancers
 
 resource "aws_lb_target_group" "workers-http" {
   name = "${var.name}-workers-http"
@@ -1,4 +1,9 @@
-output "ingress_dns_name" {
-  value       = "${aws_lb.ingress.dns_name}"
-  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
+output "target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value       = "${aws_lb_target_group.workers-http.arn}"
+}
+
+output "target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value       = "${aws_lb_target_group.workers-https.arn}"
 }
@@ -34,10 +34,10 @@ variable "instance_type" {
   description = "EC2 instance type"
 }
 
-variable "os_channel" {
+variable "os_image" {
   type        = "string"
-  default     = "stable"
-  description = "Container Linux AMI channel (stable, beta, alpha)"
+  default     = "coreos-stable"
+  description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }
 
 variable "disk_size" {
@@ -52,6 +52,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }
 
+variable "spot_price" {
+  type        = "string"
+  default     = ""
+  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
+}
+
 variable "clc_snippets" {
   type        = "list"
   description = "Container Linux Config snippets"
@@ -73,7 +79,7 @@ variable "ssh_authorized_key" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -81,7 +87,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type        = "string"
   default     = "cluster.local"
 }
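The new `spot_price` input lets a worker pool bid for EC2 spot capacity instead of on-demand instances. A minimal sketch of a worker pool module call using it; the module path, ref, pool name, and price here are illustrative assumptions, not taken from this diff:

```hcl
# Hypothetical worker pool definition; only os_image and spot_price
# correspond directly to the inputs changed above.
module "worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.2"

  name       = "pool-1"
  os_image   = "flatcar-stable" # or coreos-stable, coreos-beta, coreos-alpha, ...
  spot_price = "0.03"           # USD bid; "" (the default) keeps on-demand instances

  # plus the usual cluster wiring inputs (kubeconfig, ssh_authorized_key,
  # subnets, security groups, ...) omitted here for brevity
}
```

Note the linked provider issue: an existing pool cannot be switched in-place between spot and on-demand; recreate the pool instead.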
@@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
     create_before_destroy = true
   }
 
+  # Waiting for instance creation delays adding the ASG to state. If instances
+  # can't be created (e.g. spot price too low), the ASG will be orphaned.
+  # Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
+  # used. Disable wait to avoid issues and align with other clouds.
+  wait_for_capacity_timeout = "0"
+
   tags = [{
     key   = "Name"
     value = "${var.name}-worker"
@@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {
 
 # Worker template
 resource "aws_launch_configuration" "worker" {
-  image_id      = "${data.aws_ami.coreos.image_id}"
+  image_id      = "${local.ami_id}"
   instance_type = "${var.instance_type}"
+  spot_price    = "${var.spot_price}"
+  enable_monitoring = false
 
   user_data = "${data.ct_config.worker_ign.rendered}"
 
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@@ -19,5 +19,5 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Docs
 
-Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
+Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/cl/aws/).
 
@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {
 
   filter {
     name   = "name"
-    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
+    values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
   }
 }
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -51,8 +51,9 @@ write_files:
       RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
   content: |
-    ARGS="--allow-privileged \
-    --anonymous-auth=false \
+    ARGS="--anonymous-auth=false \
+    --authentication-token-webhook \
+    --authorization-mode=Webhook \
     --client-ca-file=/etc/kubernetes/ca.crt \
     --cluster_dns=${k8s_dns_service_ip} \
    --cluster_domain=${cluster_domain_suffix} \
@@ -91,9 +92,9 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, cloud-metadata.service]
   - [systemctl, start, --no-block, kubelet.service]
@@ -54,7 +54,7 @@ data "template_file" "controller-cloudinit" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
 
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
 
     kubeconfig         = "${indent(6, module.bootkube.kubeconfig)}"
     ssh_authorized_key = "${var.ssh_authorized_key}"
@@ -63,13 +63,13 @@ data "template_file" "controller-cloudinit" {
   }
 }
 
-# Horrible hack to generate a Terraform list of a desired length without dependencies.
-# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
-resource null_resource "repeat" {
-  count = "${var.controller_count}"
+data "template_file" "etcds" {
+  count    = "${var.controller_count}"
+  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
 
-  triggers {
-    name   = "etcd${count.index}"
-    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
+  vars {
+    index        = "${count.index}"
+    cluster_name = "${var.cluster_name}"
+    dns_zone     = "${var.dns_zone}"
   }
 }
@@ -7,15 +7,15 @@ resource "aws_route53_record" "apiserver" {
 
   # AWS recommends their special "alias" records for ELBs
   alias {
-    name                   = "${aws_lb.apiserver.dns_name}"
-    zone_id                = "${aws_lb.apiserver.zone_id}"
+    name                   = "${aws_lb.nlb.dns_name}"
+    zone_id                = "${aws_lb.nlb.zone_id}"
     evaluate_target_health = true
   }
 }
 
-# Network Load Balancer for apiservers
-resource "aws_lb" "apiserver" {
-  name               = "${var.cluster_name}-apiserver"
+# Network Load Balancer for apiservers and ingress
+resource "aws_lb" "nlb" {
+  name               = "${var.cluster_name}-nlb"
   load_balancer_type = "network"
   internal           = false
 
@@ -24,11 +24,11 @@ resource "aws_lb" "apiserver" {
   enable_cross_zone_load_balancing = true
 }
 
-# Forward TCP traffic to controllers
+# Forward TCP apiserver traffic to controllers
 resource "aws_lb_listener" "apiserver-https" {
-  load_balancer_arn = "${aws_lb.apiserver.arn}"
+  load_balancer_arn = "${aws_lb.nlb.arn}"
   protocol          = "TCP"
-  port              = "443"
+  port              = "6443"
 
   default_action {
     type = "forward"
@@ -36,6 +36,30 @@ resource "aws_lb_listener" "apiserver-https" {
   }
 }
 
+# Forward HTTP ingress traffic to workers
+resource "aws_lb_listener" "ingress-http" {
+  load_balancer_arn = "${aws_lb.nlb.arn}"
+  protocol          = "TCP"
+  port              = 80
+
+  default_action {
+    type             = "forward"
+    target_group_arn = "${module.workers.target_group_http}"
+  }
+}
+
+# Forward HTTPS ingress traffic to workers
+resource "aws_lb_listener" "ingress-https" {
+  load_balancer_arn = "${aws_lb.nlb.arn}"
+  protocol          = "TCP"
+  port              = 443
+
+  default_action {
+    type             = "forward"
+    target_group_arn = "${module.workers.target_group_https}"
+  }
+}
+
 # Target group of controllers
 resource "aws_lb_target_group" "controllers" {
   name = "${var.cluster_name}-controllers"
@@ -43,12 +67,12 @@ resource "aws_lb_target_group" "controllers" {
   target_type = "instance"
 
   protocol = "TCP"
-  port     = 443
+  port     = 6443
 
   # TCP health check for apiserver
   health_check {
     protocol = "TCP"
-    port     = 443
+    port     = 6443
 
     # NLBs required to use same healthy and unhealthy thresholds
     healthy_threshold = 3
@@ -65,5 +89,5 @@ resource "aws_lb_target_group_attachment" "controllers" {
 
   target_group_arn = "${aws_lb_target_group.controllers.arn}"
   target_id        = "${element(aws_instance.controllers.*.id, count.index)}"
-  port             = 443
+  port             = 6443
 }
@@ -1,5 +1,7 @@
+# Outputs for Kubernetes Ingress
+
 output "ingress_dns_name" {
-  value       = "${module.workers.ingress_dns_name}"
+  value       = "${aws_lb.nlb.dns_name}"
   description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
 }
 
@@ -23,3 +25,15 @@ output "worker_security_groups" {
 output "kubeconfig" {
   value = "${module.bootkube.kubeconfig}"
 }
+
+# Outputs for custom load balancing
+
+output "worker_target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value       = "${module.workers.target_group_http}"
+}
+
+output "worker_target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value       = "${module.workers.target_group_https}"
+}
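The worker target group outputs exist so users can forward traffic from their own load balancers to the cluster's workers. A hedged sketch of attaching a custom NLB listener to the exported HTTP target group; the `aws_lb.custom` resource, cluster module name, and port are illustrative assumptions:

```hcl
# Hypothetical extra listener on a user-managed NLB; only the
# worker_target_group_http output comes from the cluster module.
resource "aws_lb_listener" "custom-app" {
  load_balancer_arn = "${aws_lb.custom.arn}"
  protocol          = "TCP"
  port              = 8080

  default_action {
    type             = "forward"
    target_group_arn = "${module.tempest.worker_target_group_http}"
  }
}
```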
@@ -1,11 +1,11 @@
 # Terraform version and plugin versions
 
 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }
 
 provider "aws" {
-  version = "~> 1.11"
+  version = "~> 1.13"
 }
 
 provider "local" {
@@ -36,8 +36,8 @@ resource "aws_security_group_rule" "controller-apiserver" {
 
   type        = "ingress"
   protocol    = "tcp"
-  from_port   = 443
-  to_port     = 443
+  from_port   = 6443
+  to_port     = 6443
   cidr_blocks = ["0.0.0.0/0"]
 }
 
@@ -91,6 +91,16 @@ resource "aws_security_group_rule" "controller-node-exporter" {
   source_security_group_id = "${aws_security_group.worker.id}"
 }
 
+resource "aws_security_group_rule" "controller-kubelet" {
+  security_group_id = "${aws_security_group.controller.id}"
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10250
+  to_port                  = 10250
+  source_security_group_id = "${aws_security_group.worker.id}"
+}
+
 resource "aws_security_group_rule" "controller-kubelet-self" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -53,6 +53,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }
 
+variable "worker_price" {
+  type        = "string"
+  default     = ""
+  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
+}
+
 # configuration
 
 variable "ssh_authorized_key" {
@@ -92,7 +98,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -100,7 +106,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type        = "string"
   default     = "cluster.local"
 }
@@ -9,6 +9,7 @@ module "workers" {
   count         = "${var.worker_count}"
   instance_type = "${var.worker_type}"
   disk_size     = "${var.disk_size}"
+  spot_price    = "${var.worker_price}"
 
   # configuration
   kubeconfig = "${module.bootkube.kubeconfig}"
@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {
 
   filter {
     name   = "name"
-    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
+    values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
   }
 }
@@ -30,8 +30,9 @@ write_files:
       RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
   content: |
-    ARGS="--allow-privileged \
-    --anonymous-auth=false \
+    ARGS="--anonymous-auth=false \
+    --authentication-token-webhook \
+    --authorization-mode=Webhook \
     --client-ca-file=/etc/kubernetes/ca.crt \
     --cluster_dns=${k8s_dns_service_ip} \
     --cluster_domain=${cluster_domain_suffix} \
@@ -68,7 +69,7 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [systemctl, enable, cloud-metadata.service]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
   - [systemctl, start, --no-block, kubelet.service]
 users:
   - default
@@ -1,39 +1,4 @@
-# Network Load Balancer for Ingress
-resource "aws_lb" "ingress" {
-  name               = "${var.name}-ingress"
-  load_balancer_type = "network"
-  internal           = false
-
-  subnets = ["${var.subnet_ids}"]
-
-  enable_cross_zone_load_balancing = true
-}
-
-# Forward HTTP traffic to workers
-resource "aws_lb_listener" "ingress-http" {
-  load_balancer_arn = "${aws_lb.ingress.arn}"
-  protocol          = "TCP"
-  port              = 80
-
-  default_action {
-    type             = "forward"
-    target_group_arn = "${aws_lb_target_group.workers-http.arn}"
-  }
-}
-
-# Forward HTTPS traffic to workers
-resource "aws_lb_listener" "ingress-https" {
-  load_balancer_arn = "${aws_lb.ingress.arn}"
-  protocol          = "TCP"
-  port              = 443
-
-  default_action {
-    type             = "forward"
-    target_group_arn = "${aws_lb_target_group.workers-https.arn}"
-  }
-}
-
-# Network Load Balancer target groups of instances
+# Target groups of instances for use with load balancers
 
 resource "aws_lb_target_group" "workers-http" {
   name = "${var.name}-workers-http"
@@ -1,4 +1,9 @@
-output "ingress_dns_name" {
-  value       = "${aws_lb.ingress.dns_name}"
-  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
+output "target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value       = "${aws_lb_target_group.workers-http.arn}"
+}
+
+output "target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value       = "${aws_lb_target_group.workers-https.arn}"
 }
@@ -46,6 +46,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }
 
+variable "spot_price" {
+  type        = "string"
+  default     = ""
+  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
+}
+
 # configuration
 
 variable "kubeconfig" {
@@ -61,7 +67,7 @@ variable "ssh_authorized_key" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -69,7 +75,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type        = "string"
   default     = "cluster.local"
 }
@@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
     create_before_destroy = true
   }
 
+  # Waiting for instance creation delays adding the ASG to state. If instances
+  # can't be created (e.g. spot price too low), the ASG will be orphaned.
+  # Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
+  # used. Disable wait to avoid issues and align with other clouds.
+  wait_for_capacity_timeout = "0"
+
   tags = [{
     key   = "Name"
     value = "${var.name}-worker"
@@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {
 
 # Worker template
 resource "aws_launch_configuration" "worker" {
   image_id      = "${data.aws_ami.fedora.image_id}"
   instance_type = "${var.instance_type}"
+  spot_price    = "${var.spot_price}"
+  enable_monitoring = false
 
   user_data = "${data.template_file.worker-cloudinit.rendered}"
 
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Docs
 
-Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
+Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).
 
|
@@ -1,14 +1,15 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${var.k8s_domain_name}"]
   etcd_servers = ["${var.controller_domains}"]
   asset_dir    = "${var.asset_dir}"
   networking   = "${var.networking}"
   network_mtu  = "${var.network_mtu}"
+  network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
   pod_cidr     = "${var.pod_cidr}"
   service_cidr = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
 }

@@ -7,7 +7,7 @@ systemd:
     - name: 40-etcd-cluster.conf
       contents: |
         [Service]
-        Environment="ETCD_IMAGE_TAG=v3.3.4"
+        Environment="ETCD_IMAGE_TAG=v3.3.9"
         Environment="ETCD_NAME=${etcd_name}"
         Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
         Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"

@@ -82,8 +82,9 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
+          --authentication-token-webhook \
+          --authorization-mode=Webhook \
           --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${k8s_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \

@@ -122,7 +123,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.2
+          KUBELET_IMAGE_TAG=v1.11.2
   - path: /etc/hostname
     filesystem: root
     mode: 0644

@@ -149,7 +150,7 @@ storage:
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
           BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
+          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
           BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \

@@ -31,10 +31,10 @@ storage:
       inline: |
         #!/bin/bash -ex
         curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
-        coreos-install \
+        ${os_flavor}-install \
           -d ${install_disk} \
-          -C ${container_linux_channel} \
-          -V ${container_linux_version} \
+          -C ${os_channel} \
+          -V ${os_version} \
           -o "${container_linux_oem}" \
           ${baseurl_flag} \
           -i ignition.json

@@ -55,8 +55,9 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
+          --authentication-token-webhook \
+          --authorization-mode=Webhook \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${k8s_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \

@@ -83,7 +84,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.2
+          KUBELET_IMAGE_TAG=v1.11.2
   - path: /etc/hostname
     filesystem: root
     mode: 0644

@@ -1,9 +1,9 @@
 // Install Container Linux to disk
-resource "matchbox_group" "container-linux-install" {
+resource "matchbox_group" "install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"

-  name    = "${format("container-linux-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
-  profile = "${var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
+  name    = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
+  profile = "${local.flavor == "flatcar" ? element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"

   selector {
     mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"

@@ -1,12 +1,20 @@
+locals {
+  # coreos-stable -> coreos flavor, stable channel
+  # flatcar-stable -> flatcar flavor, stable channel
+  flavor = "${element(split("-", var.os_channel), 0)}"
+
+  channel = "${element(split("-", var.os_channel), 1)}"
+}
+
 // Container Linux Install profile (from release.core-os.net)
 resource "matchbox_profile" "container-linux-install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"
   name  = "${format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"

-  kernel = "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
+  kernel = "http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe.vmlinuz"

   initrd = [
-    "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
+    "http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe_image.cpio.gz",
   ]

   args = [
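
The locals block introduced above derives an OS flavor and a release channel from the combined `os_channel` string. A minimal sketch of that decomposition, using an assumed example value (`flatcar-stable` here is illustrative, not taken from this diff):

```hcl
# Sketch only: how split/element decompose a combined os_channel value.
locals {
  example_os_channel = "flatcar-stable" # assumed example value

  # split("-", "flatcar-stable") -> ["flatcar", "stable"]
  example_flavor  = "${element(split("-", local.example_os_channel), 0)}" # "flatcar"
  example_channel = "${element(split("-", local.example_os_channel), 1)}" # "stable"
}
```

A value like `coreos-stable` keeps the original Container Linux behavior, while a `flatcar-*` value routes machines to the new `flatcar-install` profile.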
@@ -24,15 +32,16 @@ resource "matchbox_profile" "container-linux-install" {
 data "template_file" "container-linux-install-configs" {
   count = "${length(var.controller_names) + length(var.worker_names)}"

-  template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
+  template = "${file("${path.module}/cl/install.yaml.tmpl")}"

   vars {
-    container_linux_channel = "${var.container_linux_channel}"
-    container_linux_version = "${var.container_linux_version}"
+    os_flavor           = "${local.flavor}"
+    os_channel          = "${local.channel}"
+    os_version          = "${var.os_version}"
     ignition_endpoint   = "${format("%s/ignition", var.matchbox_http_endpoint)}"
     install_disk        = "${var.install_disk}"
     container_linux_oem = "${var.container_linux_oem}"
     ssh_authorized_key  = "${var.ssh_authorized_key}"

     # only cached-container-linux profile adds -b baseurl
     baseurl_flag = ""

@@ -40,15 +49,15 @@ data "template_file" "container-linux-install-configs" {
 }

 // Container Linux Install profile (from matchbox /assets cache)
-// Note: Admin must have downloaded container_linux_version into matchbox assets.
+// Note: Admin must have downloaded os_version into matchbox assets.
 resource "matchbox_profile" "cached-container-linux-install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"
   name  = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"

-  kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
+  kernel = "/assets/coreos/${var.os_version}/coreos_production_pxe.vmlinuz"

   initrd = [
-    "/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
+    "/assets/coreos/${var.os_version}/coreos_production_pxe_image.cpio.gz",
   ]

   args = [

@@ -66,28 +75,61 @@ resource "matchbox_profile" "cached-container-linux-install" {
 data "template_file" "cached-container-linux-install-configs" {
   count = "${length(var.controller_names) + length(var.worker_names)}"

-  template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
+  template = "${file("${path.module}/cl/install.yaml.tmpl")}"

   vars {
-    container_linux_channel = "${var.container_linux_channel}"
-    container_linux_version = "${var.container_linux_version}"
+    os_flavor           = "${local.flavor}"
+    os_channel          = "${local.channel}"
+    os_version          = "${var.os_version}"
     ignition_endpoint   = "${format("%s/ignition", var.matchbox_http_endpoint)}"
     install_disk        = "${var.install_disk}"
     container_linux_oem = "${var.container_linux_oem}"
     ssh_authorized_key  = "${var.ssh_authorized_key}"

     # profile uses -b baseurl to install from matchbox cache
     baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
   }
 }

+// Flatcar Linux install profile (from release.flatcar-linux.net)
+resource "matchbox_profile" "flatcar-install" {
+  count = "${length(var.controller_names) + length(var.worker_names)}"
+  name  = "${format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
+
+  kernel = "http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
+
+  initrd = [
+    "http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
+  ]
+
+  args = [
+    "initrd=flatcar_production_pxe_image.cpio.gz",
+    "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
+    "flatcar.first_boot=yes",
+    "console=tty0",
+    "console=ttyS0",
+    "${var.kernel_args}",
+  ]
+
+  container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
+}
+
 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
   count = "${length(var.controller_names)}"
   name  = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
-  container_linux_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  raw_ignition = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
 }

+data "ct_config" "controller-ignitions" {
+  count        = "${length(var.controller_names)}"
+  content      = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  pretty_print = false
+
+  # Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
+  snippets = ["${local.clc_map[element(var.controller_names, count.index)]}"]
+}
+
 data "template_file" "controller-configs" {
   count = "${length(var.controller_names)}"

@@ -110,7 +152,16 @@ data "template_file" "controller-configs" {
 resource "matchbox_profile" "workers" {
   count = "${length(var.worker_names)}"
   name  = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
-  container_linux_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
+  raw_ignition = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
+}
+
+data "ct_config" "worker-ignitions" {
+  count        = "${length(var.worker_names)}"
+  content      = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
+  pretty_print = false
+
+  # Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
+  snippets = ["${local.clc_map[element(var.worker_names, count.index)]}"]
 }

 data "template_file" "worker-configs" {

@@ -128,3 +179,18 @@ data "template_file" "worker-configs" {
     networkd_content = "${length(var.worker_networkds) == 0 ? "" : element(concat(var.worker_networkds, list("")), count.index)}"
   }
 }
+
+locals {
+  # Hack to workaround https://github.com/hashicorp/terraform/issues/17251
+  # Default Container Linux config snippets map every node names to list("\n") so
+  # all lookups succeed
+  clc_defaults = "${zipmap(concat(var.controller_names, var.worker_names), chunklist(data.template_file.clc-default-snippets.*.rendered, 1))}"
+
+  # Union of the default and user specific snippets, later overrides prior.
+  clc_map = "${merge(local.clc_defaults, var.clc_snippets)}"
+}
+
+// Horrible hack to generate a Terraform list of node count length
+data "template_file" "clc-default-snippets" {
+  count    = "${length(var.controller_names) + length(var.worker_names)}"
+  template = "\n"
+}
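
The `clc_defaults`/`clc_map` locals above guarantee that a snippet entry exists for every node, so the direct map lookups in `ct_config` never fail. A hedged sketch of how a caller might populate `clc_snippets` (the module name, node names, and file paths are illustrative assumptions, not from this diff):

```hcl
# Sketch only: per-node Container Linux Config snippets; names are examples.
module "bare-metal-cluster" {
  # ...required cluster arguments elided...

  clc_snippets = {
    # keys must match entries in controller_names / worker_names
    "node2" = ["${file("./snippets/raid.yaml")}"]
    "node3" = ["${file("./snippets/raid.yaml")}"]
  }
}
```

Nodes absent from the map fall back to the default `"\n"` snippet merged in via `clc_map`.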
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }

 provider "local" {

@@ -2,6 +2,14 @@
 resource "null_resource" "copy-controller-secrets" {
   count = "${length(var.controller_names)}"

+  # Without depends_on, remote-exec could start and wait for machines before
+  # matchbox groups are written, causing a deadlock.
+  depends_on = [
+    "matchbox_group.install",
+    "matchbox_group.controller",
+    "matchbox_group.worker",
+  ]
+
   connection {
     type = "ssh"
     host = "${element(var.controller_domains, count.index)}"

@@ -70,6 +78,14 @@ resource "null_resource" "copy-controller-secrets" {
 resource "null_resource" "copy-worker-secrets" {
   count = "${length(var.worker_names)}"

+  # Without depends_on, remote-exec could start and wait for machines before
+  # matchbox groups are written, causing a deadlock.
+  depends_on = [
+    "matchbox_group.install",
+    "matchbox_group.controller",
+    "matchbox_group.worker",
+  ]
+
   connection {
     type = "ssh"
     host = "${element(var.worker_domains, count.index)}"

@@ -10,14 +10,14 @@ variable "matchbox_http_endpoint" {
   description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
 }

-variable "container_linux_channel" {
+variable "os_channel" {
   type        = "string"
-  description = "Container Linux channel corresponding to the container_linux_version"
+  description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }

-variable "container_linux_version" {
+variable "os_version" {
   type        = "string"
-  description = "Container Linux version of the kernel/initrd to PXE or the image to install"
+  description = "Version for a Container Linux derivative to PXE and install (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }

 # machines

@@ -25,26 +25,38 @@ variable "container_linux_version" {

 variable "controller_names" {
   type = "list"
+  description = "Ordered list of controller names (e.g. [node1])"
 }

 variable "controller_macs" {
   type = "list"
+  description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
 }

 variable "controller_domains" {
   type = "list"
+  description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
 }

 variable "worker_names" {
   type = "list"
+  description = "Ordered list of worker names (e.g. [node2, node3])"
 }

 variable "worker_macs" {
   type = "list"
+  description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
 }

 variable "worker_domains" {
   type = "list"
+  description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
+}
+
+variable "clc_snippets" {
+  type        = "map"
+  description = "Map from machine names to lists of Container Linux Config snippets"
+  default     = {}
 }

 # configuration

@@ -76,6 +88,12 @@ variable "network_mtu" {
   default = "1480"
 }

+variable "network_ip_autodetection_method" {
+  description = "Method to autodetect the host IPv4 address (applies to calico only)"
+  type        = "string"
+  default     = "first-found"
+}
+
 variable "pod_cidr" {
   description = "CIDR IPv4 range to assign Kubernetes pods"
   type        = "string"

@@ -85,7 +103,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

   type = "string"

@@ -95,7 +113,7 @@ EOD
 # optional

 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type        = "string"
   default     = "cluster.local"
 }

@@ -103,7 +121,7 @@ variable "cluster_domain_suffix" {
 variable "cached_install" {
   type        = "string"
   default     = "false"
-  description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the container_linux_version into matchbox assets."
+  description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
 }

 variable "install_disk" {

@@ -115,7 +133,7 @@ variable "install_disk" {
 variable "container_linux_oem" {
   type        = "string"
   default     = ""
-  description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
+  description = "DEPRECATED: Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
 }

 variable "kernel_args" {
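
Given the renamed `os_channel`/`os_version` variables above, a caller passes the combined flavor-channel string plus a matching release version. A sketch under assumed values (the module path, cluster name, and version numbers are illustrative, not from this diff):

```hcl
# Sketch only: invoking the bare-metal module with the renamed variables.
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"

  # renamed from container_linux_channel / container_linux_version
  os_channel = "coreos-stable"
  os_version = "1800.6.0" # assumed example release

  # ...other required arguments (cluster_name, matchbox_http_endpoint, ...) elided...
}
```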
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

-Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
+Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).

@@ -1,7 +1,7 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${var.k8s_domain_name}"]
   etcd_servers = ["${var.controller_domains}"]

@@ -36,8 +36,9 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
+      --authentication-token-webhook \
+      --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \
       --cluster_dns=${k8s_dns_service_ip} \
       --cluster_domain=${cluster_domain_suffix} \

@@ -82,9 +83,9 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]

@@ -15,8 +15,9 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
+      --authentication-token-webhook \
+      --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \
       --cluster_dns=${k8s_dns_service_ip} \
       --cluster_domain=${cluster_domain_suffix} \

@@ -58,7 +59,7 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:

@@ -1,5 +1,5 @@
 // Install Fedora to disk
-resource "matchbox_group" "fedora-install" {
+resource "matchbox_group" "install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"

   name = "${format("fedora-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
@@ -17,7 +17,7 @@ network --bootproto=dhcp --device=link --activate --onboot=on
 bootloader --timeout=1 --append="ds=nocloud\;seedfrom=/var/cloud-init/"
 services --enabled=cloud-init,cloud-init-local,cloud-config,cloud-final

-ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/27/x86_64/atomic-host --nogpg
+ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/28/x86_64/atomic-host --nogpg

 reboot

@@ -27,7 +27,7 @@ curl --retry 10 "${matchbox_http_endpoint}/generic?mac=${mac}&os=installed" -o /
 echo "instance-id: iid-local01" > /var/cloud-init/meta-data

 rm -f /etc/ostree/remotes.d/fedora-atomic.conf
-ostree remote add fedora-atomic https://kojipkgs.fedoraproject.org/atomic/27 --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary
+ostree remote add fedora-atomic https://dl.fedoraproject.org/atomic/repo/ --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-primary

 # lock root user
 passwd -l root
@@ -1,6 +1,6 @@
 locals {
-  default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/27"
+  default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/28"
   atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
 }

 // Cached Fedora Install profile (from matchbox /assets cache)
@@ -36,14 +36,15 @@ data "template_file" "install-kickstarts" {
   vars {
     matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
     atomic_assets_endpoint = "${local.atomic_assets_endpoint}"
     mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
   }
 }

 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
   count = "${length(var.controller_names)}"
   name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"

   # cloud-init
   generic_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
 }
@@ -65,8 +66,9 @@ data "template_file" "controller-configs" {

 // Kubernetes Worker profiles
 resource "matchbox_profile" "workers" {
   count = "${length(var.worker_names)}"
   name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"

   # cloud-init
   generic_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
 }
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }

 provider "local" {
@@ -2,6 +2,14 @@
 resource "null_resource" "copy-controller-secrets" {
   count = "${length(var.controller_names)}"

+  # Without depends_on, remote-exec could start and wait for machines before
+  # matchbox groups are written, causing a deadlock.
+  depends_on = [
+    "matchbox_group.install",
+    "matchbox_group.controller",
+    "matchbox_group.worker",
+  ]
+
   connection {
     type = "ssh"
     host = "${element(var.controller_domains, count.index)}"
@@ -68,6 +76,14 @@ resource "null_resource" "copy-controller-secrets" {
 resource "null_resource" "copy-worker-secrets" {
   count = "${length(var.worker_names)}"

+  # Without depends_on, remote-exec could start and wait for machines before
+  # matchbox groups are written, causing a deadlock.
+  depends_on = [
+    "matchbox_group.install",
+    "matchbox_group.controller",
+    "matchbox_group.worker",
+  ]
+
   connection {
     type = "ssh"
     host = "${element(var.worker_domains, count.index)}"
@@ -11,12 +11,13 @@ variable "matchbox_http_endpoint" {
 }

 variable "atomic_assets_endpoint" {
   type = "string"
   default = ""

   description = <<EOD
 HTTP endpoint serving the Fedora Atomic Host vmlinuz, initrd, os repo, and ostree repo (.e.g `http://example.com/some/path`).

-Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/27
+Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/28
 EOD
 }

@@ -25,26 +26,32 @@ EOD

 variable "controller_names" {
   type = "list"
+  description = "Ordered list of controller names (e.g. [node1])"
 }

 variable "controller_macs" {
   type = "list"
+  description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
 }

 variable "controller_domains" {
   type = "list"
+  description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
 }

 variable "worker_names" {
   type = "list"
+  description = "Ordered list of worker names (e.g. [node2, node3])"
 }

 variable "worker_macs" {
   type = "list"
+  description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
 }

 variable "worker_domains" {
   type = "list"
+  description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
 }

 # configuration
@@ -85,7 +92,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

   type = "string"
@@ -93,7 +100,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

-Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
+Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.4"
+          Environment="ETCD_IMAGE_TAG=v3.3.9"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -85,8 +85,9 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
+          --authentication-token-webhook \
+          --authorization-mode=Webhook \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${k8s_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
@@ -127,7 +128,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.2
+          KUBELET_IMAGE_TAG=v1.11.2
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -148,7 +149,7 @@ storage:
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
           BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
+          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
           BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
@@ -58,8 +58,9 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
+          --authentication-token-webhook \
+          --authorization-mode=Webhook \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${k8s_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
@@ -97,7 +98,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.2
+          KUBELET_IMAGE_TAG=v1.11.2
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -115,7 +116,7 @@ storage:
             --volume config,kind=host,source=/etc/kubernetes \
             --mount volume=config,target=/etc/kubernetes \
             --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.10.2 \
+            docker://k8s.gcr.io/hyperkube:v1.11.2 \
             --net=host \
             --dns=host \
             --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -69,20 +69,20 @@ data "template_file" "controller_config" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
     k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
     cluster_domain_suffix = "${var.cluster_domain_suffix}"
   }
 }

-# Horrible hack to generate a Terraform list of a desired length without dependencies.
-# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
-resource null_resource "repeat" {
-  count = "${var.controller_count}"
+data "template_file" "etcds" {
+  count = "${var.controller_count}"
+  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

-  triggers {
-    name = "etcd${count.index}"
-    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
+  vars {
+    index = "${count.index}"
+    cluster_name = "${var.cluster_name}"
+    dns_zone = "${var.dns_zone}"
   }
 }
@@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {

   tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]

-  # allow ssh, http/https ingress, and peer-to-peer traffic
+  # allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
   inbound_rule = [
     {
       protocol = "tcp"
@@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
       port_range = "443"
       source_addresses = ["0.0.0.0/0", "::/0"]
     },
+    {
+      protocol = "tcp"
+      port_range = "6443"
+      source_addresses = ["0.0.0.0/0", "::/0"]
+    },
     {
       protocol = "udp"
       port_range = "1-65535"
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }

 provider "digitalocean" {
@@ -80,7 +80,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

   type = "string"
@@ -88,7 +88,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

-Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
+Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -51,8 +51,9 @@ write_files:
      RestartSec=10
  - path: /etc/kubernetes/kubelet.conf
    content: |
-     ARGS="--allow-privileged \
-       --anonymous-auth=false \
+     ARGS="--anonymous-auth=false \
+       --authentication-token-webhook \
+       --authorization-mode=Webhook \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${k8s_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
@@ -88,9 +89,9 @@ bootcmd:
  - [modprobe, ip_vs]
 runcmd:
  - [systemctl, daemon-reload]
- - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+ - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+ - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
  - [systemctl, start, --no-block, etcd.service]
  - [systemctl, enable, cloud-metadata.service]
  - [systemctl, enable, kubelet.path]
@@ -30,8 +30,9 @@ write_files:
      RestartSec=10
  - path: /etc/kubernetes/kubelet.conf
    content: |
-     ARGS="--allow-privileged \
-       --anonymous-auth=false \
+     ARGS="--anonymous-auth=false \
+       --authentication-token-webhook \
+       --authorization-mode=Webhook \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${k8s_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
@@ -65,7 +66,7 @@ bootcmd:
 runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, enable, cloud-metadata.service]
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
  - [systemctl, enable, kubelet.path]
  - [systemctl, start, --no-block, kubelet.path]
 users:
@@ -46,7 +46,7 @@ resource "digitalocean_droplet" "controllers" {

   user_data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"
   ssh_keys = ["${var.ssh_fingerprints}"]

   tags = [
     "${digitalocean_tag.controllers.id}",
   ]
@@ -69,7 +69,7 @@ data "template_file" "controller-cloudinit" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
-    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

     ssh_authorized_key = "${var.ssh_authorized_key}"
     k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
@@ -77,13 +77,13 @@ data "template_file" "controller-cloudinit" {
   }
 }

-# Horrible hack to generate a Terraform list of a desired length without dependencies.
-# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
-resource null_resource "repeat" {
-  count = "${var.controller_count}"
+data "template_file" "etcds" {
+  count = "${var.controller_count}"
+  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

-  triggers {
-    name = "etcd${count.index}"
-    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
+  vars {
+    index = "${count.index}"
+    cluster_name = "${var.cluster_name}"
+    dns_zone = "${var.dns_zone}"
   }
 }
@@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {

   tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]

-  # allow ssh, http/https ingress, and peer-to-peer traffic
+  # allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
   inbound_rule = [
     {
       protocol = "tcp"
@@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
       port_range = "443"
       source_addresses = ["0.0.0.0/0", "::/0"]
     },
+    {
+      protocol = "tcp"
+      port_range = "6443"
+      source_addresses = ["0.0.0.0/0", "::/0"]
+    },
     {
       protocol = "udp"
       port_range = "1-65535"
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
-  required_version = ">= 0.10.4"
+  required_version = ">= 0.11.0"
 }

 provider "digitalocean" {
@@ -43,8 +43,8 @@ variable "worker_type" {

 variable "image" {
   type = "string"
-  default = "fedora-27-x64-atomic"
-  description = "OS image from which to initialize the disk (e.g. fedora-27-x64-atomic)"
+  default = "fedora-28-x64-atomic"
+  description = "OS image from which to initialize the disk (e.g. fedora-28-x64-atomic)"
 }

 # configuration
@@ -73,7 +73,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

   type = "string"
@@ -81,7 +81,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
Some files were not shown because too many files have changed in this diff.
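Several hunks above compute `k8s_dns_service_ip` with Terraform's `cidrhost(var.service_cidr, 10)`, matching the variable descriptions ("the 1st IP for kube_apiserver, the 10th IP for coredns"). A minimal Python sketch of that offset arithmetic, using the standard `ipaddress` module; `10.3.0.0/16` is an illustrative range, not a value taken from this diff:

```python
import ipaddress

def cidrhost(cidr: str, hostnum: int) -> str:
    """Mimic Terraform's cidrhost(): the hostnum-th address within the prefix."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + hostnum)

# Illustration only: example service CIDR, not a default from this repository.
print(cidrhost("10.3.0.0/16", 1))   # apiserver ClusterIP -> 10.3.0.1
print(cidrhost("10.3.0.0/16", 10))  # CoreDNS ClusterIP  -> 10.3.0.10
```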