CHANGES.md

@@ -4,8 +4,149 @@ Notable changes between versions.

## Latest

* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))

## v1.11.0

* Kubernetes [v1.11.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1110)
* Force apiserver to stop listening on `127.0.0.1:8080`
* Replace `kube-dns` with [CoreDNS](https://coredns.io/) ([#261](https://github.com/poseidon/typhoon/pull/261))
  * Edit the `coredns` ConfigMap to [customize](https://coredns.io/plugins/)
  * CoreDNS doesn't use a resizer. For large clusters, scaling may be required.

#### AWS

* Update from Fedora Atomic 27 to 28 ([#258](https://github.com/poseidon/typhoon/pull/258))

#### Bare-Metal

* Update from Fedora Atomic 27 to 28 ([#263](https://github.com/poseidon/typhoon/pull/263))

#### Google Cloud

* Promote Google Cloud to stable
* Update from Fedora Atomic 27 to 28 ([#259](https://github.com/poseidon/typhoon/pull/259))
* Remove `ingress_static_ip` module output. Use `ingress_static_ipv4`.
* Remove `controllers_ipv4_public` module output.

#### Addons

* Update nginx-ingress from 0.15.0 to 0.16.2
* Update Grafana from 5.1.4 to [5.2.1](http://docs.grafana.org/guides/whats-new-in-v5-2/)
* Update heapster from v1.5.2 to v1.5.3

## v1.10.5

* Kubernetes [v1.10.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105)
* Update etcd from v3.3.6 to v3.3.8 ([#243](https://github.com/poseidon/typhoon/pull/243), [#247](https://github.com/poseidon/typhoon/pull/247))

#### AWS

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Combine apiserver and ingress NLBs ([#249](https://github.com/poseidon/typhoon/pull/249))
  * Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer.
  * Ingress addon users may keep using CNAME records to the `ingress_dns_name` module output (few million RPS)
  * Ingress users with heavy traffic (many million RPS) should create a separate NLB(s)
  * Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_dns_name` output
* Disable detailed (paid) monitoring on worker nodes ([#251](https://github.com/poseidon/typhoon/pull/251))
  * Favor Prometheus for cloud-agnostic metrics, aggregation, and alerting
* Add `worker_target_group_http` and `worker_target_group_https` module outputs to allow custom load balancing (see the sketch below)
* Add `target_group_http` and `target_group_https` worker module outputs to allow custom load balancing
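
A hedged sketch of consuming the new outputs for custom load balancing; the extra NLB (`aws_lb.custom`), its listener values, and the cluster module name `tempest` are illustrative and not part of the release:

```tf
# Sketch: forward HTTP from a separately managed NLB to the cluster's worker target group.
resource "aws_lb_listener" "custom-ingress-http" {
  load_balancer_arn = "${aws_lb.custom.arn}"   # an NLB you define yourself
  protocol          = "TCP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = "${module.tempest.worker_target_group_http}"
  }
}
```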

#### Bare-Metal

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
  * Users who exposed kube-apiserver on a WAN via their router/load-balancer will need to adjust its configuration (e.g. DNAT 6443). Most apiservers are on a LAN (internal, VPN-only, etc) so if you didn't specially configure network gear for 443, no change is needed. (possible action required)
* Fix possible deadlock when provisioning clusters larger than 10 nodes ([#244](https://github.com/poseidon/typhoon/pull/244))

#### DigitalOcean

* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
  * Update firewall rules and generated kubeconfigs

#### Google Cloud

* Use global HTTP and TCP proxy load balancing for Kubernetes Ingress ([#252](https://github.com/poseidon/typhoon/pull/252))
  * Switch Ingress from regional network load balancers to global HTTP/TCP Proxy load balancing
  * Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
  * Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_static_ip` output
* Allow using nginx-ingress addon on Fedora Atomic clusters ([#200](https://github.com/poseidon/typhoon/issues/200))
* Add `worker_instance_group` module output to allow custom global load balancing
* Add `instance_group` worker module output to allow custom global load balancing
* Deprecate `ingress_static_ip` module output. Add `ingress_static_ipv4` module output instead (see the DNS sketch below).
* Deprecate `controllers_ipv4_public` module output
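
A minimal sketch of pointing DNS at the new `ingress_static_ipv4` output; the cluster module name, managed zone, and record name are placeholders:

```tf
# Sketch: point a wildcard apps record at the cluster's Ingress IPv4 address.
resource "google_dns_record_set" "apps" {
  managed_zone = "example-zone"        # placeholder Cloud DNS managed zone
  name         = "*.apps.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["${module.yavin.ingress_static_ipv4}"]
}
```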

#### Addons

* Update CLUO from v0.6.0 to v0.7.0 ([#242](https://github.com/poseidon/typhoon/pull/242))
* Update Prometheus from v2.3.0 to v2.3.1
* Update Grafana from 5.1.3 to 5.1.4
* Drop `hostNetwork` from nginx-ingress addon
  * Both flannel and Calico support host port via `portmap`
  * Allows writing NetworkPolicies that reference ingress pods in `from` or `to`. HostNetwork pods were difficult to write network policy for since they could circumvent the CNI network to communicate with pods on the same node.

## v1.10.4

* Kubernetes [v1.10.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1104)
* Update etcd from v3.3.5 to v3.3.6
* Update Calico from v3.1.2 to v3.1.3

#### Addons

* Update Prometheus from v2.2.1 to v2.3.0
* Add Prometheus liveness and readiness probes
* Annotate Grafana service so Prometheus scrapes metrics
* Label namespaces to ease writing Network Policies

## v1.10.3

* Kubernetes [v1.10.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103)
* Add [Flatcar Linux](https://docs.flatcar-linux.org/) (Container Linux derivative) as an option for AWS and bare-metal (thanks @kinvolk folks)
* Allow bearer token authentication to the Kubelet ([#216](https://github.com/poseidon/typhoon/issues/216))
  * Require Webhook authorization to the Kubelet
  * Switch apiserver X509 client cert org to satisfy new authorization requirement
* Require Terraform v0.11.x and drop support for v0.10.x ([migration guide](https://typhoon.psdn.io/topics/maintenance/#terraform-v011x))
* Update etcd from v3.3.4 to v3.3.5 ([#213](https://github.com/poseidon/typhoon/pull/213))
* Update Calico from v3.1.1 to v3.1.2

#### AWS

* Allow Flatcar Linux by setting `os_image` to flatcar-stable, flatcar-beta, or flatcar-alpha ([#211](https://github.com/poseidon/typhoon/pull/211)); the default stays coreos-stable
  * Replace `os_channel` variable with `os_image` to align naming across clouds
  * Please change values stable, beta, or alpha to coreos-stable, coreos-beta, coreos-alpha (**action required!**)
* Allow preemptible workers via spot instances ([#202](https://github.com/poseidon/typhoon/pull/202)), as sketched below
  * Add `worker_price` to allow worker spot instances. Default to empty string for the worker autoscaling group to use regular on-demand instances
  * Add `spot_price` to internal `workers` module for spot [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
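
A sketch of opting into both features in an AWS cluster definition; the spot price is illustrative and all unrelated variables are elided:

```tf
# Sketch: Flatcar Linux AMIs plus spot-priced workers.
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.3"

  # ...cluster name, DNS zone, SSH key, and network settings elided...

  os_image     = "flatcar-stable"   # default is coreos-stable
  worker_price = "0.03"             # empty string keeps on-demand workers
}
```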

#### Bare-Metal

* Allow Flatcar Linux by setting `os_channel` to flatcar-stable, flatcar-beta, flatcar-alpha ([#220](https://github.com/poseidon/typhoon/pull/220))
  * Replace `container_linux_channel` variable with `os_channel`
  * Please change values stable, beta, or alpha to coreos-stable, coreos-beta, coreos-alpha (**action required!**)
  * Replace `container_linux_version` variable with `os_version`
* Add `network_ip_autodetection_method` variable for Calico host IPv4 address detection (see the sketch below)
  * Use Calico's default "first-found" to support single NIC and bonded NIC nodes
  * Allow [alternative](https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods) methods for multi NIC nodes, like can-reach=IP or interface=REGEX
* Deprecate `container_linux_oem` variable
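
A sketch of the Calico autodetection setting for multi-NIC bare-metal nodes; the module path and reachability target are illustrative, and the required matchbox/machine variables are elided:

```tf
# Sketch: prefer the NIC that can reach the given gateway IP.
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.3"

  # ...matchbox endpoint, machine domains/MACs, SSH key, etc. elided...

  os_channel                      = "coreos-stable"
  network_ip_autodetection_method = "can-reach=10.0.0.1"
}
```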

#### DigitalOcean

* Update Fedora Atomic module to use Fedora Atomic 28 ([#225](https://github.com/poseidon/typhoon/pull/225))
  * Fedora Atomic 27 images disappeared from DigitalOcean and forced this early update

#### Addons

* Fix Prometheus data directory location ([#203](https://github.com/poseidon/typhoon/pull/203))
* Configure Prometheus to scrape Kubelets directly with bearer token auth instead of proxying through the apiserver ([#217](https://github.com/poseidon/typhoon/pull/217))
  * Security improvement: Drop RBAC permission from `nodes/proxy` to `nodes/metrics`
  * Scale: Remove per-node proxied scrape load from the apiserver
* Update Grafana from 5.0.4 to 5.1.3 ([#208](https://github.com/poseidon/typhoon/pull/208))
* Disable Grafana Google Analytics by default ([#214](https://github.com/poseidon/typhoon/issues/214))
* Update nginx-ingress from 0.14.0 to 0.15.0
* Annotate nginx-ingress service so Prometheus auto-discovers and scrapes service endpoints ([#222](https://github.com/poseidon/typhoon/pull/222))

## v1.10.2

* Kubernetes [v1.10.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1102)
* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))
* Update Calico from v3.0.4 to v3.1.1 ([#197](https://github.com/poseidon/typhoon/pull/197))
  * https://www.projectcalico.org/announcing-calico-v3-1/
  * https://github.com/projectcalico/calico/releases/tag/v3.1.0

README.md

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)

@@ -29,8 +29,10 @@ Typhoon provides a Terraform Module for each supported operating system and platform

| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | very alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |

The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.

## Documentation

@@ -44,7 +46,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.0"

  providers = {
    google = "google.default"
```

@@ -86,9 +88,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud

```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                        STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal   Ready   6m   v1.10.2
yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.10.2
yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.10.2
yavin-controller-0.c.example-com.internal   Ready   6m   v1.11.0
yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.11.0
yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.11.0
```

List the pods.

@@ -99,10 +101,10 @@ NAMESPACE NAME READY STATUS RESTARTS

kube-system   calico-node-1cs8z                          2/2   Running   0   6m
kube-system   calico-node-d1l5b                          2/2   Running   0   6m
kube-system   calico-node-sp9ps                          2/2   Running   0   6m
kube-system   coredns-1187388186-zj5dl                   1/1   Running   0   6m
kube-system   kube-apiserver-zppls                       1/1   Running   0   6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1   Running   0   6m
kube-system   kube-controller-manager-3271970485-h90v8   1/1   Running   1   6m
kube-system   kube-dns-1187388186-zj5dl                  3/3   Running   0   6m
kube-system   kube-proxy-117v6                           1/1   Running   0   6m
kube-system   kube-proxy-9886n                           1/1   Running   0   6m
kube-system   kube-proxy-njn47                           1/1   Running   0   6m

@@ -18,7 +18,7 @@ spec:
    spec:
      containers:
      - name: update-agent
        image: quay.io/coreos/container-linux-update-operator:v0.6.0
        image: quay.io/coreos/container-linux-update-operator:v0.7.0
        command:
        - "/bin/update-agent"
        volumeMounts:

@@ -15,7 +15,7 @@ spec:
    spec:
      containers:
      - name: update-operator
        image: quay.io/coreos/container-linux-update-operator:v0.6.0
        image: quay.io/coreos/container-linux-update-operator:v0.7.0
        command:
        - "/bin/update-operator"
        env:

@@ -21,7 +21,7 @@ spec:
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.0.4
        image: grafana/grafana:5.2.1
        env:
        - name: GF_SERVER_HTTP_PORT
          value: "8080"
@@ -31,6 +31,8 @@ spec:
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Viewer
        - name: GF_ANALYTICS_REPORTING_ENABLED
          value: "false"
        ports:
        - name: http
          containerPort: 8080

@@ -3,6 +3,9 @@ kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
spec:
  type: ClusterIP
  selector:

@@ -14,11 +14,13 @@ spec:
      labels:
        name: heapster
        phase: prod
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.2
        image: k8s.gcr.io/heapster-amd64:v1.5.3
        command:
        - /heapster
        - --source=kubernetes.summary_api:''

@@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
  name: ingress
  labels:
    name: ingress

@@ -20,10 +20,9 @@ spec:
    spec:
      nodeSelector:
        node-role.kubernetes.io/node: ""
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.2
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,5 +66,12 @@ spec:
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

@@ -3,6 +3,9 @@ kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '10254'
spec:
  type: ClusterIP
  selector:

@@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring

@@ -56,12 +56,7 @@ data:
        target_label: job

      # Scrape config for node (i.e. kubelet) /metrics (e.g. 'kubelet_'). Explore
      # metrics from a node by scraping kubelet (127.0.0.1:10255/metrics).
      #
      # Rather than connecting directly to the node, the scrape is proxied through the
      # Kubernetes apiserver. This means it will work if Prometheus is running out of
      # cluster, or can't connect to nodes for some other reason (e.g. because of
      # firewalling).
      # metrics from a node by scraping kubelet (127.0.0.1:10250/metrics).
      - job_name: 'kubelet'
        kubernetes_sd_configs:
        - role: node
@@ -69,50 +64,34 @@ data:
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          # Kubelet certs don't have any fixed IP SANs
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics

      # Scrape config for Kubelet cAdvisor. Explore metrics from a node by
      # scraping kubelet (127.0.0.1:10255/metrics/cadvisor).
      #
      # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
      # (those whose names begin with 'container_') have been removed from the
      # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
      # retrieve those metrics.
      #
      # Rather than connecting directly to the node, the scrape is proxied through the
      # Kubernetes apiserver. This means it will work if Prometheus is running out of
      # cluster, or can't connect to nodes for some other reason (e.g. because of
      # firewalling).
      # scraping kubelet (127.0.0.1:10250/metrics/cadvisor).
      - job_name: 'kubernetes-cadvisor'
        kubernetes_sd_configs:
        - role: node

        scheme: https
        metrics_path: /metrics/cadvisor
        tls_config:
          # Kubelet certs don't have any fixed IP SANs
          insecure_skip_verify: true
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

      # Scrape etcd metrics from controllers
      # Scrape etcd metrics from controllers via listen-metrics-urls
      - job_name: 'etcd'
        kubernetes_sd_configs:
        - role: node

@@ -17,29 +17,41 @@ spec:
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: quay.io/prometheus/prometheus:v2.2.1
        args:
        - '--config.file=/etc/prometheus/prometheus.yaml'
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: rules
          mountPath: /etc/prometheus/rules
        - name: data
          mountPath: /var/lib/prometheus
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      - name: prometheus
        image: quay.io/prometheus/prometheus:v2.3.1
        args:
        - --web.listen-address=0.0.0.0:9090
        - --config.file=/etc/prometheus/prometheus.yaml
        - --storage.tsdb.path=/var/lib/prometheus
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
        - name: rules
          mountPath: /etc/prometheus/rules
        - name: data
          mountPath: /var/lib/prometheus
        readinessProbe:
          httpGet:
            path: /-/ready
            port: 9090
          initialDelaySeconds: 10
          timeoutSeconds: 10
        livenessProbe:
          httpGet:
            path: /-/healthy
            port: 9090
          initialDelaySeconds: 10
          timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config
        configMap:
          name: prometheus-config
      - name: rules
        configMap:
          name: prometheus-rules
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: prometheus-config
      - name: rules
        configMap:
          name: prometheus-rules
      - name: data
        emptyDir: {}

@@ -6,7 +6,7 @@ rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/proxy
      - nodes/metrics
      - services
      - endpoints
      - pods

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@@ -19,5 +19,5 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/cl/aws/).

@@ -1,3 +1,13 @@
locals {
  # Pick a CoreOS Container Linux derivative
  # coreos-stable -> Container Linux AMI
  # flatcar-stable -> Flatcar Linux AMI
  ami_id = "${local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id}"

  flavor  = "${element(split("-", var.os_image), 0)}"
  channel = "${element(split("-", var.os_image), 1)}"
}

data "aws_ami" "coreos" {
  most_recent = true
  owners      = ["595879546273"]
@@ -14,6 +24,26 @@ data "aws_ami" "coreos" {

  filter {
    name   = "name"
    values = ["CoreOS-${var.os_channel}-*"]
    values = ["CoreOS-${local.channel}-*"]
  }
}

data "aws_ami" "flatcar" {
  most_recent = true
  owners      = ["075585003325"]

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "name"
    values = ["Flatcar-${local.channel}-*"]
  }
}
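
A quick worked example of how the new locals split `os_image`; the input value is assumed for illustration:

```tf
# With var.os_image = "flatcar-beta":
#   flavor  = element(split("-", "flatcar-beta"), 0)   # "flatcar"
#   channel = element(split("-", "flatcar-beta"), 1)   # "beta"
# so ami_id resolves to data.aws_ami.flatcar.image_id and the
# AMI name filter becomes "Flatcar-beta-*".
```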

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"

  cluster_name = "${var.cluster_name}"
  api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -7,7 +7,7 @@ systemd:
      - name: 40-etcd-cluster.conf
        contents: |
          [Service]
          Environment="ETCD_IMAGE_TAG=v3.3.4"
          Environment="ETCD_IMAGE_TAG=v3.3.8"
          Environment="ETCD_NAME=${etcd_name}"
          Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
          Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -74,8 +74,9 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${k8s_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
@@ -121,7 +122,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.10.2
          KUBELET_IMAGE_TAG=v1.11.0
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:

@@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {

  instance_type = "${var.controller_type}"

  ami       = "${data.aws_ami.coreos.image_id}"
  ami       = "${local.ami_id}"
  user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"

  # storage
@@ -54,7 +54,7 @@ data "template_file" "controller_config" {
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

    kubeconfig         = "${indent(10, module.bootkube.kubeconfig)}"
    ssh_authorized_key = "${var.ssh_authorized_key}"
@@ -63,14 +63,14 @@ data "template_file" "controller_config" {
  }
}

# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
  count = "${var.controller_count}"
data "template_file" "etcds" {
  count    = "${var.controller_count}"
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  triggers {
    name   = "etcd${count.index}"
    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
  vars {
    index        = "${count.index}"
    cluster_name = "${var.cluster_name}"
    dns_zone     = "${var.dns_zone}"
  }
}
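
For orientation, the new template renders the familiar comma-separated initial cluster string; the cluster name and DNS zone below are placeholders:

```tf
# With controller_count = 3, cluster_name = "yavin", dns_zone = "example.com",
# join(",", data.template_file.etcds.*.rendered) produces:
#   etcd0=https://yavin-etcd0.example.com:2380,etcd1=https://yavin-etcd1.example.com:2380,etcd2=https://yavin-etcd2.example.com:2380
```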

@@ -7,15 +7,15 @@ resource "aws_route53_record" "apiserver" {

  # AWS recommends their special "alias" records for ELBs
  alias {
    name    = "${aws_lb.apiserver.dns_name}"
    zone_id = "${aws_lb.apiserver.zone_id}"
    name    = "${aws_lb.nlb.dns_name}"
    zone_id = "${aws_lb.nlb.zone_id}"
    evaluate_target_health = true
  }
}

# Network Load Balancer for apiservers
resource "aws_lb" "apiserver" {
  name = "${var.cluster_name}-apiserver"
# Network Load Balancer for apiservers and ingress
resource "aws_lb" "nlb" {
  name               = "${var.cluster_name}-nlb"
  load_balancer_type = "network"
  internal           = false

@@ -24,11 +24,11 @@ resource "aws_lb" "apiserver" {
  enable_cross_zone_load_balancing = true
}

# Forward TCP traffic to controllers
# Forward TCP apiserver traffic to controllers
resource "aws_lb_listener" "apiserver-https" {
  load_balancer_arn = "${aws_lb.apiserver.arn}"
  load_balancer_arn = "${aws_lb.nlb.arn}"
  protocol          = "TCP"
  port              = "443"
  port              = "6443"

  default_action {
    type = "forward"
@@ -36,6 +36,30 @@ resource "aws_lb_listener" "apiserver-https" {
  }
}

# Forward HTTP ingress traffic to workers
resource "aws_lb_listener" "ingress-http" {
  load_balancer_arn = "${aws_lb.nlb.arn}"
  protocol          = "TCP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = "${module.workers.target_group_http}"
  }
}

# Forward HTTPS ingress traffic to workers
resource "aws_lb_listener" "ingress-https" {
  load_balancer_arn = "${aws_lb.nlb.arn}"
  protocol          = "TCP"
  port              = 443

  default_action {
    type             = "forward"
    target_group_arn = "${module.workers.target_group_https}"
  }
}

# Target group of controllers
resource "aws_lb_target_group" "controllers" {
  name = "${var.cluster_name}-controllers"
@@ -43,12 +67,12 @@ resource "aws_lb_target_group" "controllers" {
  target_type = "instance"

  protocol = "TCP"
  port     = 443
  port     = 6443

  # TCP health check for apiserver
  health_check {
    protocol = "TCP"
    port     = 443
    port     = 6443

    # NLBs required to use same healthy and unhealthy thresholds
    healthy_threshold = 3
@@ -65,5 +89,5 @@ resource "aws_lb_target_group_attachment" "controllers" {

  target_group_arn = "${aws_lb_target_group.controllers.arn}"
  target_id        = "${element(aws_instance.controllers.*.id, count.index)}"
  port             = 443
  port             = 6443
}

@@ -1,5 +1,7 @@
# Outputs for Kubernetes Ingress

output "ingress_dns_name" {
  value       = "${module.workers.ingress_dns_name}"
  value       = "${aws_lb.nlb.dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}

@@ -23,3 +25,15 @@ output "worker_security_groups" {
output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}

# Outputs for custom load balancing

output "worker_target_group_http" {
  description = "ARN of a target group of workers for HTTP traffic"
  value       = "${module.workers.target_group_http}"
}

output "worker_target_group_https" {
  description = "ARN of a target group of workers for HTTPS traffic"
  value       = "${module.workers.target_group_https}"
}

@@ -1,11 +1,11 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.10.4"
  required_version = ">= 0.11.0"
}

provider "aws" {
  version = "~> 1.11"
  version = "~> 1.13"
}

provider "local" {

@@ -36,8 +36,8 @@ resource "aws_security_group_rule" "controller-apiserver" {

  type        = "ingress"
  protocol    = "tcp"
  from_port   = 443
  to_port     = 443
  from_port   = 6443
  to_port     = 6443
  cidr_blocks = ["0.0.0.0/0"]
}

@@ -91,6 +91,16 @@ resource "aws_security_group_rule" "controller-node-exporter" {
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-kubelet" {
  security_group_id = "${aws_security_group.controller.id}"

  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 10250
  to_port                  = 10250
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-kubelet-self" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -41,10 +41,10 @@ variable "worker_type" {
  description = "EC2 instance type for workers"
}

variable "os_channel" {
variable "os_image" {
  type        = "string"
  default     = "stable"
  description = "Container Linux AMI channel (stable, beta, alpha)"
  default     = "coreos-stable"
  description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}

variable "disk_size" {
@@ -59,6 +59,12 @@ variable "disk_type" {
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

variable "worker_price" {
  type        = "string"
  default     = ""
  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}

variable "controller_clc_snippets" {
  type        = "list"
  description = "Controller Container Linux Config snippets"
@@ -110,7 +116,7 @@ variable "pod_cidr" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"
@@ -118,7 +124,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type        = "string"
  default     = "cluster.local"
}

@@ -8,8 +8,9 @@ module "workers" {
  security_groups = ["${aws_security_group.worker.id}"]
  count           = "${var.worker_count}"
  instance_type   = "${var.worker_type}"
  os_channel      = "${var.os_channel}"
  os_image        = "${var.os_image}"
  disk_size       = "${var.disk_size}"
  spot_price      = "${var.worker_price}"

  # configuration
  kubeconfig = "${module.bootkube.kubeconfig}"
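
The same `os_image` and `spot_price` plumbing is what makes standalone [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) spot-capable. A hedged sketch of an extra pool follows; the module path is assumed from the worker-pools docs, and the names, count, type, and price are illustrative placeholders:

```tf
# Sketch: an additional spot-priced worker pool joined to an existing cluster.
# "tempest" is a hypothetical cluster module; network IDs are elided.
module "tempest-spot-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.0"

  name = "tempest-spot"
  # ...vpc_id, subnet_ids, security_groups elided...

  count         = 2
  instance_type = "m5.large"
  os_image      = "coreos-stable"
  spot_price    = "0.05"   # empty string would keep on-demand instances

  # configuration
  kubeconfig         = "${module.tempest.kubeconfig}"
  ssh_authorized_key = "${var.ssh_authorized_key}"
}
```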

@@ -47,8 +47,9 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${k8s_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
@@ -91,7 +92,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.10.2
          KUBELET_IMAGE_TAG=v1.11.0
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
@@ -109,7 +110,7 @@ storage:
            --volume config,kind=host,source=/etc/kubernetes \
            --mount volume=config,target=/etc/kubernetes \
            --insecure-options=image \
            docker://k8s.gcr.io/hyperkube:v1.10.2 \
            docker://k8s.gcr.io/hyperkube:v1.11.0 \
            --net=host \
            --dns=host \
            --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -1,39 +1,4 @@
# Network Load Balancer for Ingress
resource "aws_lb" "ingress" {
  name               = "${var.name}-ingress"
  load_balancer_type = "network"
  internal           = false

  subnets = ["${var.subnet_ids}"]

  enable_cross_zone_load_balancing = true
}

# Forward HTTP traffic to workers
resource "aws_lb_listener" "ingress-http" {
  load_balancer_arn = "${aws_lb.ingress.arn}"
  protocol          = "TCP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.workers-http.arn}"
  }
}

# Forward HTTPS traffic to workers
resource "aws_lb_listener" "ingress-https" {
  load_balancer_arn = "${aws_lb.ingress.arn}"
  protocol          = "TCP"
  port              = 443

  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.workers-https.arn}"
  }
}

# Network Load Balancer target groups of instances
# Target groups of instances for use with load balancers

resource "aws_lb_target_group" "workers-http" {
  name = "${var.name}-workers-http"

@@ -1,4 +1,9 @@
output "ingress_dns_name" {
  value       = "${aws_lb.ingress.dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
output "target_group_http" {
  description = "ARN of a target group of workers for HTTP traffic"
  value       = "${aws_lb_target_group.workers-http.arn}"
}

output "target_group_https" {
  description = "ARN of a target group of workers for HTTPS traffic"
  value       = "${aws_lb_target_group.workers-https.arn}"
}

@@ -34,10 +34,10 @@ variable "instance_type" {
  description = "EC2 instance type"
}

variable "os_channel" {
variable "os_image" {
  type        = "string"
  default     = "stable"
  description = "Container Linux AMI channel (stable, beta, alpha)"
  default     = "coreos-stable"
  description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}

variable "disk_size" {
@@ -52,6 +52,12 @@ variable "disk_type" {
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

variable "spot_price" {
  type        = "string"
  default     = ""
  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}

variable "clc_snippets" {
  type        = "list"
  description = "Container Linux Config snippets"
@@ -73,7 +79,7 @@ variable "ssh_authorized_key" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"
@@ -81,7 +87,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type        = "string"
  default     = "cluster.local"
}

@@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
    create_before_destroy = true
  }

  # Waiting for instance creation delays adding the ASG to state. If instances
  # can't be created (e.g. spot price too low), the ASG will be orphaned.
  # Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
  # used. Disable wait to avoid issues and align with other clouds.
  wait_for_capacity_timeout = "0"

  tags = [{
    key   = "Name"
    value = "${var.name}-worker"
@@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {

# Worker template
resource "aws_launch_configuration" "worker" {
  image_id      = "${data.aws_ami.coreos.image_id}"
  instance_type = "${var.instance_type}"
  image_id          = "${local.ami_id}"
  instance_type     = "${var.instance_type}"
  spot_price        = "${var.spot_price}"
  enable_monitoring = false

  user_data = "${data.ct_config.worker_ign.rendered}"

@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {

  filter {
    name   = "name"
    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
    values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
  }
}

@@ -51,8 +51,9 @@ write_files:
        RestartSec=10
  - path: /etc/kubernetes/kubelet.conf
    content: |
      ARGS="--allow-privileged \
        --anonymous-auth=false \
      ARGS="--anonymous-auth=false \
        --authentication-token-webhook \
        --authorization-mode=Webhook \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${k8s_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
@@ -91,8 +92,8 @@ bootcmd:
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, restart, NetworkManager]
  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
  - [systemctl, start, --no-block, etcd.service]
  - [systemctl, enable, cloud-metadata.service]

@@ -54,7 +54,7 @@ data "template_file" "controller-cloudinit" {
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

    kubeconfig         = "${indent(6, module.bootkube.kubeconfig)}"
    ssh_authorized_key = "${var.ssh_authorized_key}"
@@ -63,13 +63,13 @@ data "template_file" "controller-cloudinit" {
  }
}

# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
  count = "${var.controller_count}"
data "template_file" "etcds" {
  count    = "${var.controller_count}"
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  triggers {
    name   = "etcd${count.index}"
    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
  vars {
    index        = "${count.index}"
    cluster_name = "${var.cluster_name}"
    dns_zone     = "${var.dns_zone}"
  }
}

@@ -53,6 +53,12 @@ variable "disk_type" {
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

variable "worker_price" {
  type        = "string"
  default     = ""
  description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}

# configuration

variable "ssh_authorized_key" {
@@ -92,7 +98,7 @@ variable "pod_cidr" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"
@@ -100,7 +106,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type        = "string"
  default     = "cluster.local"
}
@ -9,6 +9,7 @@ module "workers" {
|
||||
count = "${var.worker_count}"
|
||||
instance_type = "${var.worker_type}"
|
||||
disk_size = "${var.disk_size}"
|
||||
spot_price = "${var.worker_price}"
|
||||
|
||||
# configuration
|
||||
kubeconfig = "${module.bootkube.kubeconfig}"
@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {

 filter {
 name = "name"
- values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
+ values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
 }
 }

@@ -30,8 +30,9 @@ write_files:
 RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
 content: |
- ARGS="--allow-privileged \
- --anonymous-auth=false \
+ ARGS="--anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -68,7 +69,7 @@ runcmd:
 - [systemctl, daemon-reload]
 - [systemctl, restart, NetworkManager]
 - [systemctl, enable, cloud-metadata.service]
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
 - [systemctl, start, --no-block, kubelet.service]
 users:
 - default
@@ -1,39 +1,4 @@
- # Network Load Balancer for Ingress
- resource "aws_lb" "ingress" {
- name = "${var.name}-ingress"
- load_balancer_type = "network"
- internal = false
-
- subnets = ["${var.subnet_ids}"]
-
- enable_cross_zone_load_balancing = true
- }
-
- # Forward HTTP traffic to workers
- resource "aws_lb_listener" "ingress-http" {
- load_balancer_arn = "${aws_lb.ingress.arn}"
- protocol = "TCP"
- port = 80
-
- default_action {
- type = "forward"
- target_group_arn = "${aws_lb_target_group.workers-http.arn}"
- }
- }
-
- # Forward HTTPS traffic to workers
- resource "aws_lb_listener" "ingress-https" {
- load_balancer_arn = "${aws_lb.ingress.arn}"
- protocol = "TCP"
- port = 443
-
- default_action {
- type = "forward"
- target_group_arn = "${aws_lb_target_group.workers-https.arn}"
- }
- }
-
- # Network Load Balancer target groups of instances
+ # Target groups of instances for use with load balancers

 resource "aws_lb_target_group" "workers-http" {
 name = "${var.name}-workers-http"

@@ -1,4 +1,9 @@
- output "ingress_dns_name" {
- value = "${aws_lb.ingress.dns_name}"
- description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
+ output "target_group_http" {
+ description = "ARN of a target group of workers for HTTP traffic"
+ value = "${aws_lb_target_group.workers-http.arn}"
 }
+
+ output "target_group_https" {
+ description = "ARN of a target group of workers for HTTPS traffic"
+ value = "${aws_lb_target_group.workers-https.arn}"
+ }
@@ -46,6 +46,12 @@ variable "disk_type" {
 description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+ variable "spot_price" {
+ type = "string"
+ default = ""
+ description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
+ }
+
 # configuration

 variable "kubeconfig" {
@@ -61,7 +67,7 @@ variable "ssh_authorized_key" {
 variable "service_cidr" {
 description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

 type = "string"
@@ -69,7 +75,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
- description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+ description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
 type = "string"
 default = "cluster.local"
 }

@@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
 create_before_destroy = true
 }

+ # Waiting for instance creation delays adding the ASG to state. If instances
+ # can't be created (e.g. spot price too low), the ASG will be orphaned.
+ # Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
+ # used. Disable wait to avoid issues and align with other clouds.
+ wait_for_capacity_timeout = "0"
+
 tags = [{
 key = "Name"
 value = "${var.name}-worker"
@@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {

 # Worker template
 resource "aws_launch_configuration" "worker" {
- image_id = "${data.aws_ami.fedora.image_id}"
- instance_type = "${var.instance_type}"
+ image_id          = "${data.aws_ami.fedora.image_id}"
+ instance_type     = "${var.instance_type}"
+ spot_price        = "${var.spot_price}"
+ enable_monitoring = false

 user_data = "${data.template_file.worker-cloudinit.rendered}"
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

- * Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+ * Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

- Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
+ Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).

@@ -1,14 +1,15 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
- source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+ source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"

- cluster_name = "${var.cluster_name}"
- api_servers = ["${var.k8s_domain_name}"]
- etcd_servers = ["${var.controller_domains}"]
- asset_dir = "${var.asset_dir}"
- networking = "${var.networking}"
- network_mtu = "${var.network_mtu}"
- pod_cidr = "${var.pod_cidr}"
- service_cidr = "${var.service_cidr}"
- cluster_domain_suffix = "${var.cluster_domain_suffix}"
+ cluster_name                    = "${var.cluster_name}"
+ api_servers                     = ["${var.k8s_domain_name}"]
+ etcd_servers                    = ["${var.controller_domains}"]
+ asset_dir                       = "${var.asset_dir}"
+ networking                      = "${var.networking}"
+ network_mtu                     = "${var.network_mtu}"
+ network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
+ pod_cidr                        = "${var.pod_cidr}"
+ service_cidr                    = "${var.service_cidr}"
+ cluster_domain_suffix           = "${var.cluster_domain_suffix}"
 }

@@ -7,7 +7,7 @@ systemd:
 - name: 40-etcd-cluster.conf
 contents: |
 [Service]
- Environment="ETCD_IMAGE_TAG=v3.3.4"
+ Environment="ETCD_IMAGE_TAG=v3.3.8"
 Environment="ETCD_NAME=${etcd_name}"
 Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
 Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@@ -82,8 +82,9 @@ systemd:
 ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
 ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
 ExecStart=/usr/lib/coreos/kubelet-wrapper \
- --allow-privileged \
 --anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -122,7 +123,7 @@ storage:
 contents:
 inline: |
 KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
- KUBELET_IMAGE_TAG=v1.10.2
+ KUBELET_IMAGE_TAG=v1.11.0
 - path: /etc/hostname
 filesystem: root
 mode: 0644

@@ -31,10 +31,10 @@ storage:
 inline: |
 #!/bin/bash -ex
 curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
- coreos-install \
+ ${os_flavor}-install \
 -d ${install_disk} \
- -C ${container_linux_channel} \
- -V ${container_linux_version} \
+ -C ${os_channel} \
+ -V ${os_version} \
 -o "${container_linux_oem}" \
 ${baseurl_flag} \
 -i ignition.json
@@ -55,8 +55,9 @@ systemd:
 ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
 ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
 ExecStart=/usr/lib/coreos/kubelet-wrapper \
- --allow-privileged \
 --anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -83,7 +84,7 @@ storage:
 contents:
 inline: |
 KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
- KUBELET_IMAGE_TAG=v1.10.2
+ KUBELET_IMAGE_TAG=v1.11.0
 - path: /etc/hostname
 filesystem: root
 mode: 0644

@@ -1,9 +1,9 @@
 // Install Container Linux to disk
- resource "matchbox_group" "container-linux-install" {
+ resource "matchbox_group" "install" {
 count = "${length(var.controller_names) + length(var.worker_names)}"

- name = "${format("container-linux-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
- profile = "${var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
+ name = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
+
+ profile = "${local.flavor == "flatcar" ? element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"

 selector {
 mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
@@ -1,12 +1,20 @@
+ locals {
+ # coreos-stable -> coreos flavor, stable channel
+ # flatcar-stable -> flatcar flavor, stable channel
+ flavor = "${element(split("-", var.os_channel), 0)}"
+
+ channel = "${element(split("-", var.os_channel), 1)}"
+ }
+
 // Container Linux Install profile (from release.core-os.net)
 resource "matchbox_profile" "container-linux-install" {
 count = "${length(var.controller_names) + length(var.worker_names)}"
 name = "${format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"

- kernel = "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
+ kernel = "http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe.vmlinuz"

 initrd = [
- "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
+ "http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe_image.cpio.gz",
 ]

 args = [
@@ -24,15 +32,16 @@ resource "matchbox_profile" "container-linux-install" {
 data "template_file" "container-linux-install-configs" {
 count = "${length(var.controller_names) + length(var.worker_names)}"

- template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
+ template = "${file("${path.module}/cl/install.yaml.tmpl")}"

 vars {
- container_linux_channel = "${var.container_linux_channel}"
- container_linux_version = "${var.container_linux_version}"
- ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
- install_disk = "${var.install_disk}"
- container_linux_oem = "${var.container_linux_oem}"
- ssh_authorized_key = "${var.ssh_authorized_key}"
+ os_flavor           = "${local.flavor}"
+ os_channel          = "${local.channel}"
+ os_version          = "${var.os_version}"
+ ignition_endpoint   = "${format("%s/ignition", var.matchbox_http_endpoint)}"
+ install_disk        = "${var.install_disk}"
+ container_linux_oem = "${var.container_linux_oem}"
+ ssh_authorized_key  = "${var.ssh_authorized_key}"

 # only cached-container-linux profile adds -b baseurl
 baseurl_flag = ""
@@ -40,15 +49,15 @@ data "template_file" "container-linux-install-configs" {
 }

 // Container Linux Install profile (from matchbox /assets cache)
- // Note: Admin must have downloaded container_linux_version into matchbox assets.
+ // Note: Admin must have downloaded os_version into matchbox assets.
 resource "matchbox_profile" "cached-container-linux-install" {
 count = "${length(var.controller_names) + length(var.worker_names)}"
 name = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"

- kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
+ kernel = "/assets/coreos/${var.os_version}/coreos_production_pxe.vmlinuz"

 initrd = [
- "/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
+ "/assets/coreos/${var.os_version}/coreos_production_pxe_image.cpio.gz",
 ]

 args = [
@@ -66,21 +75,45 @@ resource "matchbox_profile" "cached-container-linux-install" {
 data "template_file" "cached-container-linux-install-configs" {
 count = "${length(var.controller_names) + length(var.worker_names)}"

- template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
+ template = "${file("${path.module}/cl/install.yaml.tmpl")}"

 vars {
- container_linux_channel = "${var.container_linux_channel}"
- container_linux_version = "${var.container_linux_version}"
- ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
- install_disk = "${var.install_disk}"
- container_linux_oem = "${var.container_linux_oem}"
- ssh_authorized_key = "${var.ssh_authorized_key}"
+ os_flavor           = "${local.flavor}"
+ os_channel          = "${local.channel}"
+ os_version          = "${var.os_version}"
+ ignition_endpoint   = "${format("%s/ignition", var.matchbox_http_endpoint)}"
+ install_disk        = "${var.install_disk}"
+ container_linux_oem = "${var.container_linux_oem}"
+ ssh_authorized_key  = "${var.ssh_authorized_key}"

 # profile uses -b baseurl to install from matchbox cache
 baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
 }
 }
+
+ // Flatcar Linux install profile (from release.flatcar-linux.net)
+ resource "matchbox_profile" "flatcar-install" {
+ count = "${length(var.controller_names) + length(var.worker_names)}"
+ name = "${format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
+
+ kernel = "http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
+
+ initrd = [
+ "http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
+ ]
+
+ args = [
+ "initrd=flatcar_production_pxe_image.cpio.gz",
+ "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
+ "flatcar.first_boot=yes",
+ "console=tty0",
+ "console=ttyS0",
+ "${var.kernel_args}",
+ ]
+
+ container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
+ }

 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
 count = "${length(var.controller_names)}"
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
- required_version = ">= 0.10.4"
+ required_version = ">= 0.11.0"
 }

 provider "local" {

@@ -2,6 +2,14 @@
 resource "null_resource" "copy-controller-secrets" {
 count = "${length(var.controller_names)}"

+ # Without depends_on, remote-exec could start and wait for machines before
+ # matchbox groups are written, causing a deadlock.
+ depends_on = [
+ "matchbox_group.install",
+ "matchbox_group.controller",
+ "matchbox_group.worker",
+ ]
+
 connection {
 type = "ssh"
 host = "${element(var.controller_domains, count.index)}"
@@ -70,6 +78,14 @@ resource "null_resource" "copy-controller-secrets" {
 resource "null_resource" "copy-worker-secrets" {
 count = "${length(var.worker_names)}"

+ # Without depends_on, remote-exec could start and wait for machines before
+ # matchbox groups are written, causing a deadlock.
+ depends_on = [
+ "matchbox_group.install",
+ "matchbox_group.controller",
+ "matchbox_group.worker",
+ ]
+
 connection {
 type = "ssh"
 host = "${element(var.worker_domains, count.index)}"

@@ -10,14 +10,14 @@ variable "matchbox_http_endpoint" {
 description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
 }

- variable "container_linux_channel" {
+ variable "os_channel" {
 type = "string"
- description = "Container Linux channel corresponding to the container_linux_version"
+ description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }

- variable "container_linux_version" {
+ variable "os_version" {
 type = "string"
- description = "Container Linux version of the kernel/initrd to PXE or the image to install"
+ description = "Version for a Container Linux derivative to PXE and install (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
 }

 # machines
@@ -76,6 +76,12 @@ variable "network_mtu" {
 default = "1480"
 }

+ variable "network_ip_autodetection_method" {
+ description = "Method to autodetect the host IPv4 address (applies to calico only)"
+ type = "string"
+ default = "first-found"
+ }
+
 variable "pod_cidr" {
 description = "CIDR IPv4 range to assign Kubernetes pods"
 type = "string"
@@ -85,7 +91,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
 description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

 type = "string"
@@ -95,7 +101,7 @@ EOD
 # optional

 variable "cluster_domain_suffix" {
- description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+ description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
 type = "string"
 default = "cluster.local"
 }
@@ -103,7 +109,7 @@ variable "cluster_domain_suffix" {
 variable "cached_install" {
 type = "string"
 default = "false"
- description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the container_linux_version into matchbox assets."
+ description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
 }

 variable "install_disk" {
@@ -115,7 +121,7 @@ variable "install_disk" {
 variable "container_linux_oem" {
 type = "string"
 default = ""
- description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
+ description = "DEPRECATED: Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
 }

 variable "kernel_args" {
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

- * Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+ * Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

- Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
+ Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).

@@ -1,7 +1,7 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
- source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+ source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"

 cluster_name = "${var.cluster_name}"
 api_servers = ["${var.k8s_domain_name}"]
 etcd_servers = ["${var.controller_domains}"]

@@ -36,8 +36,9 @@ write_files:
 RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
 content: |
- ARGS="--allow-privileged \
- --anonymous-auth=false \
+ ARGS="--anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -82,8 +83,8 @@ runcmd:
 - [systemctl, daemon-reload]
 - [systemctl, restart, NetworkManager]
 - [hostnamectl, set-hostname, ${domain_name}]
- - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
 - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
 - [systemctl, start, --no-block, etcd.service]
 - [systemctl, enable, kubelet.path]

@@ -15,8 +15,9 @@ write_files:
 RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
 content: |
- ARGS="--allow-privileged \
- --anonymous-auth=false \
+ ARGS="--anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -58,7 +59,7 @@ runcmd:
 - [systemctl, daemon-reload]
 - [systemctl, restart, NetworkManager]
 - [hostnamectl, set-hostname, ${domain_name}]
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
 - [systemctl, enable, kubelet.path]
 - [systemctl, start, --no-block, kubelet.path]
 users:
@@ -1,5 +1,5 @@
 // Install Fedora to disk
- resource "matchbox_group" "fedora-install" {
+ resource "matchbox_group" "install" {
 count = "${length(var.controller_names) + length(var.worker_names)}"

 name = "${format("fedora-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"

@@ -17,7 +17,7 @@ network --bootproto=dhcp --device=link --activate --onboot=on
 bootloader --timeout=1 --append="ds=nocloud\;seedfrom=/var/cloud-init/"
 services --enabled=cloud-init,cloud-init-local,cloud-config,cloud-final

- ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/27/x86_64/atomic-host --nogpg
+ ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/28/x86_64/atomic-host --nogpg

 reboot

@@ -27,7 +27,7 @@ curl --retry 10 "${matchbox_http_endpoint}/generic?mac=${mac}&os=installed" -o /
 echo "instance-id: iid-local01" > /var/cloud-init/meta-data

 rm -f /etc/ostree/remotes.d/fedora-atomic.conf
- ostree remote add fedora-atomic https://kojipkgs.fedoraproject.org/atomic/27 --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary
+ ostree remote add fedora-atomic https://dl.fedoraproject.org/atomic/repo/ --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-primary

 # lock root user
 passwd -l root

@@ -1,6 +1,6 @@
 locals {
- default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/27"
- atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
+ default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/28"
+ atomic_assets_endpoint  = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
 }

 // Cached Fedora Install profile (from matchbox /assets cache)

@@ -36,14 +36,15 @@ data "template_file" "install-kickstarts" {
 vars {
 matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
 atomic_assets_endpoint = "${local.atomic_assets_endpoint}"
- mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
+ mac                    = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
 }
 }

 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
- count = "${length(var.controller_names)}"
- name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
+ count = "${length(var.controller_names)}"
+ name  = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"

 # cloud-init
 generic_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
 }
@@ -65,8 +66,9 @@ data "template_file" "controller-configs" {

 // Kubernetes Worker profiles
 resource "matchbox_profile" "workers" {
- count = "${length(var.worker_names)}"
- name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
+ count = "${length(var.worker_names)}"
+ name  = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"

 # cloud-init
 generic_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
 }
@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
- required_version = ">= 0.10.4"
+ required_version = ">= 0.11.0"
 }

 provider "local" {

@@ -2,6 +2,14 @@
 resource "null_resource" "copy-controller-secrets" {
 count = "${length(var.controller_names)}"

+ # Without depends_on, remote-exec could start and wait for machines before
+ # matchbox groups are written, causing a deadlock.
+ depends_on = [
+ "matchbox_group.install",
+ "matchbox_group.controller",
+ "matchbox_group.worker",
+ ]
+
 connection {
 type = "ssh"
 host = "${element(var.controller_domains, count.index)}"
@@ -68,6 +76,14 @@ resource "null_resource" "copy-controller-secrets" {
 resource "null_resource" "copy-worker-secrets" {
 count = "${length(var.worker_names)}"

+ # Without depends_on, remote-exec could start and wait for machines before
+ # matchbox groups are written, causing a deadlock.
+ depends_on = [
+ "matchbox_group.install",
+ "matchbox_group.controller",
+ "matchbox_group.worker",
+ ]
+
 connection {
 type = "ssh"
 host = "${element(var.worker_domains, count.index)}"

@@ -11,12 +11,13 @@ variable "matchbox_http_endpoint" {
 }

 variable "atomic_assets_endpoint" {
- type = "string"
+ type    = "string"
+ default = ""

 description = <<EOD
 HTTP endpoint serving the Fedora Atomic Host vmlinuz, initrd, os repo, and ostree repo (e.g. `http://example.com/some/path`).

- Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/27
+ Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/28
 EOD
 }

@@ -85,7 +86,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
 description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

 type = "string"
@@ -93,7 +94,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
- description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+ description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
 type = "string"
 default = "cluster.local"
 }
@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

- * Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+ * Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

- Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
+ Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
- source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+ source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"

 cluster_name = "${var.cluster_name}"
 api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -7,7 +7,7 @@ systemd:
 - name: 40-etcd-cluster.conf
 contents: |
 [Service]
- Environment="ETCD_IMAGE_TAG=v3.3.4"
+ Environment="ETCD_IMAGE_TAG=v3.3.8"
 Environment="ETCD_NAME=${etcd_name}"
 Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
 Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -85,8 +85,9 @@ systemd:
 ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
 ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
 ExecStart=/usr/lib/coreos/kubelet-wrapper \
- --allow-privileged \
 --anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -127,7 +128,7 @@ storage:
 contents:
 inline: |
 KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
- KUBELET_IMAGE_TAG=v1.10.2
+ KUBELET_IMAGE_TAG=v1.11.0
 - path: /etc/sysctl.d/max-user-watches.conf
 filesystem: root
 contents:

@@ -58,8 +58,9 @@ systemd:
 ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
 ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
 ExecStart=/usr/lib/coreos/kubelet-wrapper \
- --allow-privileged \
 --anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -97,7 +98,7 @@ storage:
 contents:
 inline: |
 KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
- KUBELET_IMAGE_TAG=v1.10.2
+ KUBELET_IMAGE_TAG=v1.11.0
 - path: /etc/sysctl.d/max-user-watches.conf
 filesystem: root
 contents:
@@ -115,7 +116,7 @@ storage:
 --volume config,kind=host,source=/etc/kubernetes \
 --mount volume=config,target=/etc/kubernetes \
 --insecure-options=image \
- docker://k8s.gcr.io/hyperkube:v1.10.2 \
+ docker://k8s.gcr.io/hyperkube:v1.11.0 \
 --net=host \
 --dns=host \
 --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -69,20 +69,20 @@ data "template_file" "controller_config" {
 etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

 # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
- etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+ etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
 k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
 cluster_domain_suffix = "${var.cluster_domain_suffix}"
 }
 }

- # Horrible hack to generate a Terraform list of a desired length without dependencies.
- # Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
- resource null_resource "repeat" {
- count = "${var.controller_count}"
+ data "template_file" "etcds" {
+ count = "${var.controller_count}"
+ template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

- triggers {
- name = "etcd${count.index}"
- domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
+ vars {
+ index = "${count.index}"
+ cluster_name = "${var.cluster_name}"
+ dns_zone = "${var.dns_zone}"
 }
 }
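The replacement `template_file` renders one `name=peer-URL` pair per controller, and `join(",")` stitches them into the etcd initial cluster string. A rough worked example (the cluster name `nemo` and zone `example.com` are hypothetical), for three controllers:

```
etcd0=https://nemo-etcd0.example.com:2380,etcd1=https://nemo-etcd1.example.com:2380,etcd2=https://nemo-etcd2.example.com:2380
```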
@@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {

 tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]

- # allow ssh, http/https ingress, and peer-to-peer traffic
+ # allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
 inbound_rule = [
 {
 protocol = "tcp"
@@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
 port_range = "443"
 source_addresses = ["0.0.0.0/0", "::/0"]
 },
+ {
+ protocol = "tcp"
+ port_range = "6443"
+ source_addresses = ["0.0.0.0/0", "::/0"]
+ },
 {
 protocol = "udp"
 port_range = "1-65535"

@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
- required_version = ">= 0.10.4"
+ required_version = ">= 0.11.0"
 }

 provider "digitalocean" {

@@ -80,7 +80,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
 description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

 type = "string"
@@ -88,7 +88,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
- description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+ description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
 type = "string"
 default = "cluster.local"
 }

@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

- * Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+ * Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs

- Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
+ Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
- source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
+ source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"

 cluster_name = "${var.cluster_name}"
 api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -51,8 +51,9 @@ write_files:
 RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
 content: |
- ARGS="--allow-privileged \
- --anonymous-auth=false \
+ ARGS="--anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -88,8 +89,8 @@ bootcmd:
 - [modprobe, ip_vs]
 runcmd:
 - [systemctl, daemon-reload]
- - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
 - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
 - [systemctl, start, --no-block, etcd.service]
 - [systemctl, enable, cloud-metadata.service]

@@ -30,8 +30,9 @@ write_files:
 RestartSec=10
 - path: /etc/kubernetes/kubelet.conf
 content: |
- ARGS="--allow-privileged \
- --anonymous-auth=false \
+ ARGS="--anonymous-auth=false \
+ --authentication-token-webhook \
+ --authorization-mode=Webhook \
 --client-ca-file=/etc/kubernetes/ca.crt \
 --cluster_dns=${k8s_dns_service_ip} \
 --cluster_domain=${cluster_domain_suffix} \
@@ -65,7 +66,7 @@ bootcmd:
 runcmd:
 - [systemctl, daemon-reload]
 - [systemctl, enable, cloud-metadata.service]
- - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
+ - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
 - [systemctl, enable, kubelet.path]
 - [systemctl, start, --no-block, kubelet.path]
 users:

@@ -46,7 +46,7 @@ resource "digitalocean_droplet" "controllers" {

 user_data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"
 ssh_keys = ["${var.ssh_fingerprints}"]

 tags = [
 "${digitalocean_tag.controllers.id}",
 ]
@@ -69,7 +69,7 @@ data "template_file" "controller-cloudinit" {
 etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

 # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
- etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
+ etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

 ssh_authorized_key = "${var.ssh_authorized_key}"
 k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
@@ -77,13 +77,13 @@ data "template_file" "controller-cloudinit" {
 }
 }

- # Horrible hack to generate a Terraform list of a desired length without dependencies.
- # Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
- resource null_resource "repeat" {
- count = "${var.controller_count}"
+ data "template_file" "etcds" {
+ count = "${var.controller_count}"
+ template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

- triggers {
- name = "etcd${count.index}"
- domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
+ vars {
+ index = "${count.index}"
+ cluster_name = "${var.cluster_name}"
+ dns_zone = "${var.dns_zone}"
 }
 }

@@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {

 tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]

- # allow ssh, http/https ingress, and peer-to-peer traffic
+ # allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
 inbound_rule = [
 {
 protocol = "tcp"
@@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
 port_range = "443"
 source_addresses = ["0.0.0.0/0", "::/0"]
 },
+ {
+ protocol = "tcp"
+ port_range = "6443"
+ source_addresses = ["0.0.0.0/0", "::/0"]
+ },
 {
 protocol = "udp"
 port_range = "1-65535"

@@ -1,7 +1,7 @@
 # Terraform version and plugin versions

 terraform {
- required_version = ">= 0.10.4"
+ required_version = ">= 0.11.0"
 }

 provider "digitalocean" {

@@ -43,8 +43,8 @@ variable "worker_type" {

 variable "image" {
 type = "string"
- default = "fedora-27-x64-atomic"
- description = "OS image from which to initialize the disk (e.g. fedora-27-x64-atomic)"
+ default = "fedora-28-x64-atomic"
+ description = "OS image from which to initialize the disk (e.g. fedora-28-x64-atomic)"
 }

 # configuration
@@ -73,7 +73,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
 description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
- The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD

 type = "string"
@@ -81,7 +81,7 @@ EOD
 }

 variable "cluster_domain_suffix" {
- description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+ description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
 type = "string"
 default = "cluster.local"
 }
@@ -4,7 +4,7 @@ Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP tr

 ## AWS

- On AWS, an elastic load balancer distributes traffic across worker nodes (i.e. an auto-scaling group) running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
+ On AWS, a network load balancer (NLB) distributes traffic across a target group of worker nodes running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.

 Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
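With an NLB in front of the workers, DNS records for Ingress-served sites typically point at the load balancer's DNS name. A minimal sketch, assuming a Route53 zone resource and the `aws-tempest` cluster module name used elsewhere in these docs (both hypothetical here), using the module's `ingress_dns_name` output:

```tf
# Sketch only: CNAME an application hostname to the cluster's ingress NLB.
resource "aws_route53_record" "some-application" {
  zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}"

  name    = "app.example.com."
  type    = "CNAME"
  ttl     = 300
  records = ["${module.aws-tempest.ingress_dns_name}"]
}
```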
@@ -80,7 +80,7 @@ aap2.example.com -> 11.22.33.44
 app3.example.com -> 11.22.33.44
 ```

- Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ip`. For example, you might use Terraform to manage a Google Cloud DNS record:
+ Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:

 ```tf
 resource "google_dns_record_set" "some-application" {
@@ -91,7 +91,7 @@ resource "google_dns_record_set" "some-application" {
 name = "app.example.com."
 type = "A"
 ttl = 300
- rrdatas = ["${module.google-cloud-yavin.ingress_static_ip}"]
+ rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
 }
 ```

@@ -15,7 +15,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a wo

 ```tf
 module "tempest-worker-pool" {
- source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.2"
+ source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.0"

 providers = {
 aws = "aws.default"
@@ -33,7 +33,7 @@ module "tempest-worker-pool" {

 count = 2
 instance_type = "m5.large"
- os_channel = "beta"
+ os_image = "coreos-beta"
 }
 ```

@@ -66,12 +66,13 @@ The AWS internal `workers` module supports a number of [variables](https://githu
 |:-----|:------------|:--------|:--------|
 | count | Number of instances | 1 | 3 |
 | instance_type | EC2 instance type | "t2.small" | "t2.medium" |
- | os_channel | Container Linux AMI channel | stable | "beta", "alpha" |
+ | os_image | AMI channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
 | disk_size | Size of the disk in GB | 40 | 100 |
+ | spot_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
 | service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
 | cluster_domain_suffix | Must match `cluster_domain_suffix` of cluster | "cluster.local" | "k8s.example.com" |

- Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
+ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/) or per-region and per-type [spot prices](https://aws.amazon.com/ec2/spot/pricing/).
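For instance, a sketch of the `tempest-worker-pool` definition above adjusted to bid for spot instances (the price is only illustrative, and the pool's other required variables are elided here):

```tf
module "tempest-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.0"

  # ...providers, cluster, and network variables as in the example above...

  count         = 2
  instance_type = "m5.large"
  os_image      = "coreos-beta"
  spot_price    = "0.10"
}
```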
 ## Google Cloud

@@ -79,7 +80,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu

 ```tf
 module "yavin-worker-pool" {
- source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.2"
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.11.0"

 providers = {
 google = "google.default"
@@ -113,11 +114,11 @@ Verify a managed instance group of workers joins the cluster within a few minute

 ```
 $ kubectl get nodes
 NAME STATUS AGE VERSION
- yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
- yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
- yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
- yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.10.2
- yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.10.2
+ yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
+ yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
+ yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
+ yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.11.0
+ yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.11.0
 ```

 ### Variables

@@ -1,5 +1,15 @@
 # Announce <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo-small.png">

+ ## May 23, 2018
+
+ Starting in v1.10.3, Typhoon AWS and bare-metal `container-linux` modules allow picking between the Red Hat [Container Linux](https://coreos.com/os/docs/latest/) (formerly CoreOS Container Linux) and Kinvolk [Flatcar Linux](https://www.flatcar-linux.org/) operating system. Flatcar Linux serves as a drop-in compatible "friendly fork" of Container Linux. Flatcar Linux publishes the same channels and versions as Container Linux and gets provisioned, managed, and operated in an identical way (e.g. login as user "core").
+
+ On AWS, pick the Container Linux derivative channel by setting `os_image` to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha.
+
+ On bare-metal, pick the Container Linux derivative channel by setting `os_channel` to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha. Set the `os_version` number to PXE boot and install. Variables `container_linux_channel` and `container_linux_version` have been dropped.
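For instance, a bare-metal cluster definition might select Flatcar Linux like this (a minimal sketch; the module name, source ref, and version number are assumptions, and the cluster's other required variables are omitted):

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.0"

  # Container Linux derivative channel and release to PXE boot and install
  os_channel = "flatcar-stable"
  os_version = "1745.7.0"

  # ...cluster name, matchbox endpoint, machine names/MACs/domains, ssh key, etc...
}
```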
|
||||
|
||||
Flatcar Linux provides a familar Container Linux experience, with support from Kinvolk as an alternative to Red Hat. Typhoon offers the choice of Container Linux vendor to satisfy differing preferences and to diversify technology underpinnings, while providing a consistent Kubernetes experience across operating systems, clouds, and on-premise.
|
||||
|
||||
## April 26, 2018
|
||||
|
||||
Introducing Typhoon Kubernetes clusters for Fedora Atomic!
|
||||
|
@ -12,7 +12,7 @@ Cluster nodes provision themselves from a declarative configuration upfront. Nod
|
||||
|
||||
#### Controllers
|
||||
|
||||
Controller nodes are scheduled to run the Kubernetes `apiserver`, `scheduler`, `controller-manager`, `kube-dns`, and `kube-proxy`. A fully qualified domain name (e.g. cluster_name.domain.com) resolving to a network load balancer or round-robin DNS (depends on platform) is used to refer to the control plane.
|
||||
Controller nodes are scheduled to run the Kubernetes `apiserver`, `scheduler`, `controller-manager`, `coredns`, and `kube-proxy`. A fully qualified domain name (e.g. cluster_name.domain.com) resolving to a network load balancer or round-robin DNS (depends on platform) is used to refer to the control plane.
|
||||
|
||||
#### Workers
|
||||
|
||||
|
@ -3,11 +3,11 @@
!!! danger
    Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.

In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on AWS with Fedora Atomic.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.

Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -44,7 +44,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
  version = "~> 1.11.0"
  version = "~> 1.13.0"
  alias   = "default"

  region = "eu-central-1"
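The hunk above is cut off by the diff context. For reference, a complete provider block along these lines would look roughly like the following; the `shared_credentials_file` path is an assumption for illustration, not part of the diff:

```tf
provider "aws" {
  version = "~> 1.13.0"
  alias   = "default"

  region                  = "eu-central-1"
  shared_credentials_file = "/home/user/.config/aws/credentials" # assumed path
}
```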
@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.0"

  providers = {
    aws = "aws.default"
@ -156,9 +156,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.10.2
ip-10-0-19-112 Ready 34m v1.10.2
ip-10-0-4-22 Ready 34m v1.10.2
ip-10-0-12-221 Ready 34m v1.11.0
ip-10-0-19-112 Ready 34m v1.11.0
ip-10-0-4-22 Ready 34m v1.11.0
```

List the pods.
@ -169,10 +169,10 @@ NAMESPACE NAME READY STATUS RESTART
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system kube-apiserver-4mjbk 1/1 Running 0 34m
kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m
kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m
kube-system kube-dns-1187388186-wx1lg 3/3 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
@ -227,15 +227,15 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| worker_type | EC2 instance type for workers | "t2.small" | See below |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |

Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8991! You will get better pod-to-pod bandwidth.

!!! warning
    Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
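As an illustration of the optional inputs above, a hedged sketch; only the argument names and example values from the table are taken from the docs, and required variables repeat the earlier module example:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.0"
  # ...required variables as in the example above...

  # optional: run workers on spot instances and tune sizes
  worker_type  = "t2.small"
  worker_price = "0.10" # leave as "" for regular on-demand instances
  disk_size    = "100"
  networking   = "calico"
}
```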
@ -3,11 +3,11 @@
!!! danger
    Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.

In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Fedora Atomic.
In this tutorial, we'll network boot and provision a Kubernetes v1.11.0 cluster on bare-metal with Fedora Atomic.

First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.

Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -121,16 +121,17 @@ sudo systemctl enable httpd --now
Download the [Fedora Atomic](https://getfedora.org/en/atomic/download/) ISO which contains install files and add them to the serve directory.

```
sudo mount -o loop,ro Fedora-Atomic-ostree-*.iso /mnt
sudo mkdir -p /var/www/html/fedora/27
sudo cp -av /mnt/* /var/www/html/fedora/27/
sudo mount -o loop,ro Fedora-AtomicHost-ostree-*.iso /mnt
sudo mkdir -p /var/www/html/fedora/28
sudo cp -av /mnt/* /var/www/html/fedora/28/
sudo umount /mnt
```

Checkout the [fedora-atomic](https://pagure.io/fedora-atomic) ostree manifest repo.

```
git clone https://pagure.io/fedora-atomic.git && cd fedora-atomic
git checkout f27
git checkout f28
```

Compose an ostree repo from RPM sources.
@ -145,12 +146,12 @@ sudo rpm-ostree compose tree --repo=repo fedora-atomic-host.json
Serve the ostree `repo` as well.

```
sudo cp -r repo /var/www/html/fedora/27/
tree /var/www/html/fedora/27/
├── images
│   ├── pxeboot
│   ├── initrd.img
│   └── vmlinuz
sudo cp -r repo /var/www/html/fedora/28/
tree /var/www/html/fedora/28/
├── images
│   ├── pxeboot
│   ├── initrd.img
│   └── vmlinuz
├── isolinux/
├── repo/
```
@ -158,7 +159,7 @@ tree /var/www/html/fedora/27/
Verify `vmlinuz`, `initrd.img`, and `repo` are accessible from the HTTP server (i.e. `atomic_assets_endpoint`).

```
curl http://example.com/fedora/27/
curl http://example.com/fedora/28/
```

!!! note
@ -234,7 +235,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.0"

  providers = {
    local = "local.default"
@ -246,7 +247,7 @@ module "bare-metal-mercury" {
  # bare-metal
  cluster_name = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com"
  atomic_assets_endpoint = "http://example.com/fedora/27"
  atomic_assets_endpoint = "http://example.com/fedora/28"

  # configuration
  k8s_domain_name = "node1.example.com"
@ -360,9 +361,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.10.2
node2.example.com Ready 11m v1.10.2
node3.example.com Ready 11m v1.10.2
node1.example.com Ready 11m v1.11.0
node2.example.com Ready 11m v1.11.0
node3.example.com Ready 11m v1.11.0
```

List the pods.
@ -373,10 +374,10 @@ NAMESPACE NAME READY STATUS RES
kube-system calico-node-6qp7f 2/2 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-7336w 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-b9chx 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-v30js 1/1 Running 1 11m
kube-system kube-dns-1187388186-mx9rt 3/3 Running 0 11m
kube-system kube-proxy-50sd4 1/1 Running 0 11m
kube-system kube-proxy-bczhp 1/1 Running 0 11m
kube-system kube-proxy-mp2fw 1/1 Running 0 11m
@ -400,7 +401,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | mercury |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/27" |
| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/28" |
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
@ -419,6 +420,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
| kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
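For illustration, a hedged sketch of setting a few of these optional inputs on the bare-metal module; only the names and example values from the table are taken from the docs, and required variables repeat the earlier example:

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.0"
  # ...required variables as in the example above...

  # optional
  network_mtu           = 1480
  cluster_domain_suffix = "cluster.local"
  kernel_args           = ["kvm-intel.nested=1"]
}
```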
@ -3,11 +3,11 @@
!!! danger
    Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.

In this tutorial, we'll create a Kubernetes v1.10.2 cluster on DigitalOcean with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on DigitalOcean with Fedora Atomic.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.

Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.0"

  providers = {
    digitalocean = "digitalocean.default"
@ -152,19 +152,19 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.10.2
10.132.115.81 Ready 10m v1.10.2
10.132.124.107 Ready 10m v1.10.2
10.132.110.130 Ready 10m v1.11.0
10.132.115.81 Ready 10m v1.11.0
10.132.124.107 Ready 10m v1.11.0
```

List the pods.

```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system kube-apiserver-n10qr 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-37gtw 1/1 Running 1 11m
kube-system kube-controller-manager-3271970485-p52t5 1/1 Running 0 11m
kube-system kube-dns-1187388186-ld1j7 3/3 Running 0 11m
kube-system kube-flannel-1cq1v 2/2 Running 0 11m
kube-system kube-flannel-hq9t0 2/2 Running 1 11m
kube-system kube-flannel-v0g9w 2/2 Running 0 11m
@ -241,7 +241,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |

Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
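As a sketch of how a droplet size from that list is applied (only the names and example values in the table come from the docs; required variables repeat the earlier example):

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.0"
  # ...required variables as in the example above...

  # optional: larger worker droplets and a smaller service range
  worker_type  = "s-1vcpu-2gb"
  service_cidr = "10.3.0.0/24"
}
```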
@ -1,13 +1,13 @@
# Google Cloud

!!! danger
    Typhoon for Fedora Atomic is very alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Some addons don't work yet. Expect rough edges and changes.
    Typhoon for Fedora Atomic is alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes.

In this tutorial, we'll create a Kubernetes v1.10.2 cluster on Google Compute Engine with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on Google Compute Engine with Fedora Atomic.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.

Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -83,35 +83,37 @@ Additional configuration options are described in the `google` provider [docs](h

Project Atomic does not publish official Fedora Atomic images to Google Cloud. However, Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.

Download the Fedora Atomic 27 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.
Download the Fedora Atomic 28 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.

```
xz -d Fedora-Atomic-27-20180326.1.x86_64.raw.xz
xz -d Fedora-AtomicHost-28-20180528.0.x86_64.raw.xz
```

!!! warning
    Download the exact dated version shown in docs. Fedora has no official Atomic images for Google Cloud. We've verified specific versions and found others to have problems.

Rename the image `disk.raw`. Gzip compress and tar the image.

```
mv Fedora-Atomic-27-20180326.1.x86_64.raw disk.raw
tar cvzf fedora-atomic-27.tar.gz disk.raw
mv Fedora-AtomicHost-28-20180528.0.x86_64.raw disk.raw
tar cvzf fedora-atomic-28.tar.gz disk.raw
```

List available storage buckets and upload the tar.gz.

```
gsutil list
gsutil cp fedora-atomic-27.tar.gz gs://BUCKET_NAME
gsutil cp fedora-atomic-28.tar.gz gs://BUCKET_NAME
```

Create a Google Compute Engine image from the bucket file.

```
gcloud compute images list
gcloud compute images create fedora-atomic-27 --source-uri gs://BUCKET/fedora-atomic-27.tar.gz
gcloud compute images create fedora-atomic-28 --source-uri gs://BUCKET/fedora-atomic-28.tar.gz
```
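The `gcloud` command above is what the tutorial uses. If you prefer to keep this step in Terraform alongside the rest of the configuration, the same import can be expressed with a `google_compute_image` resource; this is a sketch, not part of the original docs, and the bucket and object names are placeholders:

```tf
# Import the uploaded tar.gz from Cloud Storage as a Compute Engine image
resource "google_compute_image" "fedora-atomic-28" {
  name = "fedora-atomic-28"

  raw_disk {
    source = "https://storage.googleapis.com/BUCKET_NAME/fedora-atomic-28.tar.gz" # placeholder bucket
  }
}
```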
Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-27).
Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-28).

## Cluster
@ -119,7 +121,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.11.0"

  providers = {
    google = "google.default"
@ -138,7 +140,7 @@ module "google-cloud-yavin" {
  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir = "/home/user/.secrets/clusters/yavin"
  os_image = "MY-PROJECT_ID/fedora-atomic-27"
  os_image = "MY-PROJECT_ID/fedora-atomic-28"

  # optional
  worker_count = 2
@ -195,9 +197,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
```

List the pods.
@ -208,10 +210,10 @@ NAMESPACE NAME READY STATUS RESTART
kube-system calico-node-1cs8z 2/2 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m
kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m
kube-system kube-apiserver-zppls 1/1 Running 0 6m
kube-system kube-controller-manager-3271970485-gh9kt 1/1 Running 0 6m
kube-system kube-controller-manager-3271970485-h90v8 1/1 Running 1 6m
kube-system kube-dns-1187388186-zj5dl 3/3 Running 0 6m
kube-system kube-proxy-117v6 1/1 Running 0 6m
kube-system kube-proxy-9886n 1/1 Running 0 6m
kube-system kube-proxy-njn47 1/1 Running 0 6m
@ -236,7 +238,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-
| region | Google Cloud region | "us-central1" |
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
| os_image | Custom uploaded Fedora Atomic 27 image | "PROJECT-ID/fedora-atomic-27" |
| os_image | Custom uploaded Fedora Atomic image | "PROJECT-ID/fedora-atomic-28" |
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
@ -272,7 +274,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |

Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
@ -1,10 +1,10 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on AWS with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -57,7 +57,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
  version = "~> 1.11.0"
  version = "~> 1.13.0"
  alias   = "default"

  region = "eu-central-1"
@ -96,7 +96,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.0"

  providers = {
    aws = "aws.default"
@ -169,9 +169,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.10.2
ip-10-0-19-112 Ready 34m v1.10.2
ip-10-0-4-22 Ready 34m v1.10.2
ip-10-0-12-221 Ready 34m v1.11.0
ip-10-0-19-112 Ready 34m v1.11.0
ip-10-0-4-22 Ready 34m v1.11.0
```

List the pods.
@ -182,10 +182,10 @@ NAMESPACE NAME READY STATUS RESTART
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system kube-apiserver-4mjbk 1/1 Running 0 34m
kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m
kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m
kube-system kube-dns-1187388186-wx1lg 3/3 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
@ -241,9 +241,10 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t2.small" | See below |
| worker_type | EC2 instance type for workers | "t2.small" | See below |
| os_channel | Container Linux AMI channel | stable | stable, beta, alpha |
| os_image | AMI channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
@ -251,10 +252,12 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |

Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8991! You will get better pod-to-pod bandwidth.
!!! warning
    Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
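To make the tip concrete, a hedged sketch of raising the MTU on the AWS module; only `network_mtu`, the value 8981, and the module source come from the docs, and required variables repeat the earlier example:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.0"
  # ...required variables as in the example above...

  # optional: jumbo frames for better pod-to-pod bandwidth (calico)
  network_mtu = 8981
}
```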
@ -1,10 +1,10 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.11.0 cluster on bare-metal with Container Linux.

First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements
@ -174,7 +174,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.2"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.0"

  providers = {
    local = "local.default"
@ -186,8 +186,8 @@ module "bare-metal-mercury" {
  # bare-metal
  cluster_name = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com"
  container_linux_channel = "stable"
  container_linux_version = "1632.3.0"
  os_channel = "coreos-stable"
  os_version = "1632.3.0"

  # configuration
  k8s_domain_name = "node1.example.com"
@ -283,9 +283,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.

```
# before v1.10.2
# before v1.11.0
$ ssh debug@node1.example.com
# after v1.10.2
# after v1.11.0
$ ssh -p 2222 core@node1.example.com
```
@ -310,9 +310,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.10.2
node2.example.com Ready 11m v1.10.2
node3.example.com Ready 11m v1.10.2
node1.example.com Ready 11m v1.11.0
node2.example.com Ready 11m v1.11.0
node3.example.com Ready 11m v1.11.0
```

List the pods.
@ -323,10 +323,10 @@ NAMESPACE NAME READY STATUS RES
kube-system calico-node-6qp7f 2/2 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m
kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m
kube-system kube-apiserver-7336w 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-b9chx 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-v30js 1/1 Running 1 11m
kube-system kube-dns-1187388186-mx9rt 3/3 Running 0 11m
kube-system kube-proxy-50sd4 1/1 Running 0 11m
kube-system kube-proxy-bczhp 1/1 Running 0 11m
kube-system kube-proxy-mp2fw 1/1 Running 0 11m
@ -353,8 +353,8 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | mercury |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:port |
| container_linux_channel | Container Linux channel | stable, beta, alpha |
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
| os_channel | Channel for a Container Linux derivative | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| os_version | Version for a Container Linux derivative to PXE and install | 1632.3.0 |
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
@ -369,13 +369,13 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this | false | true |
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this (coreos only for now) | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| container_linux_oem | Specify alternative OEM image ids for the disk install | "" | "vmware_raw", "xen" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | first-found | can-reach=10.0.0.1 |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
| kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
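For illustration, a hedged sketch of a few of these optional inputs on the bare-metal module; only the names and example values from the table come from the docs, and required variables repeat the earlier example:

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.0"
  # ...required variables as in the example above...

  # optional: install from the Matchbox /assets cache onto a non-default disk
  cached_install = "true"
  install_disk   = "/dev/sdb"
  kernel_args    = ["kvm-intel.nested=1"]
}
```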