Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 17:51:33 +02:00)

Compare commits: 57 commits
CHANGES.md (+102)

@@ -4,6 +4,108 @@ Notable changes between versions.

## Latest

## v1.12.3

* Kubernetes [v1.12.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1123)
* Add `enable_reporting` variable (default "false") to provide upstreams with usage data ([#345](https://github.com/poseidon/typhoon/pull/345))
* Change kube-apiserver `--kubelet-preferred-address-types` to InternalIP,ExternalIP,Hostname
* Update Calico from v3.3.0 to [v3.3.1](https://docs.projectcalico.org/v3.3/releases/)
  * Disable Felix usage reporting by default ([#345](https://github.com/poseidon/typhoon/pull/345))
* Improve flannel manifests
  * [Rename](https://github.com/poseidon/terraform-render-bootkube/commit/d045a8e6b8eccfbb9d69bb51953b5a93d23f67f7) the `kube-flannel` DaemonSet to `flannel` and the `kube-flannel-cfg` ConfigMap to `flannel-config`
  * [Drop](https://github.com/poseidon/terraform-render-bootkube/commit/39f9afb3360ec642e5b98457c8bd07eda35b6c96) unused mounts and add a CPU resource request
* Update CoreDNS from v1.2.4 to [v1.2.6](https://coredns.io/2018/11/05/coredns-1.2.6-release/)
  * Enable the CoreDNS `loop` and `loadbalance` plugins ([#340](https://github.com/poseidon/typhoon/pull/340))
* Fix pod-checkpointer log noise and checkpointable pod detection ([#346](https://github.com/poseidon/typhoon/pull/346))
* Use kubernetes-incubator/bootkube v0.14.0
* [Recommend](https://typhoon.psdn.io/topics/maintenance/#terraform-plugins-directory) switching from `~/.terraformrc` to the Terraform [third-party plugins](https://www.terraform.io/docs/configuration/providers.html#third-party-plugins) directory `~/.terraform.d/plugins/` (sketched below)
  * Allows pinning `terraform-provider-ct` and `terraform-provider-matchbox` versions
  * Improves the safety of later plugin version migrations
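For illustration only, pinning those plugins might look like the following sketch; the binary locations and version numbers here are assumptions, not part of this release:

```sh
# Terraform 0.11 discovers third-party plugins in ~/.terraform.d/plugins/
# when provider binaries are named terraform-provider-NAME_vVERSION.
mkdir -p ~/.terraform.d/plugins
mv terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
mv terraform-provider-matchbox ~/.terraform.d/plugins/terraform-provider-matchbox_v0.2.2
```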
#### Azure

* Use eviction policy `Delete` for `Low` priority virtual machine scale set workers ([#343](https://github.com/poseidon/typhoon/pull/343))
  * Fix an issue where Azure defaulted to the `Deallocate` eviction policy, which required manually restarting deallocated instances. The `Delete` policy aligns Azure with AWS and GCP behavior.
* Require `terraform-provider-azurerm` v1.19+ (action required)

#### Bare-Metal

* Add Kubelet `/etc/iscsi` and `iscsiadm` mounts on bare-metal for iSCSI ([#103](https://github.com/poseidon/typhoon/pull/103))

#### Addons

* Update nginx-ingress from v0.20.0 to v0.21.0
* Update Prometheus from v2.4.3 to v2.5.0
* Update Grafana from v5.3.2 to v5.3.4

## v1.12.2

* Kubernetes [v1.12.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1122)
* Update CoreDNS from 1.2.2 to [1.2.4](https://github.com/coredns/coredns/releases/tag/v1.2.4)
* Update Calico from v3.2.3 to [v3.3.0](https://docs.projectcalico.org/v3.3/releases/)
* Disable the Kubelet read-only port ([#324](https://github.com/poseidon/typhoon/pull/324))
* Fix the CoreDNS AntiAffinity spec to prefer spreading replicas
* Ignore controller node user-data changes ([#335](https://github.com/poseidon/typhoon/pull/335))
  * Once all managed clusters use v1.12.2, it is possible to update `terraform-provider-ct`

#### AWS

* Add a `disk_iops` variable for EBS volume IOPS ([#314](https://github.com/poseidon/typhoon/pull/314)), sketched below
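A hedged sketch of setting the new variable on a cluster module; the module name and values are illustrative, not from this diff:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.12.2"

  # disk_iops only takes effect with provisioned-IOPS volume types (io1);
  # the default "0" leaves standard/gp2 volumes unaffected
  disk_type = "io1"
  disk_iops = "400"

  # ...other required cluster variables elided
}
```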
#### Azure

* Use the new `azurerm_network_interface_backend_address_pool_association` resource ([#332](https://github.com/poseidon/typhoon/pull/332))
  * Require `terraform-provider-azurerm` v1.17+ (action required)
* Add the `primary` field to `ip_configuration`, needed by v1.17+ ([#331](https://github.com/poseidon/typhoon/pull/331))

#### DigitalOcean

* Add AAAA DNS records resolving to worker nodes ([#333](https://github.com/poseidon/typhoon/pull/333))
  * Hosting IPv6 apps requires editing nginx-ingress with `hostNetwork: true` (see the sketch below)
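One way to make that edit, as a sketch; the namespace and DaemonSet name here are assumptions about the addon defaults:

```sh
# Run the ingress controller on the host network so it can accept
# traffic arriving on the workers' IPv6 addresses.
kubectl -n ingress patch daemonset nginx-ingress-controller \
  --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
```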
#### Google Cloud

* Add an IPv6 address and IPv6 forwarding rules for load balancing IPv6 Ingress ([#334](https://github.com/poseidon/typhoon/pull/334))
  * Add an `ingress_static_ipv6` output variable for use in AAAA DNS records (see the sketch below)
  * Allow serving IPv6 applications via Kubernetes Ingress
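The new output can feed an AAAA record, e.g. with the Google provider; the zone and record names below are hypothetical:

```tf
resource "google_dns_record_set" "ingress-aaaa" {
  managed_zone = "example-zone"
  name         = "ingress.example.com."
  type         = "AAAA"
  ttl          = 300

  # IPv6 address reserved for Ingress load balancing by the cluster module
  rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv6}"]
}
```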
#### Addons

* Configure Heapster to scrape Kubelets with bearer token auth ([#323](https://github.com/poseidon/typhoon/pull/323))
* Update Grafana from v5.3.1 to v5.3.2

## v1.12.1

* Kubernetes [v1.12.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1121)
* Update etcd from v3.3.9 to [v3.3.10](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md#v3310-2018-10-10)
* Update CoreDNS from 1.1.3 to [1.2.2](https://github.com/coredns/coredns/releases/tag/v1.2.2)
* Update Calico from v3.2.1 to [v3.2.3](https://docs.projectcalico.org/v3.2/releases/)
* Raise scheduler and controller-manager replicas to the larger of 2 or the number of controller nodes ([#312](https://github.com/poseidon/typhoon/pull/312))
  * Single-controller clusters continue to run 2 replicas as before
* Raise default CoreDNS replicas to the larger of 2 or the number of controller nodes ([#313](https://github.com/poseidon/typhoon/pull/313))
  * Add an AntiAffinity preferred rule to favor spreading CoreDNS pods
* Annotate control plane and addon containers to use the Docker runtime seccomp profile ([#319](https://github.com/poseidon/typhoon/pull/319))
  * Override the Kubernetes default behavior that starts containers with `seccomp=unconfined`

#### Azure

* Remove the `admin_password` field (disabled) since it is now optional
* Require `terraform-provider-azurerm` v1.16+ (action required)

#### Bare-Metal

* Add support for `cached_install` mode with Flatcar Linux ([#315](https://github.com/poseidon/typhoon/pull/315))

#### DigitalOcean

* Require `terraform-provider-digitalocean` v1.0+ (action required)

#### Addons

* Update nginx-ingress from v0.19.0 to v0.20.0
* Update Prometheus from v2.3.2 to v2.4.3
* Update Grafana from v5.2.4 to v5.3.1

## v1.11.3

* Kubernetes [v1.11.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113)
README.md (+37)

@@ -11,35 +11,38 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/cl/google-cloud/#preemption) (varies by platform)
-* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
+* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)
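The snippets customization mentioned in the features list accepts Container Linux Config fragments; a minimal sketch, with the module name and snippet contents illustrative rather than taken from this diff:

```tf
module "google-cloud-yavin" {
  # ...required cluster arguments elided

  # Container Linux Config snippets merge into the rendered Ignition
  controller_clc_snippets = [
    <<EOF
storage:
  files:
    - path: /opt/hello
      filesystem: root
      mode: 0644
      contents:
        inline: Hello from a snippet
EOF
  ]
}
```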
 ## Modules

-Typhoon provides a Terraform Module for each supported operating system and platform.
+Typhoon provides a Terraform Module for each supported operating system and platform. Container Linux is a mature and reliable choice. Also, Kinvolk's Flatcar Linux fork is selectable on AWS and bare-metal.

 | Platform      | Operating System | Terraform Module | Status |
 |---------------|------------------|------------------|--------|
 | AWS           | Container Linux  | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
-| AWS           | Fedora Atomic    | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
 | Azure         | Container Linux  | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
 | Bare-Metal    | Container Linux  | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
-| Bare-Metal    | Fedora Atomic    | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
 | Digital Ocean | Container Linux  | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
-| Digital Ocean | Fedora Atomic    | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
 | Google Cloud  | Container Linux  | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
-| Google Cloud  | Fedora Atomic    | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |

+The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
+
+Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS.
+
+| Platform      | Operating System | Terraform Module | Status |
+|---------------|------------------|------------------|--------|
+| AWS           | Fedora Atomic    | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
+| Bare-Metal    | Fedora Atomic    | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
+| Digital Ocean | Fedora Atomic    | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
+| Google Cloud  | Fedora Atomic    | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |

 ## Documentation

 * [Docs](https://typhoon.psdn.io)
 * Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
-* Tutorials for [AWS](cl/aws.md), [Azure](cl/azure.md), [Bare-Metal](cl/bare-metal.md), [Digital Ocean](cl/digital-ocean.md), and [Google-Cloud](cl/google-cloud.md)
+* Tutorials for [AWS](docs/cl/aws.md), [Azure](docs/cl/azure.md), [Bare-Metal](docs/cl/bare-metal.md), [Digital Ocean](docs/cl/digital-ocean.md), and [Google-Cloud](docs/cl/google-cloud.md)

 ## Usage

@@ -47,7 +50,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform

 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.12.3"

   providers = {
     google = "google.default"

@@ -87,10 +90,10 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud

 ```sh
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
-NAME                                        STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   Ready   6m   v1.11.3
-yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.11.3
-yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.11.3
+NAME                                        ROLES              STATUS  AGE  VERSION
+yavin-controller-0.c.example-com.internal   controller,master  Ready   6m   v1.12.3
+yavin-worker-jrbf.c.example-com.internal    node               Ready   5m   v1.12.3
+yavin-worker-mzdm.c.example-com.internal    node               Ready   5m   v1.12.3
 ```

 List the pods.

@@ -102,6 +105,7 @@ kube-system   calico-node-1cs8z                          2/2   Running   0   6m
 kube-system   calico-node-d1l5b                          2/2   Running   0   6m
 kube-system   calico-node-sp9ps                          2/2   Running   0   6m
 kube-system   coredns-1187388186-zj5dl                   1/1   Running   0   6m
+kube-system   coredns-1187388186-dkh3o                   1/1   Running   0   6m
 kube-system   kube-apiserver-zppls                       1/1   Running   0   6m
 kube-system   kube-controller-manager-3271970485-gh9kt   1/1   Running   0   6m
 kube-system   kube-controller-manager-3271970485-h90v8   1/1   Running   1   6m
@@ -111,6 +115,7 @@ kube-system   kube-proxy-njn47                           1/1   Running   0   6m
 kube-system   kube-scheduler-3895335239-5x87r            1/1   Running   0   6m
 kube-system   kube-scheduler-3895335239-bzrrt            1/1   Running   1   6m
 kube-system   pod-checkpointer-l6lrt                     1/1   Running   0   6m
+kube-system   pod-checkpointer-l6lrt-controller-0        1/1   Running   0   6m
 ```

 ## Non-Goals
@@ -15,6 +15,8 @@ spec:
     metadata:
       labels:
         app: container-linux-update-agent
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: update-agent

@@ -12,6 +12,8 @@ spec:
     metadata:
       labels:
         app: container-linux-update-operator
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
      containers:
       - name: update-operator
@@ -18,10 +18,12 @@ spec:
       labels:
         name: grafana
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: grafana
-        image: grafana/grafana:5.2.4
+        image: grafana/grafana:5.3.4
         env:
         - name: GF_SERVER_HTTP_PORT
           value: "8080"
@@ -5,7 +5,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: system:heapster
+  name: heapster
 subjects:
 - kind: ServiceAccount
   name: heapster
addons/heapster/cluster-role.yaml (new file, +30)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heapster
rules:
  - apiGroups:
      - ""
    resources:
      - events
      - namespaces
      - nodes
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/stats
    verbs:
      - get
@@ -23,7 +23,7 @@ spec:
         image: k8s.gcr.io/heapster-amd64:v1.5.4
         command:
         - /heapster
-        - --source=kubernetes.summary_api:''
+        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
         livenessProbe:
           httpGet:
             path: /healthz
@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,10 +17,12 @@ spec:
       labels:
         name: ingress-controller-public
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -102,7 +102,7 @@ data:
           regex: 'true'
         - action: labelmap
           regex: __meta_kubernetes_node_label_(.+)
-        - source_labels: [__meta_kubernetes_node_name]
+        - source_labels: [__meta_kubernetes_node_address_InternalIP]
          action: replace
           target_label: __address__
           replacement: '${1}:2381'
@@ -14,11 +14,13 @@ spec:
       labels:
         name: prometheus
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: prometheus
       containers:
       - name: prometheus
-        image: quay.io/prometheus/prometheus:v2.3.2
+        image: quay.io/prometheus/prometheus:v2.5.0
         args:
         - --web.listen-address=0.0.0.0:9090
         - --config.file=/etc/prometheus/prometheus.yaml
@@ -18,6 +18,8 @@ spec:
       labels:
         name: kube-state-metrics
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: kube-state-metrics
       containers:

@@ -17,6 +17,8 @@ spec:
       labels:
         name: node-exporter
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: node-exporter
       securityContext:
@@ -11,10 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,4 +11,5 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 }
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.9"
+          Environment="ETCD_IMAGE_TAG=v3.3.10"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -88,6 +88,7 @@ systemd:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -122,7 +123,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -142,17 +143,14 @@ storage:
           set -e
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
-          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
-            --volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
+            --volume assets,kind=host,source=/opt/bootkube/assets \
             --mount volume=assets,target=/assets \
             --volume bootstrap,kind=host,source=/etc/kubernetes \
             --mount volume=bootstrap,target=/etc/kubernetes \
             $${RKT_OPTS} \
-            $${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
+            quay.io/coreos/bootkube:v0.14.0 \
             --net=host \
             --dns=host \
             --exec=/bootkube -- start --asset-dir=/assets "$@"
@@ -24,12 +24,13 @@ resource "aws_instance" "controllers" {
   instance_type = "${var.controller_type}"

   ami       = "${local.ami_id}"
-  user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"
+  user_data = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"

   # storage
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }

   # network
@@ -38,12 +39,23 @@ resource "aws_instance" "controllers" {
   vpc_security_group_ids = ["${aws_security_group.controller.id}"]

   lifecycle {
-    ignore_changes = ["ami"]
+    ignore_changes = [
+      "ami",
+      "user_data",
+    ]
   }
 }

-# Controller Container Linux Config
-data "template_file" "controller_config" {
+# Controller Ignition configs
+data "ct_config" "controller-ignitions" {
+  count        = "${var.controller_count}"
+  content      = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  pretty_print = false
+  snippets     = ["${var.controller_clc_snippets}"]
+}
+
+# Controller Container Linux configs
+data "template_file" "controller-configs" {
   count = "${var.controller_count}"

   template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
@@ -73,10 +85,3 @@ data "template_file" "etcds" {
     dns_zone = "${var.dns_zone}"
   }
 }
-
-data "ct_config" "controller_ign" {
-  count        = "${var.controller_count}"
-  content      = "${element(data.template_file.controller_config.*.rendered, count.index)}"
-  pretty_print = false
-  snippets     = ["${var.controller_clc_snippets}"]
-}
@@ -104,27 +104,6 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
   self = true
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "aws_security_group_rule" "controller-kubelet-read" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.worker.id}"
-}
-
-resource "aws_security_group_rule" "controller-kubelet-read-self" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "controller-bgp" {
   security_group_id = "${aws_security_group.controller.id}"

@@ -300,27 +279,6 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
   self = true
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "aws_security_group_rule" "worker-kubelet-read" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.controller.id}"
-}
-
-resource "aws_security_group_rule" "worker-kubelet-read-self" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "worker-bgp" {
   security_group_id = "${aws_security_group.worker.id}"
@@ -59,6 +59,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (e.g. 100)"
+}
+
 variable "worker_price" {
   type    = "string"
   default = ""
@@ -128,3 +134,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
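For context, a cluster definition would opt in by passing the new variable to the module; a sketch, with the module name and ref illustrative:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.12.3"

  # report usage data to upstream projects (Calico); defaults to "false"
  enable_reporting = "true"

  # ...other required cluster variables elided
}
```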
@@ -60,6 +60,7 @@ systemd:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -92,7 +93,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -110,7 +111,7 @@ storage:
             --volume config,kind=host,source=/etc/kubernetes \
             --mount volume=config,target=/etc/kubernetes \
             --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.11.3 \
+            docker://k8s.gcr.io/hyperkube:v1.12.3 \
             --net=host \
             --dns=host \
             --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -52,6 +52,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (required for io1)"
+}
+
 variable "spot_price" {
   type    = "string"
   default = ""
@@ -46,12 +46,13 @@ resource "aws_launch_configuration" "worker" {
   spot_price        = "${var.spot_price}"
   enable_monitoring = false

-  user_data = "${data.ct_config.worker_ign.rendered}"
+  user_data = "${data.ct_config.worker-ignition.rendered}"

   # storage
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }

   # network
@@ -64,8 +65,15 @@ resource "aws_launch_configuration" "worker" {
   }
 }

-# Worker Container Linux Config
-data "template_file" "worker_config" {
+# Worker Ignition config
+data "ct_config" "worker-ignition" {
+  content      = "${data.template_file.worker-config.rendered}"
+  pretty_print = false
+  snippets     = ["${var.clc_snippets}"]
+}
+
+# Worker Container Linux config
+data "template_file" "worker-config" {
   template = "${file("${path.module}/cl/worker.yaml.tmpl")}"

   vars = {
@@ -75,9 +83,3 @@ data "template_file" "worker_config" {
     cluster_domain_suffix = "${var.cluster_domain_suffix}"
   }
 }
-
-data "ct_config" "worker_ign" {
-  content      = "${data.template_file.worker_config.rendered}"
-  pretty_print = false
-  snippets     = ["${var.clc_snippets}"]
-}
@@ -11,10 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [spot](https://typhoon.psdn.io/cl/aws/#spot) workers
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,6 +11,7 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"

   # Fedora
   trusted_certs_dir = "/etc/pki/tls/certs"
@@ -19,24 +19,9 @@ write_files:
       ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
       ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
       ETCD_PEER_CLIENT_CERT_AUTH=true
-  - path: /etc/systemd/system/cloud-metadata.service
-    content: |
-      [Unit]
-      Description=Cloud metadata agent
-      [Service]
-      Type=oneshot
-      Environment=OUTPUT=/run/metadata/cloud
-      ExecStart=/usr/bin/mkdir -p /run/metadata
-      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
-        --url http://169.254.169.254/latest/meta-data/local-ipv4\
-        --retry 10)" > $${OUTPUT}'
-      [Install]
-      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
-      Requires=cloud-metadata.service
-      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -65,6 +50,7 @@ write_files:
       --node-labels=node-role.kubernetes.io/master \
       --node-labels=node-role.kubernetes.io/controller="true" \
       --pod-manifest-path=/etc/kubernetes/manifests \
+      --read-only-port=0 \
       --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
       --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
   - path: /etc/kubernetes/kubeconfig
@@ -92,11 +78,10 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
   - [systemctl, start, --no-block, etcd.service]
-  - [systemctl, enable, cloud-metadata.service]
   - [systemctl, start, --no-block, kubelet.service]
 users:
   - default
@@ -30,6 +30,7 @@ resource "aws_instance" "controllers" {
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }

   # network
@@ -38,7 +39,10 @@ resource "aws_instance" "controllers" {
   vpc_security_group_ids = ["${aws_security_group.controller.id}"]

   lifecycle {
-    ignore_changes = ["ami"]
+    ignore_changes = [
+      "ami",
+      "user_data",
+    ]
   }
 }
@@ -104,27 +104,6 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
   self = true
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "aws_security_group_rule" "controller-kubelet-read" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.worker.id}"
-}
-
-resource "aws_security_group_rule" "controller-kubelet-read-self" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "controller-bgp" {
   security_group_id = "${aws_security_group.controller.id}"

@@ -300,27 +279,6 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
   self = true
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "aws_security_group_rule" "worker-kubelet-read" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.controller.id}"
-}
-
-resource "aws_security_group_rule" "worker-kubelet-read-self" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "worker-bgp" {
   security_group_id = "${aws_security_group.worker.id}"
@@ -53,6 +53,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (e.g. 100)"
+}
+
 variable "worker_price" {
   type    = "string"
   default = ""
@@ -110,3 +116,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
@@ -1,23 +1,8 @@
 #cloud-config
 write_files:
-  - path: /etc/systemd/system/cloud-metadata.service
-    content: |
-      [Unit]
-      Description=Cloud metadata agent
-      [Service]
-      Type=oneshot
-      Environment=OUTPUT=/run/metadata/cloud
-      ExecStart=/usr/bin/mkdir -p /run/metadata
-      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
-        --url http://169.254.169.254/latest/meta-data/local-ipv4\
-        --retry 10)" > $${OUTPUT}'
-      [Install]
-      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
-      Requires=cloud-metadata.service
-      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -43,6 +28,7 @@ write_files:
       --network-plugin=cni \
       --node-labels=node-role.kubernetes.io/node \
       --pod-manifest-path=/etc/kubernetes/manifests \
+      --read-only-port=0 \
       --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
   - path: /etc/kubernetes/kubeconfig
     permissions: '0644'
@@ -68,8 +54,7 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
-  - [systemctl, enable, cloud-metadata.service]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
   - [systemctl, start, --no-block, kubelet.service]
 users:
   - default
@@ -46,6 +46,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }

+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (required for io1)"
+}
+
 variable "spot_price" {
   type    = "string"
   default = ""
@@ -52,6 +52,7 @@ resource "aws_launch_configuration" "worker" {
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }

   # network
@@ -11,9 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -10,4 +10,5 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 }
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.9"
+          Environment="ETCD_IMAGE_TAG=v3.3.10"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -88,6 +88,7 @@ systemd:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -122,7 +123,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -142,17 +143,14 @@ storage:
           set -e
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
-          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
-            --volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
+            --volume assets,kind=host,source=/opt/bootkube/assets \
             --mount volume=assets,target=/assets \
             --volume bootstrap,kind=host,source=/etc/kubernetes \
             --mount volume=bootstrap,target=/etc/kubernetes \
             $${RKT_OPTS} \
-            $${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
+            quay.io/coreos/bootkube:v0.14.0 \
             --net=host \
             --dns=host \
             --exec=/bootkube -- start --asset-dir=/assets "$@"
@@ -85,6 +85,7 @@ resource "azurerm_virtual_machine" "controllers" {
   lifecycle {
     ignore_changes = [
       "storage_os_disk",
+      "os_profile",
     ]
   }
 }
@@ -105,12 +106,16 @@ resource "azurerm_network_interface" "controllers" {

     # public IPv4
     public_ip_address_id = "${element(azurerm_public_ip.controllers.*.id, count.index)}"
-
-    # backend address pool to which the NIC should be added
-    load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.controller.id}"]
   }
 }

+# Add controller NICs to the controller backend address pool
+resource "azurerm_network_interface_backend_address_pool_association" "controllers" {
+  network_interface_id    = "${azurerm_network_interface.controllers.id}"
+  ip_configuration_name   = "ip0"
+  backend_address_pool_id = "${azurerm_lb_backend_address_pool.controller.id}"
+}
+
 # Controller public IPv4 addresses
 resource "azurerm_public_ip" "controllers" {
   count = "${var.controller_count}"
azure/container-linux/kubernetes/require.tf (new file, +25)

# Terraform version and plugin versions

terraform {
  required_version = ">= 0.11.0"
}

provider "azurerm" {
  version = "~> 1.19"
}

provider "local" {
  version = "~> 1.0"
}

provider "null" {
  version = "~> 1.0"
}

provider "template" {
  version = "~> 1.0"
}

provider "tls" {
  version = "~> 1.0"
}
@@ -117,22 +117,6 @@ resource "azurerm_network_security_rule" "controller-kubelet" {
   destination_address_prefix = "${azurerm_subnet.controller.address_prefix}"
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "azurerm_network_security_rule" "controller-kubelet-read" {
-  resource_group_name = "${azurerm_resource_group.cluster.name}"
-
-  name                        = "allow-kubelet-read"
-  network_security_group_name = "${azurerm_network_security_group.controller.name}"
-  priority                    = "2035"
-  access                      = "Allow"
-  direction                   = "Inbound"
-  protocol                    = "Tcp"
-  source_port_range           = "*"
-  destination_port_range      = "10255"
-  source_address_prefix       = "${azurerm_subnet.worker.address_prefix}"
-  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
-}
-
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
 # https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules

@@ -269,22 +253,6 @@ resource "azurerm_network_security_rule" "worker-kubelet" {
   destination_address_prefix = "${azurerm_subnet.worker.address_prefix}"
 }

-# Allow heapster / metrics-server to scrape kubelet read-only
-resource "azurerm_network_security_rule" "worker-kubelet-read" {
-  resource_group_name = "${azurerm_resource_group.cluster.name}"
-
-  name                        = "allow-kubelet-read"
-  network_security_group_name = "${azurerm_network_security_group.worker.name}"
-  priority                    = "2030"
-  access                      = "Allow"
-  direction                   = "Inbound"
-  protocol                    = "Tcp"
-  source_port_range           = "*"
-  destination_port_range      = "10255"
-  source_address_prefix       = "${azurerm_subnet.worker.address_prefix}"
-  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
-}
-
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
 # https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules
@@ -115,3 +115,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
@@ -60,6 +60,7 @@ systemd:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -92,7 +93,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -110,7 +111,7 @@ storage:
             --volume config,kind=host,source=/etc/kubernetes \
             --mount volume=config,target=/etc/kubernetes \
             --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.11.3 \
+            docker://k8s.gcr.io/hyperkube:v1.12.3 \
             --net=host \
             --dns=host \
             --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')
@@ -1 +0,0 @@
@@ -37,10 +37,7 @@ resource "azurerm_virtual_machine_scale_set" "workers" {
   os_profile {
     computer_name_prefix = "${var.name}-worker-"
     admin_username       = "core"
-
-    # Required by Azure, but password auth is disabled below
-    admin_password = ""
-    custom_data    = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
+    custom_data          = "${data.ct_config.worker-ignition.rendered}"
   }

   # Azure mandates setting an ssh_key, even though Ignition custom_data handles it too
@@ -61,6 +58,7 @@ resource "azurerm_virtual_machine_scale_set" "workers" {

   ip_configuration {
     name      = "ip0"
+    primary   = true
     subnet_id = "${var.subnet_id}"

     # backend address pool to which the NIC should be added
@@ -69,8 +67,9 @@ resource "azurerm_virtual_machine_scale_set" "workers" {
   }

   # lifecycle
-  priority            = "${var.priority}"
   upgrade_policy_mode = "Manual"
+  priority            = "${var.priority}"
+  eviction_policy     = "Delete"
 }

 # Scale up or down to maintain desired number, tolerating deallocations.
@@ -96,14 +95,14 @@ resource "azurerm_autoscale_setting" "workers" {
 }

 # Worker Ignition configs
-data "ct_config" "worker-ignitions" {
-  content = "${data.template_file.worker-configs.rendered}"
+data "ct_config" "worker-ignition" {
+  content      = "${data.template_file.worker-config.rendered}"
   pretty_print = false
   snippets     = ["${var.clc_snippets}"]
 }

 # Worker Container Linux configs
-data "template_file" "worker-configs" {
+data "template_file" "worker-config" {
   template = "${file("${path.module}/cl/worker.yaml.tmpl")}"

   vars = {
@@ -11,9 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

 ## Docs
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

   cluster_name = "${var.cluster_name}"
   api_servers  = ["${var.k8s_domain_name}"]
@@ -12,4 +12,5 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 }
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.9"
+          Environment="ETCD_IMAGE_TAG=v3.3.10"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@@ -70,6 +70,10 @@ systemd:
           --mount volume=opt-cni-bin,target=/opt/cni/bin \
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log \
+          --volume iscsiconf,kind=host,source=/etc/iscsi/ \
+          --mount volume=iscsiconf,target=/etc/iscsi/ \
+          --volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
+          --mount volume=iscsiadm,target=/sbin/iscsiadm \
           --insecure-options=image"
         ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -97,6 +101,7 @@ systemd:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -123,7 +128,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/hostname
       filesystem: root
       mode: 0644
@@ -149,17 +154,14 @@ storage:
           set -e
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
-          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
-            --volume assets,kind=host,source=$BOOTKUBE_ASSETS \
+            --volume assets,kind=host,source=/opt/bootkube/assets \
             --mount volume=assets,target=/assets \
             --volume bootstrap,kind=host,source=/etc/kubernetes \
             --mount volume=bootstrap,target=/etc/kubernetes \
             $$RKT_OPTS \
-            $${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
+            quay.io/coreos/bootkube:v0.14.0 \
             --net=host \
             --dns=host \
             --exec=/bootkube -- start --asset-dir=/assets "$@"
@@ -45,6 +45,10 @@ systemd:
           --mount volume=opt-cni-bin,target=/opt/cni/bin \
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log \
+          --volume iscsiconf,kind=host,source=/etc/iscsi/ \
+          --mount volume=iscsiconf,target=/etc/iscsi/ \
+          --volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
+          --mount volume=iscsiadm,target=/sbin/iscsiadm \
           --insecure-options=image"
         ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -69,6 +73,7 @@ systemd:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -84,7 +89,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/hostname
       filesystem: root
       mode: 0644
@@ -3,7 +3,7 @@ resource "matchbox_group" "install" {
 
   name = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
 
-  profile = "${local.flavor == "flatcar" ? element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
+  profile = "${local.flavor == "flatcar" ? var.cached_install == "true" ? element(matchbox_profile.cached-flatcar-linux-install.*.name, count.index) : element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
 
   selector {
     mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
@@ -49,7 +49,7 @@ data "template_file" "container-linux-install-configs" {
 }
 
 // Container Linux Install profile (from matchbox /assets cache)
-// Note: Admin must have downloaded os_version into matchbox assets.
+// Note: Admin must have downloaded os_version into matchbox assets/coreos.
 resource "matchbox_profile" "cached-container-linux-install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"
   name  = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
@@ -87,7 +87,7 @@ data "template_file" "cached-container-linux-install-configs" {
     ssh_authorized_key = "${var.ssh_authorized_key}"
 
     # profile uses -b baseurl to install from matchbox cache
-    baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
+    baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/${local.flavor}"
   }
 }
 
@@ -114,6 +114,30 @@ resource "matchbox_profile" "flatcar-install" {
   container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
 }
 
+// Flatcar Linux Install profile (from matchbox /assets cache)
+// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
+resource "matchbox_profile" "cached-flatcar-linux-install" {
+  count = "${length(var.controller_names) + length(var.worker_names)}"
+  name  = "${format("%s-cached-flatcar-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
+
+  kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"
+
+  initrd = [
+    "/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
+  ]
+
+  args = [
+    "initrd=flatcar_production_pxe_image.cpio.gz",
+    "flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
+    "flatcar.first_boot=yes",
+    "console=tty0",
+    "console=ttyS0",
+    "${var.kernel_args}",
+  ]
+
+  container_linux_config = "${element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)}"
+}
+
 // Kubernetes Controller profiles
 resource "matchbox_profile" "controllers" {
   count = "${length(var.controller_names)}"
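With the profile above, opting into a cached Flatcar Linux install is a matter of cluster variables; a sketch, assuming the bare-metal module's documented inputs (the channel value, version, keys, and MACs are illustrative):

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.12.3"

  # install: PXE boot and install Flatcar Linux from matchbox /assets
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  os_channel             = "flatcar-stable"
  os_version             = "1855.4.0"
  cached_install         = "true"

  # cluster
  cluster_name       = "mercury"
  k8s_domain_name    = "node1.example.com"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/mercury"

  # machines
  controller_names   = ["node1"]
  controller_macs    = ["52:54:00:a1:9c:ae"]
  controller_domains = ["node1.example.com"]
  worker_names       = ["node2"]
  worker_macs        = ["52:54:00:b2:2f:86"]
  worker_domains     = ["node2.example.com"]
}
```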
@@ -141,3 +141,9 @@ variable "kernel_args" {
   type    = "list"
   default = []
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
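Reporting stays off unless explicitly enabled. A cluster that wants to send usage data back to Calico's upstream would set the flag on the module; a minimal sketch (other required cluster settings elided):

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.12.3"

  # ...cluster, install, and machine settings as usual...

  # opt in to Calico/Felix usage reporting (defaults to "false")
  enable_reporting = "true"
}
```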
@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${var.k8s_domain_name}"]
@@ -11,6 +11,7 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 
   # Fedora
   trusted_certs_dir = "/etc/pki/tls/certs"
@@ -51,6 +51,7 @@ write_files:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
     - path: /etc/systemd/system/kubelet.path
@@ -83,9 +84,9 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
@@ -29,6 +29,7 @@ write_files:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
     - path: /etc/systemd/system/kubelet.path
       content: |
@@ -59,7 +60,7 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:
@@ -110,3 +110,9 @@ variable "kernel_args" {
   type    = "list"
   default = []
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
@@ -11,10 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
-* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
+* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
+* Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Docs
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,4 +11,5 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 }
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.9"
+          Environment="ETCD_IMAGE_TAG=v3.3.10"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -56,12 +56,9 @@ systemd:
         contents: |
           [Unit]
           Description=Kubelet via Hyperkube
-          Requires=coreos-metadata.service
-          After=coreos-metadata.service
           Wants=rpc-statd.service
           [Service]
           EnvironmentFile=/etc/kubernetes/kubelet.env
-          EnvironmentFile=/run/metadata/coreos
           Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
           --volume=resolv,kind=host,source=/etc/resolv.conf \
           --mount volume=resolv,target=/etc/resolv.conf \
@@ -93,13 +90,13 @@ systemd:
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --exit-on-lock-contention \
-          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -128,7 +125,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -148,17 +145,14 @@ storage:
           set -e
           # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
-          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
-            --volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
+            --volume assets,kind=host,source=/opt/bootkube/assets \
             --mount volume=assets,target=/assets \
             --volume bootstrap,kind=host,source=/etc/kubernetes \
             --mount volume=bootstrap,target=/etc/kubernetes \
             $${RKT_OPTS} \
-            $${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
+            quay.io/coreos/bootkube:v0.14.0 \
             --net=host \
             --dns=host \
             --exec=/bootkube -- start --asset-dir=/assets "$@"
@@ -31,12 +31,9 @@ systemd:
         contents: |
           [Unit]
           Description=Kubelet via Hyperkube
-          Requires=coreos-metadata.service
-          After=coreos-metadata.service
           Wants=rpc-statd.service
           [Service]
           EnvironmentFile=/etc/kubernetes/kubelet.env
-          EnvironmentFile=/run/metadata/coreos
           Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
           --volume=resolv,kind=host,source=/etc/resolv.conf \
           --mount volume=resolv,target=/etc/resolv.conf \
@@ -66,12 +63,12 @@ systemd:
           --cluster_domain=${cluster_domain_suffix} \
           --cni-conf-dir=/etc/kubernetes/cni/net.d \
           --exit-on-lock-contention \
-          --hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -98,7 +95,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.3
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -116,7 +113,7 @@ storage:
             --volume config,kind=host,source=/etc/kubernetes \
             --mount volume=config,target=/etc/kubernetes \
             --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.11.3 \
+            docker://k8s.gcr.io/hyperkube:v1.12.3 \
             --net=host \
             --dns=host \
             --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -44,12 +44,18 @@ resource "digitalocean_droplet" "controllers" {
   ipv6               = true
   private_networking = true
 
-  user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"
+  user_data = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
   ssh_keys  = ["${var.ssh_fingerprints}"]
 
   tags = [
     "${digitalocean_tag.controllers.id}",
   ]
+
+  lifecycle {
+    ignore_changes = [
+      "user_data",
+    ]
+  }
 }
 
 # Tag to label controllers
@@ -57,8 +63,16 @@ resource "digitalocean_tag" "controllers" {
   name = "${var.cluster_name}-controller"
 }
 
-# Controller Container Linux Config
-data "template_file" "controller_config" {
+# Controller Ignition configs
+data "ct_config" "controller-ignitions" {
+  count        = "${var.controller_count}"
+  content      = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  pretty_print = false
+  snippets     = ["${var.controller_clc_snippets}"]
+}
+
+# Controller Container Linux configs
+data "template_file" "controller-configs" {
   count = "${var.controller_count}"
 
   template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
@@ -85,11 +99,3 @@ data "template_file" "etcds" {
     dns_zone = "${var.dns_zone}"
   }
 }
-
-data "ct_config" "controller_ign" {
-  count        = "${var.controller_count}"
-  content      = "${element(data.template_file.controller_config.*.rendered, count.index)}"
-  pretty_print = false
-
-  snippets = ["${var.controller_clc_snippets}"]
-}
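Since each controller's Ignition config is now rendered with `snippets`, operators can layer extra Container Linux config onto controllers; a sketch following the customization docs (the snippet content and other settings are illustrative):

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.12.3"

  # ...other cluster settings elided...

  # extra Container Linux Config merged into each controller
  controller_clc_snippets = [
    <<EOF
storage:
  files:
    - path: /etc/motd
      filesystem: root
      mode: 0644
      contents:
        inline: |
          Welcome to a Typhoon controller.
EOF
  ]
}
```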
@@ -3,7 +3,8 @@ output "controllers_dns" {
 }
 
 output "workers_dns" {
-  value = "${digitalocean_record.workers.0.fqdn}"
+  # Multiple A and AAAA records with the same FQDN
+  value = "${digitalocean_record.workers-record-a.0.fqdn}"
 }
 
 output "controllers_ipv4" {
@@ -5,7 +5,7 @@ terraform {
 }
 
 provider "digitalocean" {
-  version = "~> 0.1.2"
+  version = "~> 1.0"
 }
 
 provider "local" {
@@ -92,3 +92,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
@@ -1,5 +1,5 @@
 # Worker DNS records
-resource "digitalocean_record" "workers" {
+resource "digitalocean_record" "workers-record-a" {
   count = "${var.worker_count}"
 
   # DNS zone where record should be created
@@ -11,6 +11,18 @@ resource "digitalocean_record" "workers" {
   value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
 }
 
+resource "digitalocean_record" "workers-record-aaaa" {
+  count = "${var.worker_count}"
+
+  # DNS zone where record should be created
+  domain = "${var.dns_zone}"
+
+  name  = "${var.cluster_name}-workers"
+  type  = "AAAA"
+  ttl   = 300
+  value = "${element(digitalocean_droplet.workers.*.ipv6_address, count.index)}"
+}
+
 # Worker droplet instances
 resource "digitalocean_droplet" "workers" {
   count = "${var.worker_count}"
@@ -25,12 +37,16 @@ resource "digitalocean_droplet" "workers" {
   ipv6               = true
   private_networking = true
 
-  user_data = "${data.ct_config.worker_ign.rendered}"
+  user_data = "${data.ct_config.worker-ignition.rendered}"
   ssh_keys  = ["${var.ssh_fingerprints}"]
 
   tags = [
     "${digitalocean_tag.workers.id}",
   ]
+
+  lifecycle {
+    create_before_destroy = true
+  }
 }
 
 # Tag to label workers
@@ -38,8 +54,15 @@ resource "digitalocean_tag" "workers" {
   name = "${var.cluster_name}-worker"
 }
 
-# Worker Container Linux Config
-data "template_file" "worker_config" {
+# Worker Ignition config
+data "ct_config" "worker-ignition" {
+  content      = "${data.template_file.worker-config.rendered}"
+  pretty_print = false
+  snippets     = ["${var.worker_clc_snippets}"]
+}
+
+# Worker Container Linux config
+data "template_file" "worker-config" {
   template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
 
   vars = {
@@ -47,9 +70,3 @@ data "template_file" "worker_config" {
     cluster_domain_suffix = "${var.cluster_domain_suffix}"
   }
 }
-
-data "ct_config" "worker_ign" {
-  content      = "${data.template_file.worker_config.rendered}"
-  pretty_print = false
-  snippets     = ["${var.worker_clc_snippets}"]
-}
@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
-* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
+* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Docs
 
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=5378e166ef7ec44e69fbc2d879dbf048a45a0d09"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,6 +11,7 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 
   # Fedora
   trusted_certs_dir = "/etc/pki/tls/certs"
@@ -19,24 +19,9 @@ write_files:
       ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
       ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
       ETCD_PEER_CLIENT_CERT_AUTH=true
-  - path: /etc/systemd/system/cloud-metadata.service
-    content: |
-      [Unit]
-      Description=Cloud metadata agent
-      [Service]
-      Type=oneshot
-      Environment=OUTPUT=/run/metadata/cloud
-      ExecStart=/usr/bin/mkdir -p /run/metadata
-      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
-        --url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
-        --retry 10)" > $${OUTPUT}'
-      [Install]
-      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
-      Requires=cloud-metadata.service
-      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -65,6 +50,7 @@ write_files:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
     - path: /etc/systemd/system/kubelet.path
@@ -89,11 +75,10 @@ bootcmd:
   - [modprobe, ip_vs]
 runcmd:
   - [systemctl, daemon-reload]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
   - [systemctl, start, --no-block, etcd.service]
-  - [systemctl, enable, cloud-metadata.service]
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:
@@ -1,23 +1,8 @@
 #cloud-config
 write_files:
-  - path: /etc/systemd/system/cloud-metadata.service
-    content: |
-      [Unit]
-      Description=Cloud metadata agent
-      [Service]
-      Type=oneshot
-      Environment=OUTPUT=/run/metadata/cloud
-      ExecStart=/usr/bin/mkdir -p /run/metadata
-      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
-        --url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
-        --retry 10)" > $${OUTPUT}'
-      [Install]
-      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
-      Requires=cloud-metadata.service
-      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -43,6 +28,7 @@ write_files:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
     - path: /etc/systemd/system/kubelet.path
       content: |
@@ -65,8 +51,7 @@ bootcmd:
   - [modprobe, ip_vs]
 runcmd:
   - [systemctl, daemon-reload]
-  - [systemctl, enable, cloud-metadata.service]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.3"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:
@@ -50,6 +50,12 @@ resource "digitalocean_droplet" "controllers" {
   tags = [
     "${digitalocean_tag.controllers.id}",
   ]
+
+  lifecycle {
+    ignore_changes = [
+      "user_data",
+    ]
+  }
 }
 
 # Tag to label controllers
@@ -3,7 +3,8 @@ output "controllers_dns" {
 }
 
 output "workers_dns" {
-  value = "${digitalocean_record.workers.0.fqdn}"
+  # Multiple A and AAAA records with the same FQDN
+  value = "${digitalocean_record.workers-record-a.0.fqdn}"
 }
 
 output "controllers_ipv4" {
@@ -5,7 +5,7 @@ terraform {
 }
 
 provider "digitalocean" {
-  version = "~> 0.1.2"
+  version = "~> 1.0"
 }
 
 provider "local" {
@@ -85,3 +85,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}
@@ -1,5 +1,5 @@
 # Worker DNS records
-resource "digitalocean_record" "workers" {
+resource "digitalocean_record" "workers-record-a" {
   count = "${var.worker_count}"
 
   # DNS zone where record should be created
@@ -11,6 +11,18 @@ resource "digitalocean_record" "workers" {
   value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
 }
 
+resource "digitalocean_record" "workers-record-aaaa" {
+  count = "${var.worker_count}"
+
+  # DNS zone where record should be created
+  domain = "${var.dns_zone}"
+
+  name  = "${var.cluster_name}-workers"
+  type  = "AAAA"
+  ttl   = 300
+  value = "${element(digitalocean_droplet.workers.*.ipv6_address, count.index)}"
+}
+
 # Worker droplet instances
 resource "digitalocean_droplet" "workers" {
   count = "${var.worker_count}"
@@ -31,6 +43,10 @@ resource "digitalocean_droplet" "workers" {
   tags = [
     "${digitalocean_tag.workers.id}",
   ]
+
+  lifecycle {
+    create_before_destroy = true
+  }
 }
 
 # Tag to label workers
@@ -4,7 +4,7 @@ Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP tr
 
 ## AWS
 
-On AWS, a network load balancer (NLB) distributes traffic across a target group of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health checks ensure only workers with a healthy Ingress controller receive traffic.
+On AWS, a network load balancer (NLB) distributes TCP traffic across two target groups (ports 80 and 443) of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health checks ensure only workers with a healthy Ingress controller receive traffic.
 
 Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
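Each application's DNS record can then point at the NLB's DNS name; a sketch using Terraform, assuming the AWS module exports `ingress_dns_name` (zone and hostname are illustrative):

```tf
resource "aws_route53_record" "some-application" {
  zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}"

  # DNS record for the app, pointed at the cluster's ingress NLB
  name    = "app.example.com."
  type    = "CNAME"
  ttl     = 300
  records = ["${module.aws-tempest.ingress_dns_name}"]
}
```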
@@ -37,7 +37,7 @@ resource "google_dns_record_set" "some-application" {
 
 ## Azure
 
-On Azure, a load balancer distributes traffic across a backend pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
+On Azure, a load balancer distributes traffic across a backend address pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
 
 Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
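As on other platforms, per-app DNS records then point at the load balancer; a sketch, assuming the Azure module exports `ingress_static_ipv4` (resource group, zone, and names are illustrative):

```tf
resource "azurerm_dns_a_record" "some-application" {
  resource_group_name = "shared-dns"

  # DNS A record for the app, pointed at the cluster's ingress IPv4 address
  zone_name = "example.com"
  name      = "app"
  ttl       = 300
  records   = ["${module.azure-ramius.ingress_static_ipv4}"]
}
```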
@@ -101,7 +101,7 @@ resource "google_dns_record_set" "some-application" {
 
 ## Digital Ocean
 
-On Digital Ocean, a DNS A record (e.g. `nemo-workers.example.com`) resolves to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.
+On Digital Ocean, DNS A and AAAA records (e.g. FQDN `nemo-workers.example.com`) resolve to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.
 
 Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, default backend, and namespace.
 
@@ -124,11 +124,14 @@ resource "google_dns_record_set" "some-application" {
 }
 ```
 
+!!! note
+    Hosting IPv6 apps is possible, but requires editing the nginx-ingress addon to use `hostNetwork: true`.
+
 [^1]: Digital Ocean does offer load balancers. We've opted not to use them to keep the Digital Ocean setup simple and cheap for developers.
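Per-application DNS can then alias the workers' round-robin FQDN; a sketch using Terraform (domain and names are illustrative):

```tf
resource "digitalocean_record" "some-application" {
  # DNS zone where the app record should be created
  domain = "example.com"

  # CNAME alias to the round-robin worker records
  name  = "app"
  type  = "CNAME"
  ttl   = 300
  value = "nemo-workers.example.com."
}
```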
 ## Google Cloud
 
-On Google Cloud, a TCP Proxy load balancer distributes traffic across a backend service of worker nodes running an Ingress controller deployment. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
+On Google Cloud, a TCP Proxy load balancer distributes IPv4 and IPv6 TCP traffic across a backend service of worker nodes running an Ingress controller deployment. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
 
 Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
 
@@ -136,7 +139,7 @@ Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, de
 kubectl apply -R -f addons/nginx-ingress/google-cloud
 ```
 
-For each application, add a DNS record resolving to the load balancer's IPv4 address.
+For each application, add DNS A records resolving to the load balancer's IPv4 address and DNS AAAA records resolving to the load balancer's IPv6 address.
 
 ```
 app1.example.com -> 11.22.33.44
@@ -144,10 +147,10 @@ app2.example.com -> 11.22.33.44
 app3.example.com -> 11.22.33.44
 ```
 
-Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:
+Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's outputs `ingress_static_ipv4` and `ingress_static_ipv6`. For example, you might use Terraform to manage a Google Cloud DNS record:
 
 ```tf
-resource "google_dns_record_set" "some-application" {
+resource "google_dns_record_set" "app-record-a" {
   # DNS zone name
   managed_zone = "example-zone"
 
@@ -157,4 +160,15 @@ resource "google_dns_record_set" "some-application" {
   ttl     = 300
   rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
 }
+
+resource "google_dns_record_set" "app-record-aaaa" {
+  # DNS zone name
+  managed_zone = "example-zone"
+
+  # DNS record
+  name    = "app.example.com."
+  type    = "AAAA"
+  ttl     = 300
+  rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv6}"]
+}
 ```
@@ -47,4 +47,4 @@ Visit [127.0.0.1:9090](http://127.0.0.1:9090) to query [expressions](http://127.
 <br/>
 
 
-Use [Grafana](/addons/grafana.md) to view or build dashboards that use Prometheus as the datasource.
+Use [Grafana](/addons/grafana/) to view or build dashboards that use Prometheus as the datasource.
@@ -16,7 +16,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a wo
 
 ```tf
 module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.12.3"
 
   providers = {
     aws = "aws.default"
@@ -82,7 +82,7 @@ Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define
 
 ```tf
 module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.12.3"
 
   providers = {
     azurerm = "azurerm.default"
@@ -152,7 +152,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
 
 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.12.3"
 
   providers = {
     google = "google.default"
@@ -187,11 +187,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
 ```
 $ kubectl get nodes
 NAME                                           STATUS   AGE   VERSION
-yavin-controller-0.c.example-com.internal      Ready    6m    v1.11.3
-yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.11.3
-yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.11.3
-yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.11.3
-yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.11.3
+yavin-controller-0.c.example-com.internal      Ready    6m    v1.12.3
+yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.12.3
+yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.12.3
+yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.12.3
+yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.12.3
 ```
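The `yavin-16x` nodes in the output above come from a second pool. A sketch of such a pool, assuming the Google Cloud workers module's documented inputs (names and values are illustrative):

```tf
module "yavin-16x-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.12.3"

  providers = {
    google = "google.default"
  }

  # Google Cloud settings from the existing "yavin" cluster
  region       = "us-central1"
  network      = "${module.google-cloud-yavin.network_name}"
  cluster_name = "yavin"
  name         = "yavin-16x"

  # configuration
  kubeconfig         = "${module.google-cloud-yavin.kubeconfig}"
  ssh_authorized_key = "${var.ssh_authorized_key}"

  # larger, preemptible machines for batch workloads
  count        = 2
  machine_type = "n1-standard-16"
  preemptible  = true
}
```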
### Variables

docs/architecture/aws.md (new file)
@@ -0,0 +1,13 @@
+# AWS
+
+## IPv6
+
+Status of IPv6 on Typhoon AWS clusters.
+
+| IPv6 Feature            | Supported |
+|-------------------------|-----------|
+| Node IPv6 address       | Yes       |
+| Node Outbound IPv6      | Yes       |
+| Kubernetes Ingress IPv6 | No        |
+
+* AWS Network Load Balancers do not support `dualstack`.

docs/architecture/azure.md (new file)
@@ -0,0 +1,13 @@
+# Azure
+
+## IPv6
+
+Status of IPv6 on Typhoon Azure clusters.
+
+| IPv6 Feature            | Supported |
+|-------------------------|-----------|
+| Node IPv6 address       | No        |
+| Node Outbound IPv6      | No        |
+| Kubernetes Ingress IPv6 | No        |
+
+* Azure does not allow reserving a static IPv6 address

docs/architecture/bare-metal.md (new file)
@@ -0,0 +1,13 @@
+# Bare-Metal
+
+## IPv6
+
+Status of IPv6 on Typhoon bare-metal clusters.
+
+| IPv6 Feature            | Supported |
+|-------------------------|-----------|
+| Node IPv6 address       | Yes       |
+| Node Outbound IPv6      | Yes       |
+| Kubernetes Ingress IPv6 | Possible  |
+
+IPv6 support depends upon the bare-metal network environment.
@@ -69,7 +69,7 @@ Module versioning ensures `terraform get --update` only fetches the desired vers
 
 Maintain Terraform configs for "live" infrastructure in a versioned repository. Seek to organize configs to reflect resources that should be managed together in a `terraform apply` invocation.
 
-You may choose to organize resources all together, by team, by project, or some other scheme. Here's an example that manages four clusters together:
+You may choose to organize resources all together, by team, by project, or some other scheme. Here's an example that manages clusters together:
 
 ```sh
 .git/
docs/architecture/digitalocean.md (new file)
@@ -0,0 +1,11 @@
+# DigitalOcean
+
+## IPv6
+
+Status of IPv6 on Typhoon DigitalOcean clusters.
+
+| IPv6 Feature            | Supported |
+|-------------------------|-----------|
+| Node IPv6 address       | Yes       |
+| Node Outbound IPv6      | Yes       |
+| Kubernetes Ingress IPv6 | Possible  |
docs/architecture/google-cloud.md (new file)
@@ -0,0 +1,11 @@
+# Google Cloud
+
+## IPv6
+
+Status of IPv6 on Typhoon Google Cloud clusters.
+
+| IPv6 Feature            | Supported |
+|-------------------------|-----------|
+| Node IPv6 address       | No        |
+| Node Outbound IPv6      | No        |
+| Kubernetes Ingress IPv6 | Yes       |
@@ -3,7 +3,7 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll create a Kubernetes v1.11.3 cluster on AWS with Fedora Atomic.
+In this tutorial, we'll create a Kubernetes v1.12.3 cluster on AWS with Fedora Atomic.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.
 
@@ -24,7 +24,7 @@ $ terraform version
 Terraform v0.11.7
 ```
 
-Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
+Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
 
 ```
 cd infra/clusters
@@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
 
 ```tf
 module "aws-tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.12.3"
 
   providers = {
     aws = "aws.default"
@@ -155,10 +155,10 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
 ```
 $ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
 $ kubectl get nodes
-NAME             STATUS   AGE   VERSION
-ip-10-0-12-221   Ready    34m   v1.11.3
-ip-10-0-19-112   Ready    34m   v1.11.3
-ip-10-0-4-22     Ready    34m   v1.11.3
+NAME            STATUS   ROLES               AGE   VERSION
+ip-10-0-3-155   Ready    controller,master   10m   v1.12.3
+ip-10-0-26-65   Ready    node                10m   v1.12.3
+ip-10-0-41-21   Ready    node                10m   v1.12.3
 ```
 
 List the pods.
@@ -170,6 +170,7 @@ kube-system calico-node-1m5bf 2/2 Running 0
 kube-system   calico-node-7jmr1                          2/2   Running   0   34m
 kube-system   calico-node-bknc8                          2/2   Running   0   34m
 kube-system   coredns-1187388186-wx1lg                   1/1   Running   0   34m
+kube-system   coredns-1187388186-qjnvp                   1/1   Running   0   34m
 kube-system   kube-apiserver-4mjbk                       1/1   Running   0   34m
 kube-system   kube-controller-manager-3597210155-j2jbt   1/1   Running   1   34m
 kube-system   kube-controller-manager-3597210155-j7g7x   1/1   Running   0   34m
@@ -179,12 +180,12 @@ kube-system kube-proxy-sbbsh 1/1 Running 0
 kube-system   kube-scheduler-3359497473-5plhf            1/1   Running   0   34m
 kube-system   kube-scheduler-3359497473-r7zg7            1/1   Running   1   34m
 kube-system   pod-checkpointer-4kxtl                     1/1   Running   0   34m
-kube-system   pod-checkpointer-4kxtl-ip-10-0-12-221      1/1   Running   0   33m
+kube-system   pod-checkpointer-4kxtl-ip-10-0-3-155       1/1   Running   0   33m
 ```
 
 ## Going Further
 
-Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
+Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).
 
 ## Variables
 
@@ -227,6 +228,7 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
 | worker_type | EC2 instance type for workers | "t2.small" | See below |
 | disk_size | Size of the EBS volume in GB | "40" | "100" |
 | disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
+| disk_iops | IOPS of the EBS volume | "0" (i.e. auto) | "400" |
 | worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
 | networking | Choice of networking provider | "calico" | "calico" or "flannel" |
 | network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
@@ -3,7 +3,7 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll network boot and provision a Kubernetes v1.11.3 cluster on bare-metal with Fedora Atomic.
+In this tutorial, we'll network boot and provision a Kubernetes v1.12.3 cluster on bare-metal with Fedora Atomic.
 
 First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
 
@@ -95,7 +95,7 @@ For networks already supporting iPXE clients, you can add a `default.ipxe` confi
 chain http://matchbox.foo:8080/boot.ipxe
 ```
 
-For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware.md#ubiquiti) itself to chainload machines to iPXE and Matchbox.
+For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware/#ubiquiti) itself to chainload machines to iPXE and Matchbox.
 
 For a small lab, you may wish to checkout the [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image and [copy-paste examples](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq).
 
@@ -190,7 +190,7 @@ providers {
 }
 ```
 
-Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
+Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
 
 ```
 cd infra/clusters
@@ -235,7 +235,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete
 
 ```tf
 module "bare-metal-mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.12.3"
 
   providers = {
     local = "local.default"
@@ -360,10 +360,10 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
 ```
 $ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
 $ kubectl get nodes
-NAME                STATUS   AGE   VERSION
-node1.example.com   Ready    11m   v1.11.3
-node2.example.com   Ready    11m   v1.11.3
-node3.example.com   Ready    11m   v1.11.3
+NAME                STATUS   ROLES               AGE   VERSION
+node1.example.com   Ready    controller,master   10m   v1.12.3
+node2.example.com   Ready    node                10m   v1.12.3
+node3.example.com   Ready    node                10m   v1.12.3
 ```
 
 List the pods.
@@ -374,6 +374,7 @@ NAMESPACE NAME READY STATUS RES
 kube-system   calico-node-6qp7f                          2/2   Running   1   11m
 kube-system   calico-node-gnjrm                          2/2   Running   0   11m
 kube-system   calico-node-llbgt                          2/2   Running   0   11m
 kube-system   coredns-1187388186-dj3pd                   1/1   Running   0   11m
+kube-system   coredns-1187388186-mx9rt                   1/1   Running   0   11m
 kube-system   kube-apiserver-7336w                       1/1   Running   0   11m
 kube-system   kube-controller-manager-3271970485-b9chx   1/1   Running   0   11m
@@ -389,7 +390,7 @@ kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0
 
 ## Going Further
 
-Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
+Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).
 
 ## Variables
@@ -3,7 +3,7 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll create a Kubernetes v1.11.3 cluster on DigitalOcean with Fedora Atomic.
+In this tutorial, we'll create a Kubernetes v1.12.3 cluster on DigitalOcean with Fedora Atomic.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
 
@@ -24,7 +24,7 @@ $ terraform version
 Terraform v0.11.7
 ```
 
-Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
+Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
 
 ```
 cd infra/clusters
@@ -45,7 +45,7 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.
 
 ```tf
 provider "digitalocean" {
-  version = "0.1.3"
+  version = "1.0.0"
   token   = "${chomp(file("~/.config/digital-ocean/token"))}"
   alias   = "default"
 }
@@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern
 
 ```tf
 module "digital-ocean-nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.3"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.12.3"
 
   providers = {
     digitalocean = "digitalocean.default"
@@ -151,10 +151,10 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
 ```
 $ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
 $ kubectl get nodes
-NAME             STATUS   AGE   VERSION
-10.132.110.130   Ready    10m   v1.11.3
-10.132.115.81    Ready    10m   v1.11.3
-10.132.124.107   Ready    10m   v1.11.3
+NAME                STATUS   ROLES               AGE   VERSION
+nemo-controller-0   Ready    controller,master   10m   v1.12.3
+nemo-worker-0       Ready    node                10m   v1.12.3
+nemo-worker-1       Ready    node                10m   v1.12.3
 ```
 
 List the pods.
@@ -162,24 +162,25 @@ List the pods.
 ```
 NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
 kube-system   coredns-1187388186-ld1j7                   1/1     Running   0          11m
+kube-system   coredns-1187388186-rdhf7                   1/1     Running   0          11m
+kube-system   flannel-1cq1v                              2/2     Running   0          11m
+kube-system   flannel-hq9t0                              2/2     Running   1          11m
+kube-system   flannel-v0g9w                              2/2     Running   0          11m
 kube-system   kube-apiserver-n10qr                       1/1     Running   0          11m
 kube-system   kube-controller-manager-3271970485-37gtw   1/1     Running   1          11m
 kube-system   kube-controller-manager-3271970485-p52t5   1/1     Running   0          11m
-kube-system   kube-flannel-1cq1v                         2/2     Running   0          11m
-kube-system   kube-flannel-hq9t0                         2/2     Running   1          11m
-kube-system   kube-flannel-v0g9w                         2/2     Running   0          11m
 kube-system   kube-proxy-6kxjf                           1/1     Running   0          11m
 kube-system   kube-proxy-fh3td                           1/1     Running   0          11m
 kube-system   kube-proxy-k35rc                           1/1     Running   0          11m
 kube-system   kube-scheduler-3895335239-2bc4c            1/1     Running   0          11m
 kube-system   kube-scheduler-3895335239-b7q47            1/1     Running   1          11m
 kube-system   pod-checkpointer-pr1lq                     1/1     Running   0          11m
-kube-system   pod-checkpointer-pr1lq-10.132.115.81       1/1     Running   0          10m
+kube-system   pod-checkpointer-pr1lq-nemo-controller-0   1/1     Running   0          10m
 ```
 
 ## Going Further
 
-Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
+Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).
 
 ## Variables
@ -3,7 +3,7 @@
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.11.3 cluster on Google Compute Engine with Fedora Atomic.
|
||||
In this tutorial, we'll create a Kubernetes v1.12.3 cluster on Google Compute Engine with Fedora Atomic.
|
||||
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
@ -25,7 +25,7 @@ $ terraform version
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
@ -121,7 +121,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne
|
||||
|
||||
```tf
|
||||
module "google-cloud-yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.11.3"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.12.3"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
@ -196,10 +196,10 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
```
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.11.3
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.3
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.3
|
||||
NAME ROLES STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.12.3
|
||||
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.12.3
|
||||
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.12.3
|
||||
```
|
||||
|
||||
List the pods.
|
||||
@ -210,6 +210,7 @@ NAMESPACE NAME READY STATUS RESTART
kube-system   calico-node-1cs8z                          2/2     Running   0          6m
kube-system   calico-node-d1l5b                          2/2     Running   0          6m
kube-system   calico-node-sp9ps                          2/2     Running   0          6m
kube-system   coredns-1187388186-dkh3o                   1/1     Running   0          6m
kube-system   coredns-1187388186-zj5dl                   1/1     Running   0          6m
kube-system   kube-apiserver-zppls                       1/1     Running   0          6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1     Running   0          6m
@ -224,7 +225,7 @@ kube-system pod-checkpointer-l6lrt 1/1 Running 0

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

## Variables

@ -1,6 +1,6 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.11.3 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.12.3 cluster on AWS with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@ -21,23 +21,15 @@ $ terraform version
Terraform v0.11.7
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

Add the plugin to your `~/.terraformrc`.

```
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
}
```

Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
@ -64,6 +56,10 @@ provider "aws" {
  shared_credentials_file = "/home/user/.config/aws/credentials"
}

provider "ct" {
  version = "0.2.1"
}

provider "local" {
  version = "~> 1.0"
  alias   = "default"
@ -96,7 +92,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    aws = "aws.default"
@ -168,10 +164,10 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME             STATUS   AGE   VERSION
ip-10-0-12-221   Ready    34m   v1.11.3
ip-10-0-19-112   Ready    34m   v1.11.3
ip-10-0-4-22     Ready    34m   v1.11.3
NAME             STATUS   ROLES               AGE   VERSION
ip-10-0-3-155    Ready    controller,master   10m   v1.12.3
ip-10-0-26-65    Ready    node                10m   v1.12.3
ip-10-0-41-21    Ready    node                10m   v1.12.3
```

List the pods.

@ -183,6 +179,7 @@ kube-system calico-node-1m5bf 2/2 Running 0
kube-system   calico-node-7jmr1                          2/2     Running   0          34m
kube-system   calico-node-bknc8                          2/2     Running   0          34m
kube-system   coredns-1187388186-wx1lg                   1/1     Running   0          34m
kube-system   coredns-1187388186-qjnvp                   1/1     Running   0          34m
kube-system   kube-apiserver-4mjbk                       1/1     Running   0          34m
kube-system   kube-controller-manager-3597210155-j2jbt   1/1     Running   1          34m
kube-system   kube-controller-manager-3597210155-j7g7x   1/1     Running   0          34m
@ -192,12 +189,12 @@ kube-system kube-proxy-sbbsh 1/1 Running 0
kube-system   kube-scheduler-3359497473-5plhf            1/1     Running   0          34m
kube-system   kube-scheduler-3359497473-r7zg7            1/1     Running   1          34m
kube-system   pod-checkpointer-4kxtl                     1/1     Running   0          34m
kube-system   pod-checkpointer-4kxtl-ip-10-0-12-221      1/1     Running   0          33m
kube-system   pod-checkpointer-4kxtl-ip-10-0-3-155       1/1     Running   0          33m
```

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance) and [addons](/addons/overview).

!!! note
    On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
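
For reference, a hedged sketch of installing the addon, assuming a checkout of the Typhoon repository with the CLUO manifests under `addons/cluo` (the path is an assumption; see the addons docs):

```sh
# apply the CLUO manifests recursively (manifest path assumed)
kubectl apply -R -f addons/cluo
```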
@ -244,9 +241,10 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| os_image | AMI channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| disk_iops | IOPS of the EBS volume | "0" (i.e. auto) | "400" |
| worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization.md) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization.md) |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
@ -261,3 +259,8 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
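
For example, a minimal sketch of this override on the `aws-tempest` module defined earlier (only the changed argument is shown):

```tf
module "aws-tempest" {
  # ...other arguments as defined earlier...

  # jumbo frames: raise the CNI MTU for better pod-to-pod bandwidth
  network_mtu = 8981
}
```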

#### Spot

Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.
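
A minimal sketch of the same module with spot workers enabled (only the changed argument is shown):

```tf
module "aws-tempest" {
  # ...other arguments as defined earlier...

  # bid up to $0.10/hour for spot workers; omit for on-demand instances
  worker_price = "0.10"
}
```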
@ -3,7 +3,7 @@
!!! danger
    Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.

In this tutorial, we'll create a Kubernetes v1.11.3 cluster on Azure with Container Linux.
In this tutorial, we'll create a Kubernetes v1.12.3 cluster on Azure with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@ -24,23 +24,15 @@ $ terraform version
Terraform v0.11.7
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

Add the plugin to your `~/.terraformrc`.

```
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
}
```

Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
@ -58,10 +50,14 @@ Configure the Azure provider in a `providers.tf` file.

```tf
provider "azurerm" {
  version = "1.13.0"
  version = "1.16.0"
  alias   = "default"
}

provider "ct" {
  version = "0.2.1"
}

provider "local" {
  version = "~> 1.0"
  alias   = "default"
@ -91,7 +87,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.

```tf
module "azure-ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    azurerm = "azurerm.default"
@ -112,7 +108,7 @@ module "azure-ramius" {
  asset_dir = "/home/user/.secrets/clusters/ramius"

  # optional
  worker_count = 3
  worker_count = 2
  host_cidr    = "10.0.0.0/20"
}
```
@ -165,10 +161,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
$ kubectl get nodes
NAME                   STATUS   ROLES               AGE   VERSION
ramius-controller-0    Ready    controller,master   24m   v1.11.3
ramius-worker-000001   Ready    node                25m   v1.11.3
ramius-worker-000002   Ready    node                24m   v1.11.3
ramius-worker-000005   Ready    node                24m   v1.11.3
ramius-controller-0    Ready    controller,master   24m   v1.12.3
ramius-worker-000001   Ready    node                25m   v1.12.3
ramius-worker-000002   Ready    node                24m   v1.12.3
```

List the pods.

@ -177,17 +172,16 @@ List the pods.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-7c6fbb4f4b-b6qzx                   1/1     Running   0          26m
kube-system   coredns-7c6fbb4f4b-j2k3d                   1/1     Running   0          26m
kube-system   flannel-bwf24                              2/2     Running   2          26m
kube-system   flannel-ks5qb                              2/2     Running   0          26m
kube-system   flannel-tq2wg                              2/2     Running   0          26m
kube-system   kube-apiserver-hxgsx                       1/1     Running   3          26m
kube-system   kube-controller-manager-5ff9cd7bb6-b942n   1/1     Running   0          26m
kube-system   kube-controller-manager-5ff9cd7bb6-bbr6w   1/1     Running   0          26m
kube-system   kube-flannel-bwf24                         2/2     Running   2          26m
kube-system   kube-flannel-ks5qb                         2/2     Running   0          26m
kube-system   kube-flannel-nghsx                         2/2     Running   2          26m
kube-system   kube-flannel-tq2wg                         2/2     Running   0          26m
kube-system   kube-proxy-j4vpq                           1/1     Running   0          26m
kube-system   kube-proxy-jxr5d                           1/1     Running   0          26m
kube-system   kube-proxy-lbdw5                           1/1     Running   0          26m
kube-system   kube-proxy-v8r7c                           1/1     Running   0          26m
kube-system   kube-scheduler-5f76d69686-s4fbx            1/1     Running   0          26m
kube-system   kube-scheduler-5f76d69686-vgdgn            1/1     Running   0          26m
kube-system   pod-checkpointer-cnqdg                     1/1     Running   0          26m
@ -196,7 +190,7 @@ kube-system pod-checkpointer-cnqdg-ramius-controller-0 1/1 Running 0

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

!!! note
    On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.

@ -1,6 +1,6 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.11.3 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.12.3 cluster on bare-metal with Container Linux.

First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@ -91,7 +91,7 @@ For networks already supporting iPXE clients, you can add a `default.ipxe` confi
chain http://matchbox.foo:8080/boot.ipxe
```

For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware.md#ubiquiti) itself to chainload machines to iPXE and Matchbox.
For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware/#ubiquiti) itself to chainload machines to iPXE and Matchbox.

For a small lab, you may wish to check out the [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image and [copy-paste examples](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq).
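
For reference, a hedged sketch of running that image to serve DHCP/TFTP on a small flat network (the subnet, TFTP root, and Matchbox hostname are assumptions; prefer the copy-paste examples above):

```sh
# Hedged sketch for a 192.168.1.0/24 lab. Legacy PXE clients chainload
# iPXE (undionly.kpxe); iPXE clients fetch boot.ipxe from Matchbox.
sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/coreos/dnsmasq \
  -d -q \
  --dhcp-range=192.168.1.3,192.168.1.254 \
  --enable-tftp --tftp-root=/var/lib/tftpboot \
  --dhcp-match=set:ipxe,175 \
  --dhcp-boot=tag:#ipxe,undionly.kpxe \
  --dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
```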
@ -113,31 +113,23 @@ $ terraform version
Terraform v0.11.7
```

Add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system.
Add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-matchbox/releases/download/v0.2.2/terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
sudo mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox /usr/local/bin/
mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox ~/.terraform.d/plugins/terraform-provider-matchbox_v0.2.2
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

Add the plugin to your `~/.terraformrc`.

```
providers {
  matchbox = "/usr/local/bin/terraform-provider-matchbox"
}
```

Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
@ -149,12 +141,17 @@ Configure the Matchbox provider to use your Matchbox API endpoint and client cer

```tf
provider "matchbox" {
  version     = "0.2.2"
  endpoint    = "matchbox.example.com:8081"
  client_cert = "${file("~/.config/matchbox/client.crt")}"
  client_key  = "${file("~/.config/matchbox/client.key")}"
  ca          = "${file("~/.config/matchbox/ca.crt")}"
}

provider "ct" {
  version = "0.2.1"
}

provider "local" {
  version = "~> 1.0"
  alias   = "default"
@ -182,7 +179,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    local = "local.default"
@ -291,9 +288,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.

```
# before v1.11.3
# before v1.12.3
$ ssh debug@node1.example.com
# after v1.11.3
# after v1.12.3
$ ssh -p 2222 core@node1.example.com
```

@ -317,10 +314,10 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME                STATUS   AGE   VERSION
node1.example.com   Ready    11m   v1.11.3
node2.example.com   Ready    11m   v1.11.3
node3.example.com   Ready    11m   v1.11.3
NAME                STATUS   ROLES               AGE   VERSION
node1.example.com   Ready    controller,master   10m   v1.12.3
node2.example.com   Ready    node                10m   v1.12.3
node3.example.com   Ready    node                10m   v1.12.3
```

List the pods.

@ -331,6 +328,7 @@ NAMESPACE NAME READY STATUS RES
kube-system   calico-node-6qp7f                          2/2     Running   1          11m
kube-system   calico-node-gnjrm                          2/2     Running   0          11m
kube-system   calico-node-llbgt                          2/2     Running   0          11m
kube-system   coredns-1187388186-dj3pd                   1/1     Running   0          11m
kube-system   coredns-1187388186-mx9rt                   1/1     Running   0          11m
kube-system   kube-apiserver-7336w                       1/1     Running   0          11m
kube-system   kube-controller-manager-3271970485-b9chx   1/1     Running   0          11m
@ -346,7 +344,7 @@ kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

!!! note
    On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
@ -377,7 +375,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this (coreos only for now) | false | true |
| cached_install | PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux or Flatcar images into the cache | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |

@ -1,6 +1,6 @@
# Digital Ocean

In this tutorial, we'll create a Kubernetes v1.11.3 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.12.3 cluster on DigitalOcean with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@ -21,23 +21,15 @@ $ terraform version
Terraform v0.11.7
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

Add the plugin to your `~/.terraformrc`.

```
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
}
```

Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
@ -58,11 +50,15 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.

```tf
provider "digitalocean" {
  version = "0.1.3"
  version = "1.0.0"
  token   = "${chomp(file("~/.config/digital-ocean/token"))}"
  alias   = "default"
}

provider "ct" {
  version = "0.2.1"
}

provider "local" {
  version = "~> 1.0"
  alias   = "default"
@ -90,7 +86,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    digitalocean = "digitalocean.default"
@ -163,10 +159,10 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME             STATUS   AGE   VERSION
10.132.110.130   Ready    10m   v1.11.3
10.132.115.81    Ready    10m   v1.11.3
10.132.124.107   Ready    10m   v1.11.3
NAME                STATUS   ROLES               AGE   VERSION
nemo-controller-0   Ready    controller,master   10m   v1.12.3
nemo-worker-0       Ready    node                10m   v1.12.3
nemo-worker-1       Ready    node                10m   v1.12.3
```

List the pods.

@ -174,24 +170,25 @@ List the pods.
```
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-1187388186-ld1j7                   1/1     Running   0          11m
kube-system   coredns-1187388186-rdhf7                   1/1     Running   0          11m
kube-system   flannel-1cq1v                              2/2     Running   0          11m
kube-system   flannel-hq9t0                              2/2     Running   1          11m
kube-system   flannel-v0g9w                              2/2     Running   0          11m
kube-system   kube-apiserver-n10qr                       1/1     Running   0          11m
kube-system   kube-controller-manager-3271970485-37gtw   1/1     Running   1          11m
kube-system   kube-controller-manager-3271970485-p52t5   1/1     Running   0          11m
kube-system   kube-flannel-1cq1v                         2/2     Running   0          11m
kube-system   kube-flannel-hq9t0                         2/2     Running   1          11m
kube-system   kube-flannel-v0g9w                         2/2     Running   0          11m
kube-system   kube-proxy-6kxjf                           1/1     Running   0          11m
kube-system   kube-proxy-fh3td                           1/1     Running   0          11m
kube-system   kube-proxy-k35rc                           1/1     Running   0          11m
kube-system   kube-scheduler-3895335239-2bc4c            1/1     Running   0          11m
kube-system   kube-scheduler-3895335239-b7q47            1/1     Running   1          11m
kube-system   pod-checkpointer-pr1lq                     1/1     Running   0          11m
kube-system   pod-checkpointer-pr1lq-10.132.115.81       1/1     Running   0          10m
kube-system   pod-checkpointer-pr1lq-nemo-controller-0   1/1     Running   0          10m
```

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

!!! note
    On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
@ -254,8 +251,8 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
| controller_type | Droplet type for controllers | s-2vcpu-2gb | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
| image | Container Linux image for instances | "coreos-stable" | coreos-stable, coreos-beta, coreos-alpha |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advnaced/customization.md) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](customization.md) |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
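
For instance, a hedged sketch of setting a few of the optional variables above on the `digital-ocean-nemo` module (values are illustrative):

```tf
module "digital-ocean-nemo" {
  # ...other arguments as defined earlier...

  # optional overrides (illustrative values)
  worker_count = 2
  worker_type  = "s-1vcpu-2gb"
  image        = "coreos-beta"
}
```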
@ -1,6 +1,6 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.11.3 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.12.3 cluster on Google Compute Engine with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@ -21,23 +21,15 @@ $ terraform version
Terraform v0.11.7
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

Add the plugin to your `~/.terraformrc`.

```
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
}
```

Read [concepts](/architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
@ -65,6 +57,10 @@ provider "google" {
  region = "us-central1"
}

provider "ct" {
  version = "0.2.1"
}

provider "local" {
  version = "~> 1.0"
  alias   = "default"
@ -97,7 +93,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    google = "google.default"
@ -171,10 +167,10 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                        STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   Ready    6m    v1.11.3
yavin-worker-jrbf.c.example-com.internal    Ready    5m    v1.11.3
yavin-worker-mzdm.c.example-com.internal    Ready    5m    v1.11.3
NAME                                        ROLES               STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   controller,master   Ready    6m    v1.12.3
yavin-worker-jrbf.c.example-com.internal    node                Ready    5m    v1.12.3
yavin-worker-mzdm.c.example-com.internal    node                Ready    5m    v1.12.3
```

List the pods.

@ -185,6 +181,7 @@ NAMESPACE NAME READY STATUS RESTART
kube-system   calico-node-1cs8z                          2/2     Running   0          6m
kube-system   calico-node-d1l5b                          2/2     Running   0          6m
kube-system   calico-node-sp9ps                          2/2     Running   0          6m
kube-system   coredns-1187388186-dkh3o                   1/1     Running   0          6m
kube-system   coredns-1187388186-zj5dl                   1/1     Running   0          6m
kube-system   kube-apiserver-zppls                       1/1     Running   0          6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1     Running   0          6m
@ -199,7 +196,7 @@ kube-system pod-checkpointer-l6lrt 1/1 Running 0

## Going Further

Learn about [maintenance](/topics/maintenance.md) and [addons](/addons/overview.md).
Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

!!! note
    On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
@ -249,8 +246,8 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
| os_image | Container Linux image for compute instances | "coreos-stable" | "coreos-stable-1632-3-0-v20180215" |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization.md) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](customization.md) |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
@ -11,29 +11,32 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.11.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/cl/google-cloud/#preemption) (varies by platform)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)

## Modules

Typhoon provides a Terraform Module for each supported operating system and platform.
Typhoon provides a Terraform Module for each supported operating system and platform. Container Linux is a mature and reliable choice. Also, Kinvolk's Flatcar Linux fork is selectable on AWS and bare-metal.

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](atomic/aws.md) | alpha |
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| Azure | Container Linux | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | alpha |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | stable |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |

The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
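
For example, a minimal sketch of selecting Flatcar Linux on the AWS module via the `os_image` variable (see the AWS variables table; `aws-tempest` is the module name used in the AWS tutorial):

```tf
module "aws-tempest" {
  # ...other arguments as in the AWS tutorial...

  # use Kinvolk's Flatcar Linux fork instead of Container Linux
  os_image = "flatcar-stable"
}
```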
Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS.

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |

## Documentation

@ -46,7 +49,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.12.3"

  providers = {
    google = "google.default"
@ -86,10 +89,10 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                        STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   Ready    6m    v1.11.3
yavin-worker-jrbf.c.example-com.internal    Ready    5m    v1.11.3
yavin-worker-mzdm.c.example-com.internal    Ready    5m    v1.11.3
NAME                                        ROLES               STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   controller,master   Ready    6m    v1.12.3
yavin-worker-jrbf.c.example-com.internal    node                Ready    5m    v1.12.3
yavin-worker-mzdm.c.example-com.internal    node                Ready    5m    v1.12.3
```

List the pods.

@ -100,6 +103,7 @@ NAMESPACE NAME READY STATUS RESTART
kube-system   calico-node-1cs8z                          2/2     Running   0          6m
kube-system   calico-node-d1l5b                          2/2     Running   0          6m
kube-system   calico-node-sp9ps                          2/2     Running   0          6m
kube-system   coredns-1187388186-dkh3o                   1/1     Running   0          6m
kube-system   coredns-1187388186-zj5dl                   1/1     Running   0          6m
kube-system   kube-apiserver-zppls                       1/1     Running   0          6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1     Running   0          6m
@ -110,6 +114,7 @@ kube-system kube-proxy-njn47 1/1 Running 0
kube-system   kube-scheduler-3895335239-5x87r            1/1     Running   0          6m
kube-system   kube-scheduler-3895335239-bzrrt            1/1     Running   1          6m
kube-system   pod-checkpointer-l6lrt                     1/1     Running   0          6m
kube-system   pod-checkpointer-l6lrt-controller-0        1/1     Running   0          6m
```

## Help

@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}

module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.3"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.12.3"
  ...
}
```
@ -40,7 +40,7 @@ Blue-green replacement reduces risk for clusters running critical applications.
Blue-green replacement provides some subtler benefits as well:

* Encourages investment in tooling for traffic migration and failovers. When a cluster incident arises, shifting applications to a healthy cluster will be second nature.
* Discourages reliance on in-place opqaue state. Retain confidence in your ability to create infrastructure from scratch.
* Discourages reliance on in-place opaque state. Retain confidence in your ability to create infrastructure from scratch.
* Allows Typhoon to make architecture changes between releases and eases the burden on Typhoon maintainers. By contrast, distros promising in-place upgrades get stuck with their mistakes or require complex and error-prone migrations.

### Bare-Metal

@ -126,3 +126,70 @@ Typhoon supports multi-controller clusters, so it is possible to upgrade a clust

!!! warning
    Typhoon does not support or document node replacement as an upgrade strategy. It would limit Typhoon's ability to make infrastructure and architectural changes between tagged releases.

### Terraform Plugins Directory

Use the Terraform 3rd-party [plugin directory](https://www.terraform.io/docs/configuration/providers.html#third-party-plugins) `~/.terraform.d/plugins` to keep versioned copies of the `terraform-provider-ct` and `terraform-provider-matchbox` plugins. The plugin directory replaces the `~/.terraformrc` file to allow 3rd party plugins to be defined and versioned independently (rather than globally).

```
# ~/.terraformrc (DEPRECATED)
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
  matchbox = "/usr/local/bin/terraform-provider-matchbox"
}
```

Migrate to using the Terraform plugin directory. Move `~/.terraformrc` to a backup location.

```
mv ~/.terraformrc ~/.terraform-backup
```

Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`. Download the **same version** of `terraform-provider-ct` you were using with `~/.terraformrc`; updating should only be done as a follow-up and is **only** safe for v1.12.2+ clusters!

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
```

If you use bare-metal, add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the versioned name.

```sh
wget https://github.com/coreos/terraform-provider-matchbox/releases/download/v0.2.2/terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox ~/.terraform.d/plugins/terraform-provider-matchbox_v0.2.2
```

Binary names are versioned, which allows different plugins to be upgraded independently and lets clusters pin specific versions.

```
$ tree ~/.terraform.d/
/home/user/.terraform.d/
└── plugins
    ├── terraform-provider-ct_v0.2.1
    └── terraform-provider-matchbox_v0.2.2
```

In each Terraform working directory, set the version of each provider.

```
# providers.tf

provider "matchbox" {
  version = "0.2.2"
  ...
}

provider "ct" {
  version = "0.2.1"
}
```

Run `terraform init` to ensure plugin version requirements are met. Verify `terraform plan` does not produce a diff, since the plugin versions should be the same as previously.

```
$ terraform init
$ terraform plan
```