Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 16:41:34 +02:00)

Compare commits (38 commits)
Commits (SHA1):

- 078f084220
- 81a1ae38e6
- 5b06e0e869
- b951aca66f
- 9da3725738
- fd12f3612b
- 96b646cf6d
- b15c60fa2f
- db947537d1
- c933bdfc26
- 21632c6674
- 74780fb09f
- b60a2ecdf7
- 4a7083d94a
- c20683067d
- dc436b8fe9
- efb9a2d09a
- e8d586f3b3
- b74f470701
- 45bc52d156
- 4d5f962d76
- e7d805d9a4
- d95bf2d1ea
- c42139beaa
- 35c2763ab0
- 2067356ae9
- 8f412e2f09
- 4ef2eb7e6b
- 99990e3cbb
- 3c3708d58e
- 0c45cd0f06
- 976452825e
- 7bc5633c38
- 09eb236519
- 6db11d5908
- cad12804c8
- eaea4d37a2
- 457ad18daa
CHANGES.md (45 changed lines)
```diff
@@ -4,6 +4,51 @@ Notable changes between versions.
 
 ## Latest
 
+## v1.16.0
+
+* Kubernetes [v1.16.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1160) ([#543](https://github.com/poseidon/typhoon/pull/543))
+  * Read about several Kubernetes API [deprecations](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#deprecations-and-removals)!
+  * Remove legacy node role labels (no longer shown in `kubectl get nodes`)
+  * Rename node labels to `node.kubernetes.io/master` and `node.kubernetes.io/node` (migratory)
+* Migrate control plane from self-hosted to static pods ([#536](https://github.com/poseidon/typhoon/pull/536))
+  * Run `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` as static pods on each controller
+  * `kubectl` edits to `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are no longer possible (change)
+  * Remove bootkube, self-hosted pivot, and `pod-checkpointer`
+* Update CoreDNS from v1.5.0 to v1.6.2 ([#535](https://github.com/poseidon/typhoon/pull/535))
+* Update etcd from v3.3.15 to [v3.4.0](https://github.com/etcd-io/etcd/releases/tag/v3.4.0)
+* Recommend updating `terraform-provider-ct` plugin from v0.3.2 to [v0.4.0](https://github.com/poseidon/terraform-provider-ct/releases/tag/v0.4.0)
+
+#### Azure
+
+* Change default `controller_type` to `Standard_B2s` ([#539](https://github.com/poseidon/typhoon/pull/539))
+  * `B2s` is cheaper by $17/month and provides 2 vCPU, 4GB RAM
+* Change default `worker_type` to `Standard_DS1_v2` ([#539](https://github.com/poseidon/typhoon/pull/539))
+  * `F1` is previous generation. `DS1_v2` is newer, similar cost, and supports Low Priority mode
+
+#### Addons
+
+* Update Grafana from v6.3.3 to v6.3.5
+
+## v1.15.3
+
+* Kubernetes [v1.15.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1153)
+* Update etcd from v3.3.13 to [v3.3.15](https://github.com/etcd-io/etcd/releases/tag/v3.3.15)
+* Update Calico from v3.8.1 to [v3.8.2](https://docs.projectcalico.org/v3.8/release-notes/)
+
+#### AWS
+
+* Enable root block device encryption by default ([#527](https://github.com/poseidon/typhoon/pull/527))
+  * Require `terraform-provider-aws` v2.23+ (**action required**)
+
+#### Addons
+
+* Update Prometheus from v2.11.0 to [v2.12.0](https://github.com/prometheus/prometheus/releases/tag/v2.12.0)
+* Update kube-state-metrics from v1.7.1 to v1.7.2
+* Update Grafana from v6.2.5 to v6.3.3
+  * Use stable IDs for etcd, CoreDNS, and Nginx Ingress dashboards ([#530](https://github.com/poseidon/typhoon/pull/530))
+* Update nginx-ingress from v0.25.0 to [v0.25.1](https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.25.1)
+  * Fix Nginx security advisories
+
 ## v1.15.2
 
 * Kubernetes [v1.15.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1152)
```
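The node label rename in v1.16.0 is marked migratory, so anything selecting on the old `node-role.kubernetes.io/*` keys (manifests, `nodeSelector`s, monitoring queries) needs updating. A minimal sketch for checking the new keys after an upgrade, assuming `kubectl` points at an upgraded cluster (the label keys come from the notes above):

```sh
# Controllers should carry the new master/controller label keys;
# the legacy role labels no longer appear in `kubectl get nodes`.
kubectl get nodes -l node.kubernetes.io/master

# Workers should carry node.kubernetes.io/node.
kubectl get nodes -l node.kubernetes.io/node

# Fall back to inspecting all labels if a selector matches nothing.
kubectl get nodes --show-labels
```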
README.md (24 changed lines)
````diff
@@ -1,4 +1,4 @@
-# Typhoon []() <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
+# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
 
 Typhoon is a minimal and free Kubernetes distribution.
 
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.16.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@@ -48,7 +48,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.15.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.0"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -81,10 +81,10 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 ```sh
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
-NAME                                       ROLES              STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  controller,master  Ready   6m   v1.15.2
-yavin-worker-jrbf.c.example-com.internal   node               Ready   5m   v1.15.2
-yavin-worker-mzdm.c.example-com.internal   node               Ready   5m   v1.15.2
+NAME                                       ROLES   STATUS  AGE  VERSION
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.16.0
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.16.0
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.16.0
 ```
 
 List the pods.
 
@@ -97,16 +97,12 @@ kube-system calico-node-d1l5b 2/2 Running 0
 kube-system   calico-node-sp9ps                          2/2  Running  0  6m
 kube-system   coredns-1187388186-zj5dl                   1/1  Running  0  6m
 kube-system   coredns-1187388186-dkh3o                   1/1  Running  0  6m
-kube-system   kube-apiserver-zppls                       1/1  Running  0  6m
-kube-system   kube-controller-manager-3271970485-gh9kt   1/1  Running  0  6m
-kube-system   kube-controller-manager-3271970485-h90v8   1/1  Running  1  6m
+kube-system   kube-apiserver-controller-0                1/1  Running  0  6m
+kube-system   kube-controller-manager-controller-0       1/1  Running  0  6m
 kube-system   kube-proxy-117v6                           1/1  Running  0  6m
 kube-system   kube-proxy-9886n                           1/1  Running  0  6m
 kube-system   kube-proxy-njn47                           1/1  Running  0  6m
-kube-system   kube-scheduler-3895335239-5x87r            1/1  Running  0  6m
-kube-system   kube-scheduler-3895335239-bzrrt            1/1  Running  1  6m
-kube-system   pod-checkpointer-l6lrt                     1/1  Running  0  6m
-kube-system   pod-checkpointer-l6lrt-controller-0        1/1  Running  0  6m
+kube-system   kube-scheduler-controller-0                1/1  Running  0  6m
 ```
 
 ## Non-Goals
````
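With bootkube and the self-hosted pivot removed, the `kube-apiserver-controller-0` style pods above are static pods owned by each controller's kubelet, which is why `kubectl` edits no longer stick. A short sketch for inspecting them on a controller, assuming SSH access as the `core` user and a hypothetical hostname (the manifest path matches the units later in this compare):

```sh
# Static pod manifests live on the controller's disk, not in the API server.
ssh core@yavin-controller-0.example.com 'sudo ls /etc/kubernetes/manifests'

# Mirror pods are visible through the API (suffixed with the node name),
# but API-side edits are discarded; change the on-disk manifest instead.
kubectl -n kube-system get pod kube-apiserver-controller-0 -o yaml | head -n 20
```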
```diff
@@ -1029,7 +1029,8 @@ data:
             "30d"
           ]
         },
-        "timezone": "browser",
+        "timezone": "",
         "title": "CoreDNS",
+        "uid": "2f3f749259235f58698ea949170d3bd5",
         "version": 0
       }
```

```diff
@@ -1224,7 +1224,8 @@ data:
             "30d"
           ]
         },
-        "timezone": "browser",
+        "timezone": "",
         "title": "etcd",
+        "uid": "c2f4e12cdf69feb95caa41a5a1b423d9",
         "version": 215
       }
```

```diff
@@ -1052,7 +1052,8 @@ data:
             "30d"
           ]
         },
-        "timezone": "browser",
+        "timezone": "",
         "title": "Nginx Ingress Controller",
+        "uid": "f4af03eca476c08ecf2b5cf15fd60168",
        "version": 0
      }
```
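Pinning `uid` values makes dashboard URLs and API lookups stable across re-provisioning, instead of Grafana assigning a random UID on each import. A sketch of fetching the CoreDNS dashboard by its now-stable UID over Grafana's HTTP API, assuming a hypothetical `grafana.example.com` endpoint and an API token in `$GRAFANA_TOKEN`:

```sh
# Fetch a dashboard by UID; the UID below is the CoreDNS one pinned above.
curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
  https://grafana.example.com/api/dashboards/uid/2f3f749259235f58698ea949170d3bd5 \
  | head -c 300
```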
```diff
@@ -23,7 +23,7 @@ spec:
     spec:
       containers:
         - name: grafana
-          image: docker.io/grafana/grafana:6.2.5
+          image: docker.io/grafana/grafana:6.3.5
           env:
             - name: GF_PATHS_CONFIG
               value: "/etc/grafana/custom.ini"
```
```diff
@@ -20,11 +20,9 @@ spec:
       annotations:
         seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
-      nodeSelector:
-        node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
           args:
             - /nginx-ingress-controller
             - --ingress-class=public
```

```diff
@@ -20,11 +20,9 @@ spec:
       annotations:
         seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
-      nodeSelector:
-        node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
           args:
             - /nginx-ingress-controller
             - --ingress-class=public
```

```diff
@@ -22,7 +22,7 @@ spec:
     spec:
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
           args:
             - /nginx-ingress-controller
             - --ingress-class=public
```

```diff
@@ -20,11 +20,9 @@ spec:
       annotations:
         seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
-      nodeSelector:
-        node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
           args:
             - /nginx-ingress-controller
             - --ingress-class=public
```

```diff
@@ -20,11 +20,9 @@ spec:
       annotations:
         seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
-      nodeSelector:
-        node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
           args:
             - /nginx-ingress-controller
             - --ingress-class=public
```
```diff
@@ -20,7 +20,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.11.0
+          image: quay.io/prometheus/prometheus:v2.12.0
           args:
             - --web.listen-address=0.0.0.0:9090
             - --config.file=/etc/prometheus/prometheus.yaml
```
```diff
@@ -24,16 +24,22 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
         - name: kube-state-metrics
-          image: quay.io/coreos/kube-state-metrics:v1.7.1
+          image: quay.io/coreos/kube-state-metrics:v1.7.2
           ports:
             - name: metrics
               containerPort: 8080
-          readinessProbe:
+          livenessProbe:
             httpGet:
               path: /healthz
               port: 8080
             initialDelaySeconds: 5
             timeoutSeconds: 5
+          readinessProbe:
+            httpGet:
+              path: /
+              port: 8080
+            initialDelaySeconds: 5
+            timeoutSeconds: 5
         - name: addon-resizer
           image: k8s.gcr.io/addon-resizer:1.8.5
           resources:
```
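The probe split matches kube-state-metrics' own endpoints: `/healthz` answers liveness and `/` answers readiness, both on port 8080. A quick sketch for exercising both, assuming the addon runs as a `kube-state-metrics` Deployment in `kube-system` as in this manifest:

```sh
# Port-forward to the deployment and hit both probe endpoints.
kubectl -n kube-system port-forward deploy/kube-state-metrics 8080:8080 &
sleep 2
curl -s -o /dev/null -w 'liveness  /healthz: %{http_code}\n' http://127.0.0.1:8080/healthz
curl -s -o /dev/null -w 'readiness /:        %{http_code}\n' http://127.0.0.1:8080/
kill %1
```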
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.16.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
```
```diff
@@ -1,6 +1,6 @@
-# Self-hosted Kubernetes assets (kubeconfig, manifests)
-module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
+# Kubernetes assets (kubeconfig, manifests)
+module "bootstrap" {
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
```
```diff
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.13"
+          Environment="ETCD_IMAGE_TAG=v3.4.0"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
```
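etcd v3.4.0 can be spot-checked through its client port once a cluster is up. A hedged sketch, run from a controller node; the port comes from the unit above, while the exact certificate paths are an assumption based on the copy steps in the `ssh.tf` diffs later in this compare:

```sh
# Query etcd's /version endpoint with the client TLS material on the node.
# Replace the domain with this cluster's etcd0 record.
sudo curl -s \
  --cacert /etc/ssl/etcd/etcd-client-ca.crt \
  --cert   /etc/ssl/etcd/etcd-client.crt \
  --key    /etc/ssl/etcd/etcd-client.key \
  https://cluster-etcd0.example.com:2379/version
# A healthy member reports {"etcdserver":"3.4.0", ...}
```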
```diff
@@ -64,11 +64,9 @@ systemd:
           --mount volume=var-log,target=/var/log \
           --insecure-options=image"
         Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
-        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
-        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
-        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
+        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/cni
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -87,8 +85,8 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/master \
-          --node-labels=node-role.kubernetes.io/controller="true" \
+          --node-labels=node.kubernetes.io/master \
+          --node-labels=node.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -98,17 +96,28 @@ systemd:
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
-    - name: bootkube.service
+    - name: bootstrap.service
       contents: |
         [Unit]
-        Description=Bootstrap a Kubernetes cluster
-        ConditionPathExists=!/opt/bootkube/init_bootkube.done
+        Description=Kubernetes control plane
+        ConditionPathExists=!/opt/bootstrap/bootstrap.done
         [Service]
         Type=oneshot
         RemainAfterExit=true
-        WorkingDirectory=/opt/bootkube
-        ExecStart=/opt/bootkube/bootkube-start
-        ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
+        WorkingDirectory=/opt/bootstrap
+        ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
+        ExecStart=/usr/bin/rkt run \
+          --trust-keys-from-https \
+          --volume assets,kind=host,source=/opt/bootstrap/assets \
+          --mount volume=assets,target=/assets \
+          --volume script,kind=host,source=/opt/bootstrap/apply \
+          --mount volume=script,target=/apply \
+          --insecure-options=image \
+          docker://k8s.gcr.io/hyperkube:v1.16.0 \
+          --net=host \
+          --dns=host \
+          --exec=/apply
+        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         [Install]
         WantedBy=multi-user.target
 storage:
@@ -125,37 +134,27 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.15.2
+          KUBELET_IMAGE_TAG=v1.16.0
+  - path: /opt/bootstrap/apply
+    filesystem: root
+    mode: 0544
+    contents:
+      inline: |
+        #!/bin/bash -e
+        export KUBECONFIG=/assets/auth/kubeconfig
+        until kubectl version; do
+          echo "Waiting for static pod control plane"
+          sleep 5
+        done
+        until kubectl apply -f /assets/manifests -R; do
+          echo "Retry applying manifests"
+          sleep 5
+        done
   - path: /etc/sysctl.d/max-user-watches.conf
     filesystem: root
     contents:
       inline: |
         fs.inotify.max_user_watches=16184
-  - path: /opt/bootkube/bootkube-start
-    filesystem: root
-    mode: 0544
-    user:
-      id: 500
-    group:
-      id: 500
-    contents:
-      inline: |
-        #!/bin/bash
-        # Wrapper for bootkube start
-        set -e
-        # Move experimental manifests
-        [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-        exec /usr/bin/rkt run \
-          --trust-keys-from-https \
-          --volume assets,kind=host,source=/opt/bootkube/assets \
-          --mount volume=assets,target=/assets \
-          --volume bootstrap,kind=host,source=/etc/kubernetes \
-          --mount volume=bootstrap,target=/etc/kubernetes \
-          $${RKT_OPTS} \
-          quay.io/coreos/bootkube:v0.14.0 \
-          --net=host \
-          --dns=host \
-          --exec=/bootkube -- start --asset-dir=/assets "$@"
 passwd:
   users:
     - name: core
```
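Because `bootstrap.service` is a oneshot unit guarded by `/opt/bootstrap/bootstrap.done`, a stalled bootstrap is easiest to diagnose from the unit's journal on the controller where it was started. A sketch:

```sh
# Follow one-time bootstrap progress; the unit exits once `kubectl apply`
# of /assets/manifests succeeds.
sudo systemctl status bootstrap
sudo journalctl -u bootstrap -f

# The guard file blocks re-runs; it should exist after a successful bootstrap.
ls -l /opt/bootstrap/bootstrap.done
```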
```diff
@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
     volume_type = var.disk_type
     volume_size = var.disk_size
     iops        = var.disk_iops
+    encrypted   = true
   }
 
   # network
@@ -70,7 +71,7 @@ data "template_file" "controller-configs" {
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
     etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
     cgroup_driver          = local.flavor == "flatcar" && local.channel == "edge" ? "systemd" : "cgroupfs"
-    kubeconfig             = indent(10, module.bootkube.kubeconfig-kubelet)
+    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
```
```diff
@@ -1,5 +1,5 @@
 output "kubeconfig-admin" {
-  value = module.bootkube.kubeconfig-admin
+  value = module.bootstrap.kubeconfig-admin
 }
 
 # Outputs for Kubernetes Ingress
@@ -32,7 +32,7 @@ output "worker_security_groups" {
 }
 
 output "kubeconfig" {
-  value = module.bootkube.kubeconfig-kubelet
+  value = module.bootstrap.kubeconfig-kubelet
 }
 
 # Outputs for custom load balancing
```
```diff
@@ -33,6 +33,28 @@ resource "aws_security_group_rule" "controller-etcd" {
   self = true
 }
 
+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10251
+  to_port                  = 10251
+  source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10252
+  to_port                  = 10252
+  source_security_group_id = aws_security_group.worker.id
+}
+
 # Allow Prometheus to scrape etcd metrics
 resource "aws_security_group_rule" "controller-etcd-metrics" {
   security_group_id = aws_security_group.controller.id
```
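These rules open 10251 (kube-scheduler) and 10252 (kube-controller-manager) to workers, where Prometheus runs. A sketch for confirming reachability from a worker node; the controller IP is a placeholder, and at these Kubernetes versions the ports serve plain HTTP `/metrics`:

```sh
# From a worker, the control plane metrics endpoints should now answer.
curl -s http://10.0.1.10:10251/metrics | head -n 3   # kube-scheduler
curl -s http://10.0.1.10:10252/metrics | head -n 3   # kube-controller-manager
```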
```diff
@@ -1,48 +1,57 @@
-# Secure copy etcd TLS assets to controllers.
+# Secure copy assets to controllers.
 resource "null_resource" "copy-controller-secrets" {
   count = var.controller_count
 
+  depends_on = [
+    module.bootstrap,
+  ]
+
   connection {
     type    = "ssh"
-    host    = element(aws_instance.controllers.*.public_ip, count.index)
+    host    = aws_instance.controllers.*.public_ip[count.index]
     user    = "core"
     timeout = "15m"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_ca_cert
+    content     = module.bootstrap.etcd_ca_cert
     destination = "$HOME/etcd-client-ca.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_cert
+    content     = module.bootstrap.etcd_client_cert
     destination = "$HOME/etcd-client.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_key
+    content     = module.bootstrap.etcd_client_key
     destination = "$HOME/etcd-client.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_cert
+    content     = module.bootstrap.etcd_server_cert
     destination = "$HOME/etcd-server.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_key
+    content     = module.bootstrap.etcd_server_key
     destination = "$HOME/etcd-server.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_cert
+    content     = module.bootstrap.etcd_peer_cert
     destination = "$HOME/etcd-peer.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_key
+    content     = module.bootstrap.etcd_peer_key
     destination = "$HOME/etcd-peer.key"
   }
 
+  provisioner "file" {
+    source      = var.asset_dir
+    destination = "$HOME/assets"
+  }
+
   provisioner "remote-exec" {
     inline = [
@@ -56,18 +65,22 @@ resource "null_resource" "copy-controller-secrets" {
       "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
       "sudo chown -R etcd:etcd /etc/ssl/etcd",
       "sudo chmod -R 500 /etc/ssl/etcd",
+      "sudo mv $HOME/assets /opt/bootstrap/assets",
+      "sudo mkdir -p /etc/kubernetes/manifests",
+      "sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
+      "sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/",
     ]
   }
 }
 
-# Secure copy bootkube assets to ONE controller and start bootkube to perform
-# one-time self-hosted cluster bootstrapping.
-resource "null_resource" "bootkube-start" {
+# Connect to a controller to perform one-time cluster bootstrap.
+resource "null_resource" "bootstrap" {
   depends_on = [
-    module.bootkube,
-    null_resource.copy-controller-secrets,
     module.workers,
     aws_route53_record.apiserver,
+    null_resource.copy-controller-secrets,
   ]
 
   connection {
@@ -77,15 +90,9 @@ resource "null_resource" "bootkube-start" {
     timeout = "15m"
   }
 
-  provisioner "file" {
-    source      = var.asset_dir
-    destination = "$HOME/assets"
-  }
-
   provisioner "remote-exec" {
     inline = [
-      "sudo mv $HOME/assets /opt/bootkube",
-      "sudo systemctl start bootkube",
+      "sudo systemctl start bootstrap",
     ]
   }
 }
```
```diff
@@ -3,7 +3,7 @@
 terraform {
   required_version = "~> 0.12.0"
   required_providers {
-    aws      = "~> 2.7"
+    aws      = "~> 2.23"
     ct       = "~> 0.3"
     template = "~> 2.1"
     null     = "~> 2.1"
```
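Raising the constraint to `aws ~> 2.23` (needed for root device `encrypted = true`) means existing working directories must re-resolve the provider plugin. A sketch with the Terraform 0.12 CLI:

```sh
# Re-fetch providers against the new constraints, then sanity check.
terraform init -upgrade
terraform providers
terraform plan
```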
```diff
@@ -14,7 +14,7 @@ module "workers" {
   target_groups = var.worker_target_groups
 
   # configuration
-  kubeconfig            = module.bootkube.kubeconfig-kubelet
+  kubeconfig            = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key    = var.ssh_authorized_key
   service_cidr          = var.service_cidr
   cluster_domain_suffix = var.cluster_domain_suffix
```
```diff
@@ -39,9 +39,9 @@ systemd:
           --mount volume=var-log,target=/var/log \
           --insecure-options=image"
         Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/cni
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -60,7 +60,7 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/node \
+          --node-labels=node.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -95,7 +95,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.15.2
+          KUBELET_IMAGE_TAG=v1.16.0
   - path: /etc/sysctl.d/max-user-watches.conf
     filesystem: root
     contents:
@@ -113,7 +113,7 @@ storage:
           --volume config,kind=host,source=/etc/kubernetes \
           --mount volume=config,target=/etc/kubernetes \
           --insecure-options=image \
-          docker://k8s.gcr.io/hyperkube:v1.15.2 \
+          docker://k8s.gcr.io/hyperkube:v1.16.0 \
           --net=host \
           --dns=host \
           --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
```
```diff
@@ -56,6 +56,7 @@ resource "aws_launch_configuration" "worker" {
     volume_type = var.disk_type
     volume_size = var.disk_size
     iops        = var.disk_iops
+    encrypted   = true
   }
 
   # network
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.16.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
```
```diff
@@ -16,6 +16,6 @@ data "aws_ami" "fedora-coreos" {
   // pin on known ok versions as preview matures
   filter {
     name   = "name"
-    values = ["fedora-coreos-30.20190725.0-hvm"]
+    values = ["fedora-coreos-30.20190801.0-hvm"]
   }
 }
```
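Since the Fedora CoreOS preview pins an exact image name, it is worth confirming the AMI is visible in the target region before applying. A sketch with the AWS CLI (the region is an example):

```sh
# Look up the pinned Fedora CoreOS preview AMI by name.
aws ec2 describe-images \
  --region us-east-1 \
  --filters "Name=name,Values=fedora-coreos-30.20190801.0-hvm" \
  --query 'Images[].[ImageId,Name]' --output table
```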
```diff
@@ -1,6 +1,6 @@
-# Self-hosted Kubernetes assets (kubeconfig, manifests)
-module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
+# Kubernetes assets (kubeconfig, manifests)
+module "bootstrap" {
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
```
```diff
@@ -31,6 +31,7 @@ resource "aws_instance" "controllers" {
     volume_type = var.disk_type
     volume_size = var.disk_size
     iops        = var.disk_iops
+    encrypted   = true
   }
 
   # network
@@ -66,7 +67,7 @@ data "template_file" "controller-configs" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
     etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
-    kubeconfig             = indent(10, module.bootkube.kubeconfig-kubelet)
+    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
```
```diff
@@ -28,7 +28,7 @@ systemd:
           --network host \
           --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
           --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
-          quay.io/coreos/etcd:v3.3.13
+          quay.io/coreos/etcd:v3.4.0
         ExecStop=/usr/bin/podman stop etcd
         [Install]
         WantedBy=multi-user.target
@@ -56,9 +56,9 @@ systemd:
         [Service]
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/podman rm kubelet
         ExecStart=/usr/bin/podman run --name kubelet \
@@ -80,13 +80,13 @@ systemd:
           --volume /var/run:/var/run \
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
-          k8s.gcr.io/hyperkube:v1.15.2 /hyperkube kubelet \
+          k8s.gcr.io/hyperkube:v1.16.0 /hyperkube kubelet \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
           --cgroup-driver=systemd \
-          --cgroups-per-qos=false \
-          --enforce-node-allocatable="" \
+          --cgroups-per-qos=true \
+          --enforce-node-allocatable=pods \
           --client-ca-file=/etc/kubernetes/ca.crt \
           --cluster_dns=${cluster_dns_service_ip} \
           --cluster_domain=${cluster_domain_suffix} \
@@ -95,8 +95,8 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/master \
-          --node-labels=node-role.kubernetes.io/controller="true" \
+          --node-labels=node.kubernetes.io/master \
+          --node-labels=node.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -107,33 +107,48 @@ systemd:
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
-    - name: bootkube.service
+    - name: bootstrap.service
       contents: |
         [Unit]
-        Description=Bootstrap a Kubernetes control plane
-        ConditionPathExists=!/opt/bootkube/init_bootkube.done
+        Description=Kubernetes control plane
+        ConditionPathExists=!/opt/bootstrap/bootstrap.done
         [Service]
         Type=oneshot
        RemainAfterExit=true
-        WorkingDirectory=/opt/bootkube
-        ExecStart=/usr/bin/bash -c 'set -x && \
-          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-* && exec podman run --name bootkube --privileged \
+        WorkingDirectory=/opt/bootstrap
+        ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
+        ExecStart=/usr/bin/podman run --name bootstrap \
           --network host \
-          --volume /opt/bootkube/assets:/assets \
-          --volume /etc/kubernetes:/etc/kubernetes \
-          quay.io/coreos/bootkube:v0.14.0 \
-          /bootkube start --asset-dir=/assets'
-        ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
+          --volume /opt/bootstrap/assets:/assets:ro,Z \
+          --volume /opt/bootstrap/apply:/apply:ro,Z \
+          k8s.gcr.io/hyperkube:v1.16.0 \
+          /apply
+        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
+        ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
   directories:
     - path: /etc/kubernetes
-    - path: /opt/bootkube
+    - path: /opt/bootstrap
   files:
     - path: /etc/kubernetes/kubeconfig
       mode: 0644
       contents:
         inline: |
           ${kubeconfig}
+    - path: /opt/bootstrap/apply
+      mode: 0544
+      contents:
+        inline: |
+          #!/bin/bash -e
+          export KUBECONFIG=/assets/auth/kubeconfig
+          until kubectl version; do
+            echo "Waiting for static pod control plane"
+            sleep 5
+          done
+          until kubectl apply -f /assets/manifests -R; do
+            echo "Retry applying manifests"
+            sleep 5
+          done
     - path: /etc/sysctl.d/reverse-path-filter.conf
       contents:
         inline: |
```
```diff
@@ -1,5 +1,5 @@
 output "kubeconfig-admin" {
-  value = module.bootkube.kubeconfig-admin
+  value = module.bootstrap.kubeconfig-admin
 }
 
 # Outputs for Kubernetes Ingress
@@ -32,7 +32,7 @@ output "worker_security_groups" {
 }
 
 output "kubeconfig" {
-  value = module.bootkube.kubeconfig-kubelet
+  value = module.bootstrap.kubeconfig-kubelet
 }
 
 # Outputs for custom load balancing
```
```diff
@@ -44,6 +44,28 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
   source_security_group_id = aws_security_group.worker.id
 }
 
+# Allow Prometheus to scrape kube-scheduler
+resource "aws_security_group_rule" "controller-scheduler-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10251
+  to_port                  = 10251
+  source_security_group_id = aws_security_group.worker.id
+}
+
+# Allow Prometheus to scrape kube-controller-manager
+resource "aws_security_group_rule" "controller-manager-metrics" {
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 10252
+  to_port                  = 10252
+  source_security_group_id = aws_security_group.worker.id
+}
+
 resource "aws_security_group_rule" "controller-vxlan" {
   count = var.networking == "flannel" ? 1 : 0
 
```
```diff
@@ -1,6 +1,10 @@
-# Secure copy etcd TLS assets to controllers.
+# Secure copy assets to controllers.
 resource "null_resource" "copy-controller-secrets" {
   count = var.controller_count
 
+  depends_on = [
+    module.bootstrap,
+  ]
+
   connection {
     type = "ssh"
@@ -10,40 +14,45 @@ resource "null_resource" "copy-controller-secrets" {
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_ca_cert
+    content     = module.bootstrap.etcd_ca_cert
     destination = "$HOME/etcd-client-ca.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_cert
+    content     = module.bootstrap.etcd_client_cert
     destination = "$HOME/etcd-client.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_key
+    content     = module.bootstrap.etcd_client_key
     destination = "$HOME/etcd-client.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_cert
+    content     = module.bootstrap.etcd_server_cert
     destination = "$HOME/etcd-server.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_key
+    content     = module.bootstrap.etcd_server_key
     destination = "$HOME/etcd-server.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_cert
+    content     = module.bootstrap.etcd_peer_cert
     destination = "$HOME/etcd-peer.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_key
+    content     = module.bootstrap.etcd_peer_key
     destination = "$HOME/etcd-peer.key"
   }
 
+  provisioner "file" {
+    source      = var.asset_dir
+    destination = "$HOME/assets"
+  }
+
   provisioner "remote-exec" {
     inline = [
       "sudo mkdir -p /etc/ssl/etcd/etcd",
@@ -56,18 +65,22 @@ resource "null_resource" "copy-controller-secrets" {
       "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
       "sudo chown -R etcd:etcd /etc/ssl/etcd",
       "sudo chmod -R 500 /etc/ssl/etcd",
+      "sudo mv $HOME/assets /opt/bootstrap/assets",
+      "sudo mkdir -p /etc/kubernetes/manifests",
+      "sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
+      "sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/"
     ]
   }
 }
 
-# Secure copy bootkube assets to ONE controller and start bootkube to perform
-# one-time self-hosted cluster bootstrapping.
-resource "null_resource" "bootkube-start" {
+# Connect to a controller to perform one-time cluster bootstrap.
+resource "null_resource" "bootstrap" {
   depends_on = [
-    module.bootkube,
-    null_resource.copy-controller-secrets,
     module.workers,
     aws_route53_record.apiserver,
+    null_resource.copy-controller-secrets,
   ]
 
   connection {
@@ -77,15 +90,9 @@ resource "null_resource" "bootkube-start" {
     timeout = "15m"
   }
 
-  provisioner "file" {
-    source      = var.asset_dir
-    destination = "$HOME/assets"
-  }
-
   provisioner "remote-exec" {
     inline = [
-      "sudo mv $HOME/assets /opt/bootkube",
-      "sudo systemctl start bootkube",
+      "sudo systemctl start bootstrap",
     ]
   }
 }
```
```diff
@@ -3,7 +3,7 @@
 terraform {
   required_version = "~> 0.12.0"
   required_providers {
-    aws      = "~> 2.7"
+    aws      = "~> 2.23"
     ct       = "~> 0.4"
     template = "~> 2.1"
     null     = "~> 2.1"
```
```diff
@@ -14,7 +14,7 @@ module "workers" {
   target_groups = var.worker_target_groups
 
   # configuration
-  kubeconfig            = module.bootkube.kubeconfig-kubelet
+  kubeconfig            = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key    = var.ssh_authorized_key
   service_cidr          = var.service_cidr
   cluster_domain_suffix = var.cluster_domain_suffix
```
```diff
@@ -16,6 +16,6 @@ data "aws_ami" "fedora-coreos" {
   // pin on known ok versions as preview matures
   filter {
     name   = "name"
-    values = ["fedora-coreos-30.20190725.0-hvm"]
+    values = ["fedora-coreos-30.20190801.0-hvm"]
   }
 }
```
```diff
@@ -26,9 +26,9 @@ systemd:
         [Service]
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/podman rm kubelet
         ExecStart=/usr/bin/podman run --name kubelet \
@@ -50,7 +50,7 @@ systemd:
           --volume /var/run:/var/run \
           --volume /var/run/lock:/var/run/lock:z \
           --volume /opt/cni/bin:/opt/cni/bin:z \
-          k8s.gcr.io/hyperkube:v1.15.2 /hyperkube kubelet \
+          k8s.gcr.io/hyperkube:v1.16.0 /hyperkube kubelet \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \
@@ -65,7 +65,7 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/node \
+          --node-labels=node.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -78,7 +78,6 @@ systemd:
 storage:
   directories:
     - path: /etc/kubernetes
-    - path: /opt/bootkube
   files:
     - path: /etc/kubernetes/kubeconfig
       mode: 0644
```
```diff
@@ -56,6 +56,7 @@ resource "aws_launch_configuration" "worker" {
     volume_type = var.disk_type
     volume_size = var.disk_size
     iops        = var.disk_iops
+    encrypted   = true
   }
 
   # network
```
```diff
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.16.0 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
```
```diff
@@ -1,6 +1,6 @@
-# Self-hosted Kubernetes assets (kubeconfig, manifests)
-module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
+# Kubernetes assets (kubeconfig, manifests)
+module "bootstrap" {
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
```
```diff
@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.13"
+          Environment="ETCD_IMAGE_TAG=v3.4.0"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -63,11 +63,9 @@ systemd:
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log \
           --insecure-options=image"
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
-        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
-        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/cni
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -85,8 +83,8 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/master \
-          --node-labels=node-role.kubernetes.io/controller="true" \
+          --node-labels=node.kubernetes.io/master \
+          --node-labels=node.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -96,17 +94,28 @@ systemd:
         RestartSec=10
         [Install]
         WantedBy=multi-user.target
-    - name: bootkube.service
+    - name: bootstrap.service
       contents: |
         [Unit]
-        Description=Bootstrap a Kubernetes cluster
-        ConditionPathExists=!/opt/bootkube/init_bootkube.done
+        Description=Kubernetes control plane
+        ConditionPathExists=!/opt/bootstrap/bootstrap.done
         [Service]
         Type=oneshot
         RemainAfterExit=true
-        WorkingDirectory=/opt/bootkube
-        ExecStart=/opt/bootkube/bootkube-start
-        ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
+        WorkingDirectory=/opt/bootstrap
+        ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
+        ExecStart=/usr/bin/rkt run \
+          --trust-keys-from-https \
+          --volume assets,kind=host,source=/opt/bootstrap/assets \
+          --mount volume=assets,target=/assets \
+          --volume script,kind=host,source=/opt/bootstrap/apply \
+          --mount volume=script,target=/apply \
+          --insecure-options=image \
+          docker://k8s.gcr.io/hyperkube:v1.16.0 \
+          --net=host \
+          --dns=host \
+          --exec=/apply
+        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
         [Install]
         WantedBy=multi-user.target
 storage:
@@ -123,37 +132,27 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.15.2
+          KUBELET_IMAGE_TAG=v1.16.0
+  - path: /opt/bootstrap/apply
+    filesystem: root
+    mode: 0544
+    contents:
+      inline: |
+        #!/bin/bash -e
+        export KUBECONFIG=/assets/auth/kubeconfig
+        until kubectl version; do
+          echo "Waiting for static pod control plane"
+          sleep 5
+        done
+        until kubectl apply -f /assets/manifests -R; do
+          echo "Retry applying manifests"
+          sleep 5
+        done
   - path: /etc/sysctl.d/max-user-watches.conf
     filesystem: root
     contents:
       inline: |
         fs.inotify.max_user_watches=16184
-  - path: /opt/bootkube/bootkube-start
-    filesystem: root
-    mode: 0544
-    user:
-      id: 500
-    group:
-      id: 500
-    contents:
-      inline: |
-        #!/bin/bash
-        # Wrapper for bootkube start
-        set -e
-        # Move experimental manifests
-        [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-        exec /usr/bin/rkt run \
-          --trust-keys-from-https \
-          --volume assets,kind=host,source=/opt/bootkube/assets \
-          --mount volume=assets,target=/assets \
-          --volume bootstrap,kind=host,source=/etc/kubernetes \
-          --mount volume=bootstrap,target=/etc/kubernetes \
-          $${RKT_OPTS} \
-          quay.io/coreos/bootkube:v0.14.0 \
-          --net=host \
-          --dns=host \
-          --exec=/bootkube -- start --asset-dir=/assets "$@"
 passwd:
   users:
     - name: core
```
```diff
@@ -155,7 +155,7 @@ data "template_file" "controller-configs" {
     etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
     # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
     etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
-    kubeconfig             = indent(10, module.bootkube.kubeconfig-kubelet)
+    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
     cluster_domain_suffix  = var.cluster_domain_suffix
```
```diff
@@ -1,5 +1,5 @@
 output "kubeconfig-admin" {
-  value = module.bootkube.kubeconfig-admin
+  value = module.bootstrap.kubeconfig-admin
 }
 
 # Outputs for Kubernetes Ingress
@@ -28,7 +28,7 @@ output "security_group_id" {
 }
 
 output "kubeconfig" {
-  value = module.bootkube.kubeconfig-kubelet
+  value = module.bootstrap.kubeconfig-kubelet
 }
 
 # Outputs for custom firewalling
```
```diff
@@ -53,6 +53,22 @@ resource "azurerm_network_security_rule" "controller-etcd-metrics" {
   destination_address_prefix = azurerm_subnet.controller.address_prefix
 }
 
+# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
+resource "azurerm_network_security_rule" "controller-kube-metrics" {
+  resource_group_name = azurerm_resource_group.cluster.name
+
+  name                        = "allow-kube-metrics"
+  network_security_group_name = azurerm_network_security_group.controller.name
+  priority                    = "2011"
+  access                      = "Allow"
+  direction                   = "Inbound"
+  protocol                    = "Tcp"
+  source_port_range           = "*"
+  destination_port_range      = "10251-10252"
+  source_address_prefix       = azurerm_subnet.worker.address_prefix
+  destination_address_prefix  = azurerm_subnet.controller.address_prefix
+}
+
 resource "azurerm_network_security_rule" "controller-apiserver" {
   resource_group_name = azurerm_resource_group.cluster.name
 
```
```diff
@@ -1,50 +1,58 @@
-# Secure copy etcd TLS assets to controllers.
+# Secure copy assets to controllers.
 resource "null_resource" "copy-controller-secrets" {
   count = var.controller_count
 
-  depends_on = [azurerm_virtual_machine.controllers]
+  depends_on = [
+    module.bootstrap,
+    azurerm_virtual_machine.controllers
+  ]
 
   connection {
     type    = "ssh"
-    host    = element(azurerm_public_ip.controllers.*.ip_address, count.index)
+    host    = azurerm_public_ip.controllers.*.ip_address[count.index]
     user    = "core"
     timeout = "15m"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_ca_cert
+    content     = module.bootstrap.etcd_ca_cert
     destination = "$HOME/etcd-client-ca.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_cert
+    content     = module.bootstrap.etcd_client_cert
     destination = "$HOME/etcd-client.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_client_key
+    content     = module.bootstrap.etcd_client_key
     destination = "$HOME/etcd-client.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_cert
+    content     = module.bootstrap.etcd_server_cert
     destination = "$HOME/etcd-server.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_server_key
+    content     = module.bootstrap.etcd_server_key
     destination = "$HOME/etcd-server.key"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_cert
+    content     = module.bootstrap.etcd_peer_cert
     destination = "$HOME/etcd-peer.crt"
   }
 
   provisioner "file" {
-    content     = module.bootkube.etcd_peer_key
+    content     = module.bootstrap.etcd_peer_key
     destination = "$HOME/etcd-peer.key"
   }
 
+  provisioner "file" {
+    source      = var.asset_dir
+    destination = "$HOME/assets"
+  }
+
   provisioner "remote-exec" {
     inline = [
@@ -58,18 +66,22 @@ resource "null_resource" "copy-controller-secrets" {
       "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
       "sudo chown -R etcd:etcd /etc/ssl/etcd",
       "sudo chmod -R 500 /etc/ssl/etcd",
+      "sudo mv $HOME/assets /opt/bootstrap/assets",
+      "sudo mkdir -p /etc/kubernetes/manifests",
+      "sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
+      "sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
+      "sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/",
     ]
   }
 }
 
-# Secure copy bootkube assets to ONE controller and start bootkube to perform
-# one-time self-hosted cluster bootstrapping.
-resource "null_resource" "bootkube-start" {
+# Connect to a controller to perform one-time cluster bootstrap.
+resource "null_resource" "bootstrap" {
   depends_on = [
-    module.bootkube,
-    null_resource.copy-controller-secrets,
     module.workers,
     azurerm_dns_a_record.apiserver,
+    null_resource.copy-controller-secrets,
   ]
 
   connection {
@@ -79,15 +91,9 @@ resource "null_resource" "bootkube-start" {
     timeout = "15m"
   }
 
-  provisioner "file" {
-    source      = var.asset_dir
-    destination = "$HOME/assets"
-  }
-
   provisioner "remote-exec" {
     inline = [
-      "sudo mv $HOME/assets /opt/bootkube",
-      "sudo systemctl start bootkube",
+      "sudo systemctl start bootstrap",
     ]
   }
 }
```
```diff
@@ -36,13 +36,13 @@ variable "worker_count" {
 
 variable "controller_type" {
   type        = string
-  default     = "Standard_DS1_v2"
+  default     = "Standard_B2s"
   description = "Machine type for controllers (see `az vm list-skus --location centralus`)"
 }
 
 variable "worker_type" {
   type        = string
-  default     = "Standard_F1"
+  default     = "Standard_DS1_v2"
   description = "Machine type for workers (see `az vm list-skus --location centralus`)"
 }
 
```
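The new defaults swap the previous-generation `F1` workers for `DS1_v2` and move controllers to the burstable `B2s`. The `az` command referenced in the variable descriptions can confirm both SKUs exist in a target region; a sketch, assuming the `--size` filter available in recent Azure CLI releases:

```sh
# Inspect the new default machine types in a region.
az vm list-skus --location centralus --size Standard_B2s --output table
az vm list-skus --location centralus --size Standard_DS1_v2 --output table
```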
```diff
@@ -15,7 +15,7 @@ module "workers" {
   priority = var.worker_priority
 
   # configuration
-  kubeconfig            = module.bootkube.kubeconfig-kubelet
+  kubeconfig            = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key    = var.ssh_authorized_key
   service_cidr          = var.service_cidr
   cluster_domain_suffix = var.cluster_domain_suffix
```
```diff
@@ -38,9 +38,9 @@ systemd:
           --volume var-log,kind=host,source=/var/log \
           --mount volume=var-log,target=/var/log \
           --insecure-options=image"
-        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
         ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
+        ExecStartPre=/bin/mkdir -p /opt/cni/bin
         ExecStartPre=/bin/mkdir -p /var/lib/cni
         ExecStartPre=/bin/mkdir -p /var/lib/calico
         ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -58,7 +58,7 @@ systemd:
           --kubeconfig=/etc/kubernetes/kubeconfig \
           --lock-file=/var/run/lock/kubelet.lock \
           --network-plugin=cni \
-          --node-labels=node-role.kubernetes.io/node \
+          --node-labels=node.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
           --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -93,7 +93,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.15.2
+          KUBELET_IMAGE_TAG=v1.16.0
   - path: /etc/sysctl.d/max-user-watches.conf
     filesystem: root
     contents:
@@ -111,7 +111,7 @@ storage:
           --volume config,kind=host,source=/etc/kubernetes \
           --mount volume=config,target=/etc/kubernetes \
           --insecure-options=image \
-          docker://k8s.gcr.io/hyperkube:v1.15.2 \
+          docker://k8s.gcr.io/hyperkube:v1.16.0 \
           --net=host \
           --dns=host \
           --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')
```
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.16.0 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
|
||||
|
@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

@@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.13"
Environment="ETCD_IMAGE_TAG=v3.4.0"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@@ -76,11 +76,9 @@ systemd:
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -100,8 +98,8 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -111,17 +109,30 @@ systemd:
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: bootkube.service
- name: bootstrap.service
contents: |
[Unit]
Description=Bootstrap a Kubernetes control plane with a temp api-server
ConditionPathExists=!/opt/bootkube/init_bootkube.done
Description=Kubernetes control plane
ConditionPathExists=!/opt/bootstrap/bootstrap.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/opt/bootkube/bootkube-start
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
WorkingDirectory=/opt/bootstrap
ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
ExecStart=/usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=/opt/bootstrap/assets \
--mount volume=assets,target=/assets \
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.0 \
--net=host \
--dns=host \
--exec=/apply
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubelet.env
@@ -130,43 +141,33 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.15.2
KUBELET_IMAGE_TAG=v1.16.0
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
${domain_name}
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/bash -e
export KUBECONFIG=/assets/auth/kubeconfig
until kubectl version; do
echo "Waiting for static pod control plane"
sleep 5
done
until kubectl apply -f /assets/manifests -R; do
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /opt/bootkube/bootkube-start
filesystem: root
mode: 0544
user:
id: 500
group:
id: 500
contents:
inline: |
#!/bin/bash
# Wrapper for bootkube start
set -e
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=/opt/bootkube/assets \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$${RKT_OPTS} \
quay.io/coreos/bootkube:v0.14.0 \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
users:
- name: core

@@ -51,9 +51,9 @@ systemd:
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
Environment=KUBELET_CGROUP_DRIVER=${cgroup_driver}
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -73,7 +73,7 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -91,7 +91,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.15.2
KUBELET_IMAGE_TAG=v1.16.0
- path: /etc/hostname
filesystem: root
mode: 0644

@@ -1,4 +1,4 @@
output "kubeconfig-admin" {
value = module.bootkube.kubeconfig-admin
value = module.bootstrap.kubeconfig-admin
}

@@ -160,7 +160,7 @@ data "template_file" "controller-configs" {
etcd_name = element(var.controller_names, count.index)
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))
cgroup_driver = var.os_channel == "flatcar-edge" ? "systemd" : "cgroupfs"
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
@@ -188,7 +188,7 @@ data "template_file" "worker-configs" {
vars = {
domain_name = element(var.worker_domains, count.index)
cgroup_driver = var.os_channel == "flatcar-edge" ? "systemd" : "cgroupfs"
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}

@@ -1,4 +1,4 @@
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = length(var.controller_names)

@@ -8,54 +8,60 @@ resource "null_resource" "copy-controller-secrets" {
matchbox_group.install,
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]

connection {
type = "ssh"
host = element(var.controller_domains, count.index)
host = var.controller_domains[count.index]
user = "core"
timeout = "60m"
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

provisioner "file" {
content = module.bootkube.etcd_ca_cert
content = module.bootstrap.etcd_ca_cert
destination = "$HOME/etcd-client-ca.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_cert
content = module.bootstrap.etcd_client_cert
destination = "$HOME/etcd-client.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_key
content = module.bootstrap.etcd_client_key
destination = "$HOME/etcd-client.key"
}

provisioner "file" {
content = module.bootkube.etcd_server_cert
content = module.bootstrap.etcd_server_cert
destination = "$HOME/etcd-server.crt"
}

provisioner "file" {
content = module.bootkube.etcd_server_key
content = module.bootstrap.etcd_server_key
destination = "$HOME/etcd-server.key"
}

provisioner "file" {
content = module.bootkube.etcd_peer_cert
content = module.bootstrap.etcd_peer_cert
destination = "$HOME/etcd-peer.crt"
}

provisioner "file" {
content = module.bootkube.etcd_peer_key
content = module.bootstrap.etcd_peer_key
destination = "$HOME/etcd-peer.key"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
@@ -69,7 +75,13 @@ resource "null_resource" "copy-controller-secrets" {
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
"sudo chown -R etcd:etcd /etc/ssl/etcd",
"sudo chmod -R 500 /etc/ssl/etcd",
"sudo mv $HOME/assets /opt/bootstrap/assets",
"sudo mkdir -p /etc/kubernetes/manifests"
"sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
"sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
"sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
"sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/",
]
}
}
@@ -88,13 +100,13 @@ resource "null_resource" "copy-worker-secrets" {

connection {
type = "ssh"
host = element(var.worker_domains, count.index)
host = var.worker_domains[count.index]
user = "core"
timeout = "60m"
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

@@ -105,9 +117,8 @@ resource "null_resource" "copy-worker-secrets" {
}
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
# Terraform only does one task at a time, so it would try to bootstrap
# while no Kubelets are running.
@@ -118,20 +129,14 @@ resource "null_resource" "bootkube-start" {

connection {
type = "ssh"
host = element(var.controller_domains, 0)
host = var.controller_domains[0]
user = "core"
timeout = "15m"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
"sudo mv $HOME/assets /opt/bootkube",
"sudo systemctl start bootkube",
"sudo systemctl start bootstrap",
]
}
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.16.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"

cluster_name = var.cluster_name
api_servers = [var.k8s_domain_name]

@@ -28,7 +28,7 @@ systemd:
--network host \
--volume /var/lib/etcd:/var/lib/etcd:rw,Z \
--volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
quay.io/coreos/etcd:v3.3.13
quay.io/coreos/etcd:v3.4.0
ExecStop=/usr/bin/podman stop etcd
[Install]
WantedBy=multi-user.target
@@ -55,9 +55,9 @@ systemd:
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
@@ -81,13 +81,13 @@ systemd:
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.15.2 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.0 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--cgroup-driver=systemd \
--cgroups-per-qos=false \
--enforce-node-allocatable="" \
--cgroups-per-qos=true \
--enforce-node-allocatable=pods \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${cluster_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@@ -97,8 +97,8 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -118,33 +118,48 @@ systemd:
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: bootkube.service
- name: bootstrap.service
contents: |
[Unit]
Description=Bootstrap a Kubernetes control plane
ConditionPathExists=!/opt/bootkube/init_bootkube.done
Description=Kubernetes control plane
ConditionPathExists=!/opt/bootstrap/bootstrap.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/usr/bin/bash -c 'set -x && \
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-* && exec podman run --name bootkube --privileged \
WorkingDirectory=/opt/bootstrap
ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
ExecStart=/usr/bin/podman run --name bootstrap \
--network host \
--volume /opt/bootkube/assets:/assets \
--volume /etc/kubernetes:/etc/kubernetes \
quay.io/coreos/bootkube:v0.14.0 \
/bootkube start --asset-dir=/assets'
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
--volume /opt/bootstrap/assets:/assets:ro,Z \
--volume /opt/bootstrap/apply:/apply:ro,Z \
k8s.gcr.io/hyperkube:v1.16.0 \
/apply
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
directories:
- path: /etc/kubernetes
- path: /opt/bootkube
- path: /opt/bootstrap
files:
- path: /etc/hostname
mode: 0644
contents:
inline:
${domain_name}
- path: /opt/bootstrap/apply
mode: 0544
contents:
inline: |
#!/bin/bash -e
export KUBECONFIG=/assets/auth/kubeconfig
until kubectl version; do
echo "Waiting for static pod control plane"
sleep 5
done
until kubectl apply -f /assets/manifests -R; do
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/reverse-path-filter.conf
contents:
inline: |

@@ -25,9 +25,9 @@ systemd:
[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/podman rm kubelet
ExecStart=/usr/bin/podman run --name kubelet \
@@ -51,7 +51,7 @@ systemd:
--volume /opt/cni/bin:/opt/cni/bin:z \
--volume /etc/iscsi:/etc/iscsi \
--volume /sbin/iscsiadm:/sbin/iscsiadm \
k8s.gcr.io/hyperkube:v1.15.2 /hyperkube kubelet \
k8s.gcr.io/hyperkube:v1.16.0 /hyperkube kubelet \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
@@ -67,7 +67,7 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -89,7 +89,6 @@ systemd:
storage:
directories:
- path: /etc/kubernetes
- path: /opt/bootkube
files:
- path: /etc/hostname
mode: 0644

@@ -1,4 +1,4 @@
output "kubeconfig-admin" {
value = module.bootkube.kubeconfig-admin
value = module.bootstrap.kubeconfig-admin
}

@@ -56,7 +56,7 @@ data "template_file" "controller-configs" {
domain_name = var.controller_domains[count.index]
etcd_name = var.controller_names[count.index]
etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}
@@ -89,7 +89,7 @@ data "template_file" "worker-configs" {
template = file("${path.module}/fcc/worker.yaml")
vars = {
domain_name = var.worker_domains[count.index]
cluster_dns_service_ip = module.bootkube.cluster_dns_service_ip
cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
cluster_domain_suffix = var.cluster_domain_suffix
ssh_authorized_key = var.ssh_authorized_key
}

@@ -1,4 +1,4 @@
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = length(var.controller_names)

@@ -7,6 +7,7 @@ resource "null_resource" "copy-controller-secrets" {
depends_on = [
matchbox_group.controller,
matchbox_group.worker,
module.bootstrap,
]

connection {
@@ -17,44 +18,49 @@ resource "null_resource" "copy-controller-secrets" {
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

provisioner "file" {
content = module.bootkube.etcd_ca_cert
content = module.bootstrap.etcd_ca_cert
destination = "$HOME/etcd-client-ca.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_cert
content = module.bootstrap.etcd_client_cert
destination = "$HOME/etcd-client.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_key
content = module.bootstrap.etcd_client_key
destination = "$HOME/etcd-client.key"
}

provisioner "file" {
content = module.bootkube.etcd_server_cert
content = module.bootstrap.etcd_server_cert
destination = "$HOME/etcd-server.crt"
}

provisioner "file" {
content = module.bootkube.etcd_server_key
content = module.bootstrap.etcd_server_key
destination = "$HOME/etcd-server.key"
}

provisioner "file" {
content = module.bootkube.etcd_peer_cert
content = module.bootstrap.etcd_peer_cert
destination = "$HOME/etcd-peer.crt"
}

provisioner "file" {
content = module.bootkube.etcd_peer_key
content = module.bootstrap.etcd_peer_key
destination = "$HOME/etcd-peer.key"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
@@ -66,7 +72,13 @@ resource "null_resource" "copy-controller-secrets" {
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
"sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
"sudo mv $HOME/assets /opt/bootstrap/assets",
"sudo mkdir -p /etc/kubernetes/manifests"
"sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
"sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
"sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
"sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/"
]
}
}
@@ -90,7 +102,7 @@ resource "null_resource" "copy-worker-secrets" {
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

@@ -101,9 +113,8 @@ resource "null_resource" "copy-worker-secrets" {
}
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
# Terraform only does one task at a time, so it would try to bootstrap
# while no Kubelets are running.
@@ -119,15 +130,9 @@ resource "null_resource" "bootkube-start" {
timeout = "15m"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
"sudo mv $HOME/assets /opt/bootkube",
"sudo systemctl start bootkube",
"sudo systemctl start bootstrap",
]
}
}

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.16.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"

cluster_name = var.cluster_name
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.13"
Environment="ETCD_IMAGE_TAG=v3.4.0"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -74,11 +74,9 @@ systemd:
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -97,8 +95,8 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--node-labels=node.kubernetes.io/master \
--node-labels=node.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
@@ -108,17 +106,28 @@ systemd:
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: bootkube.service
- name: bootstrap.service
contents: |
[Unit]
Description=Bootstrap a Kubernetes cluster
ConditionPathExists=!/opt/bootkube/init_bootkube.done
Description=Kubernetes control plane
ConditionPathExists=!/opt/bootstrap/bootstrap.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/opt/bootkube/bootkube-start
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
WorkingDirectory=/opt/bootstrap
ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
ExecStart=/usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=/opt/bootstrap/assets \
--mount volume=assets,target=/assets \
--volume script,kind=host,source=/opt/bootstrap/apply \
--mount volume=script,target=/apply \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.16.0 \
--net=host \
--dns=host \
--exec=/apply
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
[Install]
WantedBy=multi-user.target
storage:
@@ -129,34 +138,24 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.15.2
KUBELET_IMAGE_TAG=v1.16.0
- path: /opt/bootstrap/apply
filesystem: root
mode: 0544
contents:
inline: |
#!/bin/bash -e
export KUBECONFIG=/assets/auth/kubeconfig
until kubectl version; do
echo "Waiting for static pod control plane"
sleep 5
done
until kubectl apply -f /assets/manifests -R; do
echo "Retry applying manifests"
sleep 5
done
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /opt/bootkube/bootkube-start
filesystem: root
mode: 0544
user:
id: 500
group:
id: 500
contents:
inline: |
#!/bin/bash
# Wrapper for bootkube start
set -e
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=/opt/bootkube/assets \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$${RKT_OPTS} \
quay.io/coreos/bootkube:v0.14.0 \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"

@@ -49,9 +49,9 @@ systemd:
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/mkdir -p /var/lib/calico
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@@ -70,7 +70,7 @@ systemd:
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--node-labels=node.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
@@ -99,7 +99,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.15.2
KUBELET_IMAGE_TAG=v1.16.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@@ -117,7 +117,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.15.2 \
docker://k8s.gcr.io/hyperkube:v1.16.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -53,24 +53,33 @@ resource "digitalocean_firewall" "controllers" {

tags = ["${var.cluster_name}-controller"]

# etcd, kube-apiserver, kubelet
# etcd
inbound_rule {
protocol = "tcp"
port_range = "2379-2380"
source_tags = [digitalocean_tag.controllers.name]
}

# etcd metrics
inbound_rule {
protocol = "tcp"
port_range = "2381"
source_tags = [digitalocean_tag.workers.name]
}

# kube-apiserver
inbound_rule {
protocol = "tcp"
port_range = "6443"
source_addresses = ["0.0.0.0/0", "::/0"]
}

# kube-scheduler metrics, kube-controller-manager metrics
inbound_rule {
protocol = "tcp"
port_range = "10251-10252"
source_tags = [digitalocean_tag.workers.name]
}
}
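As a quick sanity check of these rules (a sketch, not from the source; the IP is a placeholder documentation address): `kube-apiserver` on 6443 should answer from anywhere, while the etcd client port should be reachable only from tagged droplets.

```sh
# 6443 is open to 0.0.0.0/0, so this should return a TLS response
curl -k https://203.0.113.10:6443/version
# 2379 allows only the controller tag, so this should time out from outside
curl -m 5 https://203.0.113.10:2379/health || echo "etcd port filtered, as intended"
```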

resource "digitalocean_firewall" "workers" {

@@ -1,5 +1,5 @@
output "kubeconfig-admin" {
value = module.bootkube.kubeconfig-admin
value = module.bootstrap.kubeconfig-admin
}

output "controllers_dns" {

@@ -1,57 +1,63 @@
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
count = var.controller_count

depends_on = [
module.bootstrap,
digitalocean_firewall.rules
]

connection {
type = "ssh"
host = element(digitalocean_droplet.controllers.*.ipv4_address, count.index)
host = digitalocean_droplet.controllers.*.ipv4_address[count.index]
user = "core"
timeout = "15m"
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

provisioner "file" {
content = module.bootkube.etcd_ca_cert
content = module.bootstrap.etcd_ca_cert
destination = "$HOME/etcd-client-ca.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_cert
content = module.bootstrap.etcd_client_cert
destination = "$HOME/etcd-client.crt"
}

provisioner "file" {
content = module.bootkube.etcd_client_key
content = module.bootstrap.etcd_client_key
destination = "$HOME/etcd-client.key"
}

provisioner "file" {
content = module.bootkube.etcd_server_cert
content = module.bootstrap.etcd_server_cert
destination = "$HOME/etcd-server.crt"
}

provisioner "file" {
content = module.bootkube.etcd_server_key
content = module.bootstrap.etcd_server_key
destination = "$HOME/etcd-server.key"
}

provisioner "file" {
content = module.bootkube.etcd_peer_cert
content = module.bootstrap.etcd_peer_cert
destination = "$HOME/etcd-peer.crt"
}

provisioner "file" {
content = module.bootkube.etcd_peer_key
content = module.bootstrap.etcd_peer_key
destination = "$HOME/etcd-peer.key"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
@@ -65,7 +71,13 @@ resource "null_resource" "copy-controller-secrets" {
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
"sudo chown -R etcd:etcd /etc/ssl/etcd",
"sudo chmod -R 500 /etc/ssl/etcd",
"sudo mv $HOME/assets /opt/bootstrap/assets",
"sudo mkdir -p /etc/kubernetes/manifests"
"sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
"sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
"sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
"sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/",
]
}
}
@@ -76,13 +88,13 @@ resource "null_resource" "copy-worker-secrets" {

connection {
type = "ssh"
host = element(digitalocean_droplet.workers.*.ipv4_address, count.index)
host = digitalocean_droplet.workers.*.ipv4_address[count.index]
user = "core"
timeout = "15m"
}

provisioner "file" {
content = module.bootkube.kubeconfig-kubelet
content = module.bootstrap.kubeconfig-kubelet
destination = "$HOME/kubeconfig"
}

@@ -93,11 +105,9 @@ resource "null_resource" "copy-worker-secrets" {
}
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
depends_on = [
module.bootkube,
null_resource.copy-controller-secrets,
null_resource.copy-worker-secrets,
]
@@ -109,15 +119,9 @@ resource "null_resource" "bootkube-start" {
timeout = "15m"
}

provisioner "file" {
source = var.asset_dir
destination = "$HOME/assets"
}

provisioner "remote-exec" {
inline = [
"sudo mv $HOME/assets /opt/bootkube",
"sudo systemctl start bootkube",
"sudo systemctl start bootstrap",
]
}
}

@@ -147,5 +147,5 @@ module "digital-ocean-nemo" {
}
```

To customize lower-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootkube](https://github.com/poseidon/terraform-render-bootkube) Terraform module.
To customize low-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootstrap](https://github.com/poseidon/terraform-render-bootstrap) Terraform module.

@@ -76,7 +76,7 @@ Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define

```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.15.2"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.16.0"

# Azure
region = module.azure-ramius.region
@@ -142,7 +142,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu

```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.15.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.16.0"

# Google Cloud
region = "europe-west2"
@@ -173,11 +173,11 @@ Verify a managed instance group of workers joins the cluster within a few minute

```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.15.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.15.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.15.2
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.15.2
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.15.2
yavin-controller-0.c.example-com.internal Ready 6m v1.16.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.16.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.16.0
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.16.0
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.16.0
```

### Variables

@@ -30,7 +30,7 @@ Together, they diversify Typhoon to support a range of container technologies.
|-------------------|-----------------|---------------|
| single-master | all platforms | all platforms |
| multi-master | all platforms | all platforms |
| control plane | self-hosted | self-hosted |
| control plane | static pods | static pods |
| kubelet image | upstream hyperkube | upstream hyperkube |
| control plane images | upstream hyperkube | upstream hyperkube |
| on-host etcd | rkt-fly | podman |
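The last row is the practical difference for operators: on Fedora CoreOS the on-host etcd shown earlier in this compare runs as a podman container rather than under rkt-fly. A hedged sketch of inspecting it on a controller (the container name follows the unit above; output will vary):

```sh
# list the etcd container started by the etcd-member unit
sudo podman ps --filter name=etcd
# tail its recent logs
sudo podman logs --tail 20 etcd
```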
@@ -1,10 +1,10 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.15.2 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.0 cluster on AWS with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
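The reworded paragraph reflects the move to static pods: control plane components are now defined by manifests on each controller's disk rather than by self-hosted deployments. A sketch of verifying this, assuming SSH access as `core` (the hostname and file names are illustrative):

```sh
# static pod manifests copied from the bootstrap assets
ssh core@ip-10-0-3-155 'ls /etc/kubernetes/manifests'
# illustrative output: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```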

## Requirements

@@ -18,15 +18,15 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.2/terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
mv terraform-provider-ct-v0.3.2-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.2
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```
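To confirm the plugin landed where Terraform looks for it, a quick check (the listing shown is illustrative):

```sh
ls ~/.terraform.d/plugins/
# terraform-provider-ct_v0.4.0
```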

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
@@ -49,13 +49,13 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
version = "2.15.0"
version = "2.29.0"
region = "eu-central-1"
shared_credentials_file = "/home/user/.config/aws/credentials"
}

provider "ct" {
version = "0.3.2"
version = "0.4.0"
}
```
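After writing `providers.tf`, initialize the working directory so Terraform downloads the `aws` provider and registers the local `ct` plugin; this is the standard Terraform workflow rather than anything specific to this change:

```sh
terraform init
```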
@@ -69,8 +69,8 @@ Additional configuration options are described in the `aws` provider [docs](http

Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.

```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.15.2"
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.16.0"

# AWS
cluster_name = "tempest"
@@ -91,7 +91,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
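# Optional sanity check (an editorial addition, not in the original doc):
# list the keys the agent is holding
ssh-add -L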
@@ -118,9 +118,9 @@ Apply the changes to create the cluster.

```sh
$ terraform apply
...
module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143)
module.aws-tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootstrap: Creation complete after 11m8s (ID: 3961816482286168143)

Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
```
@@ -134,10 +134,10 @@ In 4-8 minutes, the Kubernetes cluster will be ready.

```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready controller,master 10m v1.15.2
ip-10-0-26-65 Ready node 10m v1.15.2
ip-10-0-41-21 Ready node 10m v1.15.2
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready <none> 10m v1.16.0
ip-10-0-26-65 Ready <none> 10m v1.16.0
ip-10-0-41-21 Ready <none> 10m v1.16.0
```

List the pods.

@@ -150,16 +150,12 @@ kube-system calico-node-7jmr1 2/2 Running 0
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m
kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m
kube-system kube-apiserver-4mjbk 1/1 Running 0 34m
kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m
kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m
kube-system kube-apiserver-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-controller-manager-ip-10-0-3-155 1/1 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
kube-system kube-scheduler-3359497473-5plhf 1/1 Running 0 34m
kube-system kube-scheduler-3359497473-r7zg7 1/1 Running 1 34m
kube-system pod-checkpointer-4kxtl 1/1 Running 0 34m
kube-system pod-checkpointer-4kxtl-ip-10-0-3-155 1/1 Running 0 33m
kube-system kube-scheduler-ip-10-0-3-155 1/1 Running 1 34m
```

## Going Further

@@ -3,11 +3,11 @@
!!! danger
Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.

In this tutorial, we'll create a Kubernetes v1.15.2 cluster on Azure with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.0 cluster on Azure with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@@ -21,15 +21,15 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.2/terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
mv terraform-provider-ct-v0.3.2-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.2
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
@@ -50,11 +50,11 @@ Configure the Azure provider in a `providers.tf` file.

```tf
provider "azurerm" {
version = "1.30.1"
version = "1.34.0"
}

provider "ct" {
version = "0.3.2"
version = "0.4.0"
}
```

@@ -65,8 +65,8 @@ Additional configuration options are described in the `azurerm` provider [docs](
Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.

```tf
module "azure-ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.15.2"
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.16.0"

# Azure
cluster_name = "ramius"
@@ -88,7 +88,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@@ -115,9 +115,9 @@ Apply the changes to create the cluster.

```sh
$ terraform apply
...
module.azure-ramius.null_resource.bootkube-start: Still creating... (6m50s elapsed)
module.azure-ramius.null_resource.bootkube-start: Still creating... (7m0s elapsed)
module.azure-ramius.null_resource.bootkube-start: Creation complete after 7m8s (ID: 3961816482286168143)
module.azure-ramius.null_resource.bootstrap: Still creating... (6m50s elapsed)
module.azure-ramius.null_resource.bootstrap: Still creating... (7m0s elapsed)
module.azure-ramius.null_resource.bootstrap: Creation complete after 7m8s (ID: 3961816482286168143)

Apply complete! Resources: 86 added, 0 changed, 0 destroyed.
```
@@ -131,10 +131,10 @@ In 4-8 minutes, the Kubernetes cluster will be ready.

```
$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready controller,master 24m v1.15.2
ramius-worker-000001 Ready node 25m v1.15.2
ramius-worker-000002 Ready node 24m v1.15.2
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready <none> 24m v1.16.0
ramius-worker-000001 Ready <none> 25m v1.16.0
ramius-worker-000002 Ready <none> 24m v1.16.0
```

List the pods.

@@ -144,19 +144,15 @@ $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7c6fbb4f4b-b6qzx 1/1 Running 0 26m
kube-system coredns-7c6fbb4f4b-j2k3d 1/1 Running 0 26m
kube-system flannel-bwf24 2/2 Running 2 26m
kube-system flannel-bwf24 2/2 Running 0 26m
kube-system flannel-ks5qb 2/2 Running 0 26m
kube-system flannel-tq2wg 2/2 Running 0 26m
kube-system kube-apiserver-hxgsx 1/1 Running 3 26m
kube-system kube-controller-manager-5ff9cd7bb6-b942n 1/1 Running 0 26m
kube-system kube-controller-manager-5ff9cd7bb6-bbr6w 1/1 Running 0 26m
kube-system kube-apiserver-ramius-controller-0 1/1 Running 0 26m
kube-system kube-controller-manager-ramius-controller-0 1/1 Running 0 26m
kube-system kube-proxy-j4vpq 1/1 Running 0 26m
kube-system kube-proxy-jxr5d 1/1 Running 0 26m
kube-system kube-proxy-lbdw5 1/1 Running 0 26m
kube-system kube-scheduler-5f76d69686-s4fbx 1/1 Running 0 26m
kube-system kube-scheduler-5f76d69686-vgdgn 1/1 Running 0 26m
kube-system pod-checkpointer-cnqdg 1/1 Running 0 26m
kube-system pod-checkpointer-cnqdg-ramius-controller-0 1/1 Running 0 25m
kube-system kube-scheduler-ramius-controller-0 1/1 Running 0 26m
```

## Going Further

@ -1,10 +1,10 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.15.2 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.16.0 cluster on bare-metal with Container Linux.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns` while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
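A quick way to spot-check this division of labor once machines are provisioned; a minimal sketch, assuming SSH access as the `core` user and the example node names used later in this tutorial:

```sh
# On a controller, both etcd-member and kubelet should report "active"
ssh core@node1.example.com 'systemctl is-active etcd-member kubelet'

# On a worker, only the kubelet service runs
ssh core@node2.example.com 'systemctl is-active kubelet'
```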
## Requirements

@ -111,7 +111,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -125,9 +125,9 @@ mv terraform-provider-matchbox-v0.3.0-linux-amd64/terraform-provider-matchbox ~/
Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.2/terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
mv terraform-provider-ct-v0.3.2-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.2
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
@ -150,7 +150,7 @@ provider "matchbox" {
}

provider "ct" {
  version = "0.3.2"
  version = "0.4.0"
}
```

@ -160,7 +160,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.15.2"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.0"

  # bare-metal
  cluster_name = "mercury"
@ -199,7 +199,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@ -221,14 +221,12 @@ $ terraform plan
Plan: 55 to add, 0 to change, 0 to destroy.
```

Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
Apply the changes. Terraform will generate bootstrap assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.

```sh
$ terraform apply
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-controller-secrets.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-worker-secrets.0: Still creating... (10s elapsed)
...
```

@ -250,14 +248,14 @@ Machines will network boot, install Container Linux to disk, reboot into the dis

### Bootstrap

Wait for the `bootkube-start` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
Wait for the `bootstrap` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.

```
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)

Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
@ -265,9 +263,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.

```
# before v1.15.2
# before v1.16.0
$ ssh debug@node1.example.com
# after v1.15.2
# after v1.16.0
$ ssh -p 2222 core@node1.example.com
```

@ -275,13 +273,12 @@ To watch the bootstrap process in detail, SSH to the first controller and journa

```
$ ssh core@node1.example.com
$ journalctl -f -u bootkube
bootkube[5]: Pod Status: pod-checkpointer Running
bootkube[5]: Pod Status: kube-apiserver Running
bootkube[5]: Pod Status: kube-scheduler Running
bootkube[5]: Pod Status: kube-controller-manager Running
bootkube[5]: All self-hosted control plane components successfully started
bootkube[5]: Tearing down temporary bootstrap control plane...
$ journalctl -f -u bootstrap
podman[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
podman[1750]: Waiting for static pod control plane
...
podman[1750]: serviceaccount/calico-node unchanged
systemd[1]: Started Kubernetes control plane.
```

## Verify
@ -291,10 +288,10 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME               STATUS  ROLES              AGE  VERSION
node1.example.com  Ready   controller,master  10m  v1.15.2
node2.example.com  Ready   node               10m  v1.15.2
node3.example.com  Ready   node               10m  v1.15.2
NAME               STATUS  ROLES   AGE  VERSION
node1.example.com  Ready   <none>  10m  v1.16.0
node2.example.com  Ready   <none>  10m  v1.16.0
node3.example.com  Ready   <none>  10m  v1.16.0
```

List the pods.
@ -307,16 +304,12 @@ kube-system calico-node-gnjrm 2/2 Running 0
kube-system   calico-node-llbgt                           2/2    Running  0         11m
kube-system   coredns-1187388186-dj3pd                    1/1    Running  0         11m
kube-system   coredns-1187388186-mx9rt                    1/1    Running  0         11m
kube-system   kube-apiserver-7336w                        1/1    Running  0         11m
kube-system   kube-controller-manager-3271970485-b9chx    1/1    Running  0         11m
kube-system   kube-controller-manager-3271970485-v30js    1/1    Running  1         11m
kube-system   kube-apiserver-node1.example.com            1/1    Running  0         11m
kube-system   kube-controller-manager-node1.example.com   1/1    Running  1         11m
kube-system   kube-proxy-50sd4                            1/1    Running  0         11m
kube-system   kube-proxy-bczhp                            1/1    Running  0         11m
kube-system   kube-proxy-mp2fw                            1/1    Running  0         11m
kube-system   kube-scheduler-3895335239-fd3l7             1/1    Running  1         11m
kube-system   kube-scheduler-3895335239-hfjv0             1/1    Running  0         11m
kube-system   pod-checkpointer-wf65d                      1/1    Running  0         11m
kube-system   pod-checkpointer-wf65d-node1.example.com    1/1    Running  0         11m
kube-system   kube-scheduler-node1.example.com            1/1    Running  0         11m
```

## Going Further
@ -1,10 +1,10 @@
# Digital Ocean

In this tutorial, we'll create a Kubernetes v1.15.2 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.0 cluster on DigitalOcean with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -18,15 +18,15 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.2/terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
mv terraform-provider-ct-v0.3.2-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.2
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
@ -50,12 +50,12 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.

```tf
provider "digitalocean" {
  version = "1.4.0"
  version = "1.7.0"
  token   = "${chomp(file("~/.config/digital-ocean/token"))}"
}

provider "ct" {
  version = "0.3.2"
  version = "0.4.0"
}
```

@ -65,7 +65,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.15.2"
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.16.0"

  # Digital Ocean
  cluster_name = "nemo"
@ -85,7 +85,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@ -111,11 +111,11 @@ Apply the changes to create the cluster.

```sh
$ terraform apply
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (30s elapsed)
module.digital-ocean-nemo.null_resource.bootkube-start: Provisioning with 'remote-exec'...
module.digital-ocean-nemo.null_resource.bootstrap: Still creating... (30s elapsed)
module.digital-ocean-nemo.null_resource.bootstrap: Provisioning with 'remote-exec'...
...
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (6m20s elapsed)
module.digital-ocean-nemo.null_resource.bootkube-start: Creation complete (ID: 7599298447329218468)
module.digital-ocean-nemo.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.digital-ocean-nemo.null_resource.bootstrap: Creation complete (ID: 7599298447329218468)

Apply complete! Resources: 54 added, 0 changed, 0 destroyed.
```
@ -129,10 +129,10 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME            STATUS  ROLES              AGE  VERSION
10.132.110.130  Ready   controller,master  10m  v1.15.2
10.132.115.81   Ready   node               10m  v1.15.2
10.132.124.107  Ready   node               10m  v1.15.2
NAME            STATUS  ROLES   AGE  VERSION
10.132.110.130  Ready   <none>  10m  v1.16.0
10.132.115.81   Ready   <none>  10m  v1.16.0
10.132.124.107  Ready   <none>  10m  v1.16.0
```

List the pods.
@ -142,18 +142,14 @@ NAMESPACE NAME READY STATUS RES
kube-system   coredns-1187388186-ld1j7                    1/1    Running  0         11m
kube-system   coredns-1187388186-rdhf7                    1/1    Running  0         11m
kube-system   flannel-1cq1v                               2/2    Running  0         11m
kube-system   flannel-hq9t0                               2/2    Running  1         11m
kube-system   flannel-hq9t0                               2/2    Running  0         11m
kube-system   flannel-v0g9w                               2/2    Running  0         11m
kube-system   kube-apiserver-n10qr                        1/1    Running  0         11m
kube-system   kube-controller-manager-3271970485-37gtw    1/1    Running  1         11m
kube-system   kube-controller-manager-3271970485-p52t5    1/1    Running  0         11m
kube-system   kube-apiserver-ip-10.132.115.81             1/1    Running  0         11m
kube-system   kube-controller-manager-ip-10.132.115.81    1/1    Running  0         11m
kube-system   kube-proxy-6kxjf                            1/1    Running  0         11m
kube-system   kube-proxy-fh3td                            1/1    Running  0         11m
kube-system   kube-proxy-k35rc                            1/1    Running  0         11m
kube-system   kube-scheduler-3895335239-2bc4c             1/1    Running  0         11m
kube-system   kube-scheduler-3895335239-b7q47             1/1    Running  1         11m
kube-system   pod-checkpointer-pr1lq                      1/1    Running  0         11m
kube-system   pod-checkpointer-pr1lq-10.132.115.81        1/1    Running  0         10m
kube-system   kube-scheduler-ip-10.132.115.81             1/1    Running  0         11m
```

## Going Further
@ -1,10 +1,10 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.15.2 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.16.0 cluster on Google Compute Engine with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -18,15 +18,15 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.3.2/terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.3.2-linux-amd64.tar.gz
mv terraform-provider-ct-v0.3.2-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.3.2
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.4.0/terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
@ -49,14 +49,14 @@ Configure the Google Cloud provider to use your service account key, project-id,

```tf
provider "google" {
  version     = "2.9.0"
  version     = "2.15.0"
  project     = "project-id"
  region      = "us-central1"
  credentials = "${file("~/.config/google-cloud/terraform.json")}"
}

provider "ct" {
  version = "0.3.2"
  version = "0.4.0"
}
```

@ -71,7 +71,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.15.2"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.0"

  # Google Cloud
  cluster_name = "yavin"
@ -92,7 +92,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@ -118,12 +118,11 @@ Apply the changes to create the cluster.

```sh
$ terraform apply
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (10s elapsed)
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (10s elapsed)
...

module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m30s elapsed)
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m40s elapsed)
module.google-cloud-yavin.null_resource.bootkube-start: Creation complete (ID: 5768638456220583358)
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (5m30s elapsed)
module.google-cloud-yavin.null_resource.bootstrap: Still creating... (5m40s elapsed)
module.google-cloud-yavin.null_resource.bootstrap: Creation complete (ID: 5768638456220583358)

Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
```
@ -137,10 +136,10 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                       ROLES              STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  controller,master  Ready   6m   v1.15.2
yavin-worker-jrbf.c.example-com.internal   node               Ready   5m   v1.15.2
yavin-worker-mzdm.c.example-com.internal   node               Ready   5m   v1.15.2
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.16.0
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.16.0
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.16.0
```

List the pods.
@ -153,15 +152,12 @@ kube-system calico-node-d1l5b 2/2 Running 0
kube-system   calico-node-sp9ps                          2/2    Running  0         6m
kube-system   coredns-1187388186-dkh3o                   1/1    Running  0         6m
kube-system   coredns-1187388186-zj5dl                   1/1    Running  0         6m
kube-system   kube-apiserver-zppls                       1/1    Running  0         6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1    Running  0         6m
kube-system   kube-controller-manager-3271970485-h90v8   1/1    Running  1         6m
kube-system   kube-apiserver-controller-0                1/1    Running  0         6m
kube-system   kube-controller-manager-controller-0       1/1    Running  0         6m
kube-system   kube-proxy-117v6                           1/1    Running  0         6m
kube-system   kube-proxy-9886n                           1/1    Running  0         6m
kube-system   kube-proxy-njn47                           1/1    Running  0         6m
kube-system   kube-scheduler-3895335239-5x87r            1/1    Running  0         6m
kube-system   kube-scheduler-3895335239-bzrrt            1/1    Running  1         6m
kube-system   pod-checkpointer-l6lrt                     1/1    Running  0         6m
kube-system   kube-scheduler-controller-0                1/1    Running  0         6m
```

## Going Further
@ -227,5 +223,5 @@ Check the list of valid [machine types](https://cloud.google.com/compute/docs/ma

#### Preemption

Add `worker_preemeptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.`
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.`

@ -3,11 +3,11 @@
!!! danger
    Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.

In this tutorial, we'll create a Kubernetes v1.15.2 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.16.0 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -21,7 +21,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -52,7 +52,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf

```tf
provider "aws" {
  version = "2.19.0"
  version = "2.29.0"
  region  = "us-east-1" # MUST be us-east-1 right now!
  shared_credentials_file = "/home/user/.config/aws/credentials"
}
@ -94,7 +94,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@ -121,9 +121,9 @@ Apply the changes to create the cluster.
```sh
$ terraform apply
...
module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143)
module.aws-tempest.null_resource.bootstrap: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootstrap: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootstrap: Creation complete after 5m8s (ID: 3961816482286168143)

Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
```
@ -137,32 +137,28 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME           STATUS  ROLES              AGE  VERSION
ip-10-0-3-155  Ready   controller,master  10m  v1.15.2
ip-10-0-26-65  Ready   node               10m  v1.15.2
ip-10-0-41-21  Ready   node               10m  v1.15.2
NAME           STATUS  ROLES   AGE  VERSION
ip-10-0-3-155  Ready   <none>  10m  v1.16.0
ip-10-0-26-65  Ready   <none>  10m  v1.16.0
ip-10-0-41-21  Ready   <none>  10m  v1.16.0
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY  STATUS   RESTARTS  AGE
kube-system   calico-node-1m5bf                          2/2    Running  0         34m
kube-system   calico-node-7jmr1                          2/2    Running  0         34m
kube-system   calico-node-bknc8                          2/2    Running  0         34m
kube-system   coredns-1187388186-wx1lg                   1/1    Running  0         34m
kube-system   coredns-1187388186-qjnvp                   1/1    Running  0         34m
kube-system   kube-apiserver-4mjbk                       1/1    Running  0         34m
kube-system   kube-controller-manager-3597210155-j2jbt   1/1    Running  1         34m
kube-system   kube-controller-manager-3597210155-j7g7x   1/1    Running  0         34m
kube-system   kube-proxy-14wxv                           1/1    Running  0         34m
kube-system   kube-proxy-9vxh2                           1/1    Running  0         34m
kube-system   kube-proxy-sbbsh                           1/1    Running  0         34m
kube-system   kube-scheduler-3359497473-5plhf            1/1    Running  0         34m
kube-system   kube-scheduler-3359497473-r7zg7            1/1    Running  1         34m
kube-system   pod-checkpointer-4kxtl                     1/1    Running  0         34m
kube-system   pod-checkpointer-4kxtl-ip-10-0-3-155       1/1    Running  0         33m
NAMESPACE     NAME                                       READY  STATUS   RESTARTS  AGE
kube-system   calico-node-1m5bf                          2/2    Running  0         34m
kube-system   calico-node-7jmr1                          2/2    Running  0         34m
kube-system   calico-node-bknc8                          2/2    Running  0         34m
kube-system   coredns-1187388186-wx1lg                   1/1    Running  0         34m
kube-system   coredns-1187388186-qjnvp                   1/1    Running  0         34m
kube-system   kube-apiserver-ip-10-0-3-155               1/1    Running  0         34m
kube-system   kube-controller-manager-ip-10-0-3-155      1/1    Running  0         34m
kube-system   kube-proxy-14wxv                           1/1    Running  0         34m
kube-system   kube-proxy-9vxh2                           1/1    Running  0         34m
kube-system   kube-proxy-sbbsh                           1/1    Running  0         34m
kube-system   kube-scheduler-ip-10-0-3-155               1/1    Running  1         34m
```

## Going Further
@ -3,11 +3,11 @@
!!! danger
    Typhoon for Fedora CoreOS is an early preview! Fedora CoreOS itself is a preview! Expect bugs and design shifts. Please help both projects solve problems. Report Fedora CoreOS bugs to [Fedora](https://github.com/coreos/fedora-coreos-tracker/issues). Report Typhoon issues to Typhoon.

In this tutorial, we'll network boot and provision a Kubernetes v1.15.2 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.16.0 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -114,7 +114,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.12.x on your sys

```sh
$ terraform version
Terraform v0.12.2
Terraform v0.12.7
```

Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.
@ -169,8 +169,8 @@ module "bare-metal-mercury" {
  cluster_name           = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com"
  os_stream              = "testing"
  os_version             = "30.20190716.1"
  cached_install         = false
  os_version             = "30.20190801.0"
  cached_install         = "true"

  # configuration
  k8s_domain_name = "node1.example.com"
@ -200,7 +200,7 @@ Reference the [variables docs](#variables) or the [variables.tf](https://github.

## ssh-agent

Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
@ -222,7 +222,7 @@ $ terraform plan
Plan: 55 to add, 0 to change, 0 to destroy.
```

Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
Apply the changes. Terraform will generate bootstrap assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.

```sh
$ terraform apply
@ -251,14 +251,14 @@ Machines will network boot, install Fedora CoreOS to disk, reboot into the disk

### Bootstrap

Wait for the `bootkube-start` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
Wait for the `bootstrap` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.

```
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootstrap: Creation complete (ID: 5441741360626669024)

Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
@ -267,13 +267,12 @@ To watch the bootstrap process in detail, SSH to the first controller and journa

```
$ ssh core@node1.example.com
$ journalctl -f -u bootkube
bootkube[5]: Pod Status: pod-checkpointer Running
bootkube[5]: Pod Status: kube-apiserver Running
bootkube[5]: Pod Status: kube-scheduler Running
bootkube[5]: Pod Status: kube-controller-manager Running
bootkube[5]: All self-hosted control plane components successfully started
bootkube[5]: Tearing down temporary bootstrap control plane...
$ journalctl -f -u bootstrap
podman[1750]: The connection to the server cluster.example.com:6443 was refused - did you specify the right host or port?
podman[1750]: Waiting for static pod control plane
...
podman[1750]: serviceaccount/calico-node unchanged
systemd[1]: Started Kubernetes control plane.
```

## Verify
@ -283,10 +282,10 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME               STATUS  ROLES              AGE  VERSION
node1.example.com  Ready   controller,master  10m  v1.15.2
node2.example.com  Ready   node               10m  v1.15.2
node3.example.com  Ready   node               10m  v1.15.2
NAME               STATUS  ROLES   AGE  VERSION
node1.example.com  Ready   <none>  10m  v1.16.0
node2.example.com  Ready   <none>  10m  v1.16.0
node3.example.com  Ready   <none>  10m  v1.16.0
```

List the pods.
@ -299,16 +298,12 @@ kube-system calico-node-gnjrm 2/2 Running 0
kube-system   calico-node-llbgt                           2/2    Running  0         11m
kube-system   coredns-1187388186-dj3pd                    1/1    Running  0         11m
kube-system   coredns-1187388186-mx9rt                    1/1    Running  0         11m
kube-system   kube-apiserver-7336w                        1/1    Running  0         11m
kube-system   kube-controller-manager-3271970485-b9chx    1/1    Running  0         11m
kube-system   kube-controller-manager-3271970485-v30js    1/1    Running  1         11m
kube-system   kube-apiserver-node1.example.com            1/1    Running  0         11m
kube-system   kube-controller-manager-node1.example.com   1/1    Running  1         11m
kube-system   kube-proxy-50sd4                            1/1    Running  0         11m
kube-system   kube-proxy-bczhp                            1/1    Running  0         11m
kube-system   kube-proxy-mp2fw                            1/1    Running  0         11m
kube-system   kube-scheduler-3895335239-fd3l7             1/1    Running  1         11m
kube-system   kube-scheduler-3895335239-hfjv0             1/1    Running  0         11m
kube-system   pod-checkpointer-wf65d                      1/1    Running  0         11m
kube-system   pod-checkpointer-wf65d-node1.example.com    1/1    Running  0         11m
kube-system   kube-scheduler-node1.example.com            1/1    Running  0         11m
```

## Going Further
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.16.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](cl/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
@ -47,7 +47,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.15.2"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.16.0"

  # Google Cloud
  cluster_name = "yavin"
@ -79,10 +79,10 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                       ROLES              STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  controller,master  Ready   6m   v1.15.2
yavin-worker-jrbf.c.example-com.internal   node               Ready   5m   v1.15.2
yavin-worker-mzdm.c.example-com.internal   node               Ready   5m   v1.15.2
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.16.0
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.16.0
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.16.0
```

List the pods.
@ -95,16 +95,12 @@ kube-system calico-node-d1l5b 2/2 Running 0
kube-system   calico-node-sp9ps                          2/2    Running  0         6m
kube-system   coredns-1187388186-dkh3o                   1/1    Running  0         6m
kube-system   coredns-1187388186-zj5dl                   1/1    Running  0         6m
kube-system   kube-apiserver-zppls                       1/1    Running  0         6m
kube-system   kube-controller-manager-3271970485-gh9kt   1/1    Running  0         6m
kube-system   kube-controller-manager-3271970485-h90v8   1/1    Running  1         6m
kube-system   kube-apiserver-controller-0                1/1    Running  0         6m
kube-system   kube-controller-manager-controller-0       1/1    Running  0         6m
kube-system   kube-proxy-117v6                           1/1    Running  0         6m
kube-system   kube-proxy-9886n                           1/1    Running  0         6m
kube-system   kube-proxy-njn47                           1/1    Running  0         6m
kube-system   kube-scheduler-3895335239-5x87r            1/1    Running  0         6m
kube-system   kube-scheduler-3895335239-bzrrt            1/1    Running  1         6m
kube-system   pod-checkpointer-l6lrt                     1/1    Running  0         6m
kube-system   pod-checkpointer-l6lrt-controller-0        1/1    Running  0         6m
kube-system   kube-scheduler-controller-0                1/1    Running  0         6m
```

## Help
@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}

module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.15.2"
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.16.0"
  ...
}
```
@ -110,7 +110,7 @@ Apply complete! Resources: 0 added, 0 changed, 55 destroyed.

#### In-place Edits

Typhoon uses a self-hosted Kubernetes control plane which allows certain manifest upgrades to be performed in-place. Components like `apiserver`, `controller-manager`, `scheduler`, `flannel`/`calico`, `coredns`, and `kube-proxy` are run on Kubernetes itself and can be edited via `kubectl`. If you're interested, see the bootkube [upgrade docs](https://github.com/kubernetes-incubator/bootkube/blob/master/Documentation/upgrading.md).
Typhoon uses a static pod Kubernetes control plane which allows certain manifest upgrades to be performed in-place. Components like `kube-apiserver`, `kube-controller-manager`, and `kube-scheduler` are run as static pods. Components `flannel`/`calico`, `coredns`, and `kube-proxy` are scheduled on Kubernetes and can be edited via `kubectl`.

In certain scenarios, in-place edits can be useful for quickly rolling out security patches (e.g. bumping `coredns`) or prioritizing speed over the safety of a proper cluster re-provision and transition.
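For example, rolling out a patched CoreDNS in-place might look like the following; a sketch only, assuming the Deployment and its container are both named `coredns`, as in the typical manifests:

```sh
# Bump the CoreDNS image in-place (hypothetical target version)
kubectl -n kube-system set image deployment/coredns coredns=k8s.gcr.io/coredns:1.6.2

# Watch the rolling update complete
kubectl -n kube-system rollout status deployment/coredns
```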
@ -279,15 +279,15 @@ Typhoon modules have been adapted for Terraform v0.12. Provider plugins requirem

| Typhoon Release   | Terraform version   |
|-------------------|---------------------|
| v1.15.2 - ?       | v0.12.x             |
| v1.10.3 - v1.15.2 | v0.11.x             |
| v1.16.0 - ?       | v0.12.x             |
| v1.10.3 - v1.16.0 | v0.11.x             |
| v1.9.2 - v1.10.2  | v0.10.4+ or v0.11.x |
| v1.7.3 - v1.9.1   | v0.10.x             |
| v1.6.4 - v1.7.2   | v0.9.x              |

### New users

New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.15.2+ without issue.
New users can start with Terraform v0.12.x and follow the docs for Typhoon v1.16.0+ without issue.

### Existing users

@ -404,7 +404,7 @@ tree .
└── infraB <- new Terraform v0.12.x configs
```

Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.15.2+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud)) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.
Define Typhoon clusters in the new config directory using Terraform v0.12 syntax. Follow the Typhoon v1.16.0+ docs (e.g. use `terraform12` in the `infraB` dir). See [AWS](/cl/aws), [Azure](/cl/azure), [Bare-Metal](/cl/bare-metal), [Digital Ocean](/cl/digital-ocean), or [Google-Cloud](/cl/google-cloud)) to create new clusters. Follow the usual [upgrade](/topics/maintenance/#upgrades) process to apply workloads and shift traffic. Later, switch back to the old config directory and deprovision clusters with Terraform v0.11.

```shell
terraform12 init
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.15.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.16.0 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c21da0224984493e92dd2dc7bb3b755c564852fc"
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=539b725093c8cd94ba46603adb25ac5280562ec8"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@ -7,7 +7,7 @@ systemd:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.3.13"
            Environment="ETCD_IMAGE_TAG=v3.4.0"
            Environment="ETCD_NAME=${etcd_name}"
            Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -64,11 +64,9 @@ systemd:
          --mount volume=var-log,target=/var/log \
          --hosts-entry=host \
          --insecure-options=image"
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@ -86,8 +84,8 @@ systemd:
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node-role.kubernetes.io/master \
          --node-labels=node-role.kubernetes.io/controller="true" \
          --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
          --read-only-port=0 \
@ -97,17 +95,28 @@ systemd:
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: bootkube.service
    - name: bootstrap.service
      contents: |
        [Unit]
        Description=Bootstrap a Kubernetes cluster
        ConditionPathExists=!/opt/bootkube/init_bootkube.done
        Description=Kubernetes control plane
        ConditionPathExists=!/opt/bootstrap/bootstrap.done
        [Service]
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootkube
        ExecStart=/opt/bootkube/bootkube-start
        ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
        WorkingDirectory=/opt/bootstrap
        ExecStartPre=-/usr/bin/bash -c 'set -x && [ -n "$(ls /opt/bootstrap/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootstrap/assets/manifests-*/* /opt/bootstrap/assets/manifests && rm -rf /opt/bootstrap/assets/manifests-*'
        ExecStart=/usr/bin/rkt run \
          --trust-keys-from-https \
          --volume assets,kind=host,source=/opt/bootstrap/assets \
          --mount volume=assets,target=/assets \
          --volume script,kind=host,source=/opt/bootstrap/apply \
          --mount volume=script,target=/apply \
          --insecure-options=image \
          docker://k8s.gcr.io/hyperkube:v1.16.0 \
          --net=host \
          --dns=host \
          --exec=/apply
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        [Install]
        WantedBy=multi-user.target
storage:
@ -124,37 +133,27 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.15.2
          KUBELET_IMAGE_TAG=v1.16.0
    - path: /opt/bootstrap/apply
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -e
          export KUBECONFIG=/assets/auth/kubeconfig
          until kubectl version; do
            echo "Waiting for static pod control plane"
            sleep 5
          done
          until kubectl apply -f /assets/manifests -R; do
            echo "Retry applying manifests"
            sleep 5
          done
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /opt/bootkube/bootkube-start
      filesystem: root
      mode: 0544
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          #!/bin/bash
          # Wrapper for bootkube start
          set -e
          # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
            --volume assets,kind=host,source=/opt/bootkube/assets \
            --mount volume=assets,target=/assets \
            --volume bootstrap,kind=host,source=/etc/kubernetes \
            --mount volume=bootstrap,target=/etc/kubernetes \
            $${RKT_OPTS} \
            quay.io/coreos/bootkube:v0.14.0 \
            --net=host \
            --dns=host \
            --exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
  users:
    - name: core
|
@ -85,7 +85,7 @@ data "template_file" "controller-configs" {
|
||||
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
|
||||
etcd_initial_cluster = join(",", data.template_file.etcds.*.rendered)
|
||||
kubeconfig = indent(10, module.bootkube.kubeconfig-kubelet)
|
||||
kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
|
||||
ssh_authorized_key = var.ssh_authorized_key
|
||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
|
@ -48,6 +48,20 @@ resource "google_compute_firewall" "internal-etcd-metrics" {
|
||||
target_tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
|
||||
resource "google_compute_firewall" "internal-kube-metrics" {
|
||||
name = "${var.cluster_name}-internal-kube-metrics"
|
||||
network = google_compute_network.network.name
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [10251, 10252]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "allow-apiserver" {
|
||||
name = "${var.cluster_name}-allow-apiserver"
|
||||
network = google_compute_network.network.name
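The `internal-kube-metrics` rule above opens the insecure metrics ports served by kube-scheduler (10251) and kube-controller-manager (10252) on controllers. A reachability sketch from a worker node, assuming a hypothetical controller internal IP of `10.128.0.2`:

```sh
# Scrape a few lines of kube-scheduler metrics from a controller
curl -s http://10.128.0.2:10251/metrics | head -n 5

# Likewise for kube-controller-manager
curl -s http://10.128.0.2:10252/metrics | head -n 5
```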
@ -1,5 +1,5 @@
output "kubeconfig-admin" {
  value = module.bootkube.kubeconfig-admin
  value = module.bootstrap.kubeconfig-admin
}

# Outputs for Kubernetes Ingress
@ -21,7 +21,7 @@ output "network_name" {
}

output "kubeconfig" {
  value = module.bootkube.kubeconfig-kubelet
  value = module.bootstrap.kubeconfig-kubelet
}

# Outputs for custom firewalling
@ -1,48 +1,57 @@
# Secure copy etcd TLS assets to controllers.
# Secure copy assets to controllers.
resource "null_resource" "copy-controller-secrets" {
  count = var.controller_count

  depends_on = [
    module.bootstrap,
  ]

  connection {
    type    = "ssh"
    host    = element(local.controllers_ipv4_public, count.index)
    host    = local.controllers_ipv4_public[count.index]
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_ca_cert
    content     = module.bootstrap.etcd_ca_cert
    destination = "$HOME/etcd-client-ca.crt"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_client_cert
    content     = module.bootstrap.etcd_client_cert
    destination = "$HOME/etcd-client.crt"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_client_key
    content     = module.bootstrap.etcd_client_key
    destination = "$HOME/etcd-client.key"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_server_cert
    content     = module.bootstrap.etcd_server_cert
    destination = "$HOME/etcd-server.crt"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_server_key
    content     = module.bootstrap.etcd_server_key
    destination = "$HOME/etcd-server.key"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_peer_cert
    content     = module.bootstrap.etcd_peer_cert
    destination = "$HOME/etcd-peer.crt"
  }

  provisioner "file" {
    content     = module.bootkube.etcd_peer_key
    content     = module.bootstrap.etcd_peer_key
    destination = "$HOME/etcd-peer.key"
  }

  provisioner "file" {
    source      = var.asset_dir
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
@ -56,36 +65,34 @@ resource "null_resource" "copy-controller-secrets" {
      "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
      "sudo chown -R etcd:etcd /etc/ssl/etcd",
      "sudo chmod -R 500 /etc/ssl/etcd",
      "sudo mv $HOME/assets /opt/bootstrap/assets",
      "sudo mkdir -p /etc/kubernetes/manifests",
      "sudo mkdir -p /etc/kubernetes/bootstrap-secrets",
      "sudo cp -r /opt/bootstrap/assets/tls/* /etc/kubernetes/bootstrap-secrets/",
      "sudo cp /opt/bootstrap/assets/auth/kubeconfig /etc/kubernetes/bootstrap-secrets/",
      "sudo cp -r /opt/bootstrap/assets/static-manifests/* /etc/kubernetes/manifests/",
    ]
  }
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
  depends_on = [
    module.bootkube,
    null_resource.copy-controller-secrets,
    module.workers,
    google_dns_record_set.apiserver,
    null_resource.copy-controller-secrets,
  ]

  connection {
    type    = "ssh"
    host    = element(local.controllers_ipv4_public, 0)
    host    = local.controllers_ipv4_public[0]
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    source      = var.asset_dir
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/assets /opt/bootkube",
      "sudo systemctl start bootkube",
      "sudo systemctl start bootstrap",
    ]
  }
}

@ -13,7 +13,7 @@ module "workers" {
  preemptible = var.worker_preemptible

  # configuration
  kubeconfig            = module.bootkube.kubeconfig-kubelet
  kubeconfig            = module.bootstrap.kubeconfig-kubelet
  ssh_authorized_key    = var.ssh_authorized_key
  service_cidr          = var.service_cidr
  cluster_domain_suffix = var.cluster_domain_suffix

@ -39,9 +39,9 @@ systemd:
          --mount volume=var-log,target=/var/log \
          --hosts-entry=host \
          --insecure-options=image"
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
@ -59,7 +59,7 @@ systemd:
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node-role.kubernetes.io/node \
          --node-labels=node.kubernetes.io/node \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
@ -94,7 +94,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.15.2
          KUBELET_IMAGE_TAG=v1.16.0
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
@ -112,7 +112,7 @@ storage:
            --volume config,kind=host,source=/etc/kubernetes \
            --mount volume=config,target=/etc/kubernetes \
            --insecure-options=image \
            docker://k8s.gcr.io/hyperkube:v1.15.2 \
            docker://k8s.gcr.io/hyperkube:v1.16.0 \
            --net=host \
            --dns=host \
            --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@ -1,4 +1,4 @@
mkdocs==1.0.4
mkdocs-material==4.4.0
mkdocs-material==4.4.2
pygments==2.2.0
pymdown-extensions==5.0.0
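These pins drive the project documentation build; a minimal local sketch, assuming a checkout with `mkdocs.yml` at the repository root:

```sh
# Install the pinned docs toolchain and preview the site locally
pip install -r requirements.txt
mkdocs serve
```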