Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-03 12:31:33 +02:00)

Compare commits (29 commits):

dbdc3fc850, e00f97c578, f7ebdf475d, 716dfe4d17, edc250d62a, db64ce3312,
7c327b8bf4, e6720cf738, 844f380b4e, 13beb13aab, 90c4a7483d, 4e7dfc115d,
ec5ea51141, d8d524d10b, 02cd8eb8d3, 84d6cfe7b3, 3352388fe6, 915f89d3c8,
f40f60b83c, 6f958d7577, ee31074679, 97517fa7f3, 18502d64d6, a3349b5c68,
74dc6b0bf9, fd1de27aef, 93de7506ef, def445a344, 8464b258d8

CHANGES.md (62 changes)

@@ -2,7 +2,67 @@
 
 Notable changes between versions.
 
 ## Latest
 
+## v1.11.2
+
+* Kubernetes [v1.11.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112)
+* Update etcd from v3.3.8 to [v3.3.9](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.3.md#v339-2018-07-24)
+* Use kubernetes-incubator/bootkube v0.13.0
+* Fix Fedora Atomic modules' Kubelet version ([#270](https://github.com/poseidon/typhoon/issues/270))
+
+#### Bare-Metal
+
+* Introduce [Container Linux Config snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) on bare-metal
+  * Validate and additively merge custom Container Linux Configs during terraform plan
+  * Define files, systemd units, dropins, networkd configs, mounts, users, and more
+* [Require](https://typhoon.psdn.io/cl/bare-metal/#terraform-setup) `terraform-provider-ct` plugin v0.2.1 (action required!)
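
For illustration, a minimal sketch of how the new snippets mechanism is used (the cluster name matches examples later in this page; the snippet file name is hypothetical):

```tf
module "bare-metal-mercury" {
  # ...existing bare-metal cluster arguments...

  # Optional map from machine names to lists of Container Linux Config
  # snippets, validated and additively merged during terraform plan.
  clc_snippets = {
    node2 = ["${file("./snippets/worker-tuning.yaml")}"]
  }
}
```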
+
+#### Addons
+
+* Update nginx-ingress from 0.16.2 to 0.17.1
+* Add nginx-ingress manifests for bare-metal
+* Update Grafana from 5.2.1 to 5.2.2
+* Update heapster from v1.5.3 to v1.5.4
+
+## v1.11.1
+
+* Kubernetes [v1.11.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1111)
+
+#### Addons
+
+* Update Prometheus from v2.3.1 to v2.3.2
+
+#### Errata
+
+* Fedora Atomic modules shipped with Kubelet v1.11.0, instead of v1.11.1. Fixed in [#270](https://github.com/poseidon/typhoon/issues/270).
+
+## v1.11.0
+
+* Kubernetes [v1.11.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1110)
+* Force apiserver to stop listening on `127.0.0.1:8080`
+* Replace `kube-dns` with [CoreDNS](https://coredns.io/) ([#261](https://github.com/poseidon/typhoon/pull/261))
+  * Edit the `coredns` ConfigMap to [customize](https://coredns.io/plugins/)
+  * CoreDNS doesn't use a resizer. For large clusters, scaling may be required.
+
+#### AWS
+
+* Update from Fedora Atomic 27 to 28 ([#258](https://github.com/poseidon/typhoon/pull/258))
+
+#### Bare-Metal
+
+* Update from Fedora Atomic 27 to 28 ([#263](https://github.com/poseidon/typhoon/pull/263))
+
+#### Google
+
+* Promote Google Cloud to stable
+* Update from Fedora Atomic 27 to 28 ([#259](https://github.com/poseidon/typhoon/pull/259))
+* Remove `ingress_static_ip` module output. Use `ingress_static_ipv4`.
+* Remove `controllers_ipv4_public` module output.
+
+#### Addons
+
+* Update nginx-ingress from 0.15.0 to 0.16.2
+* Update Grafana from 5.1.4 to [5.2.1](http://docs.grafana.org/guides/whats-new-in-v5-2/)
+* Update heapster from v1.5.2 to v1.5.3
+
 ## v1.10.5

README.md (14 changes)

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@@ -29,7 +29,7 @@ Typhoon provides a Terraform Module for each supported operating system and plat
 | Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
 | Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
 | Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
-| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
+| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
 | Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
 
 The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
@@ -46,7 +46,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     google = "google.default"
@@ -88,9 +88,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME                                       STATUS   AGE   VERSION
-yavin-controller-0.c.example-com.internal  Ready    6m    v1.10.5
-yavin-worker-jrbf.c.example-com.internal   Ready    5m    v1.10.5
-yavin-worker-mzdm.c.example-com.internal   Ready    5m    v1.10.5
+yavin-controller-0.c.example-com.internal  Ready    6m    v1.11.2
+yavin-worker-jrbf.c.example-com.internal   Ready    5m    v1.11.2
+yavin-worker-mzdm.c.example-com.internal   Ready    5m    v1.11.2
 ```
 
 List the pods.
@@ -101,10 +101,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system   calico-node-1cs8z                         2/2   Running   0   6m
 kube-system   calico-node-d1l5b                         2/2   Running   0   6m
 kube-system   calico-node-sp9ps                         2/2   Running   0   6m
+kube-system   coredns-1187388186-zj5dl                  1/1   Running   0   6m
 kube-system   kube-apiserver-zppls                      1/1   Running   0   6m
 kube-system   kube-controller-manager-3271970485-gh9kt  1/1   Running   0   6m
 kube-system   kube-controller-manager-3271970485-h90v8  1/1   Running   1   6m
-kube-system   kube-dns-1187388186-zj5dl                 3/3   Running   0   6m
 kube-system   kube-proxy-117v6                          1/1   Running   0   6m
 kube-system   kube-proxy-9886n                          1/1   Running   0   6m
 kube-system   kube-proxy-njn47                          1/1   Running   0   6m

@@ -21,7 +21,7 @@ spec:
     spec:
       containers:
         - name: grafana
-          image: grafana/grafana:5.1.4
+          image: grafana/grafana:5.2.2
           env:
             - name: GF_SERVER_HTTP_PORT
               value: "8080"

@@ -14,11 +14,13 @@ spec:
       labels:
         name: heapster
        phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: heapster
       containers:
         - name: heapster
-          image: k8s.gcr.io/heapster-amd64:v1.5.2
+          image: k8s.gcr.io/heapster-amd64:v1.5.4
           command:
             - /heapster
             - --source=kubernetes.summary_api:''

@@ -22,7 +22,7 @@ spec:
         node-role.kubernetes.io/node: ""
       containers:
        - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
           args:
             - /nginx-ingress-controller
             - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,6 +67,11 @@ spec:
             successThreshold: 1
             timeoutSeconds: 1
           securityContext:
-            runAsNonRoot: false
+            capabilities:
+              add:
+                - NET_BIND_SERVICE
+              drop:
+                - ALL
+            runAsUser: 33 # www-data
       restartPolicy: Always
       terminationGracePeriodSeconds: 60
addons/nginx-ingress/bare-metal/0-namespace.yaml (new file, 6 lines)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
  labels:
    name: ingress

@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      name: default-backend
      phase: prod
  template:
    metadata:
      labels:
        name: default-backend
        phase: prod
    spec:
      containers:
        - name: default-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: k8s.gcr.io/defaultbackend:1.4
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
      terminationGracePeriodSeconds: 60

addons/nginx-ingress/bare-metal/default-backend/service.yaml (new file, 15 lines)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: ingress
spec:
  type: ClusterIP
  selector:
    name: default-backend
    phase: prod
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080

addons/nginx-ingress/bare-metal/deployment.yaml (new file, 73 lines)

@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller-public
  namespace: ingress
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      name: ingress-controller-public
      phase: prod
  template:
    metadata:
      labels:
        name: ingress-controller-public
        phase: prod
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-backend
            - --ingress-class=public
          # use downward API
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: health
              containerPort: 10254
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
            timeoutSeconds: 1
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
  - kind: ServiceAccount
    namespace: ingress
    name: default

addons/nginx-ingress/bare-metal/rbac/cluster-role.yaml (new file, 51 lines)

@@ -0,0 +1,51 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

addons/nginx-ingress/bare-metal/rbac/role-binding.yaml (new file, 13 lines)

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress
subjects:
  - kind: ServiceAccount
    namespace: ingress
    name: default

addons/nginx-ingress/bare-metal/rbac/role.yaml (new file, 41 lines)

@@ -0,0 +1,41 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-public"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update

addons/nginx-ingress/bare-metal/service.yaml (new file, 23 lines)

@@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller-public
  namespace: ingress
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '10254'
spec:
  type: ClusterIP
  clusterIP: 10.3.0.12
  selector:
    name: ingress-controller-public
    phase: prod
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

@@ -22,7 +22,7 @@ spec:
         node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
           args:
             - /nginx-ingress-controller
             - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,6 +67,11 @@ spec:
             successThreshold: 1
             timeoutSeconds: 1
           securityContext:
-            runAsNonRoot: false
+            capabilities:
+              add:
+                - NET_BIND_SERVICE
+              drop:
+                - ALL
+            runAsUser: 33 # www-data
       restartPolicy: Always
       terminationGracePeriodSeconds: 60

@@ -22,7 +22,7 @@ spec:
         node-role.kubernetes.io/node: ""
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
           args:
             - /nginx-ingress-controller
             - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -67,6 +67,11 @@ spec:
             successThreshold: 1
             timeoutSeconds: 1
           securityContext:
-            runAsNonRoot: false
+            capabilities:
+              add:
+                - NET_BIND_SERVICE
+              drop:
+                - ALL
+            runAsUser: 33 # www-data
       restartPolicy: Always
       terminationGracePeriodSeconds: 60

@@ -18,7 +18,7 @@ spec:
       serviceAccountName: prometheus
       containers:
         - name: prometheus
-          image: quay.io/prometheus/prometheus:v2.3.1
+          image: quay.io/prometheus/prometheus:v2.3.2
           args:
             - --web.listen-address=0.0.0.0:9090
             - --config.file=/etc/prometheus/prometheus.yaml

@@ -496,6 +496,13 @@ data:
         annotations:
           description: device {{$labels.device}} on node {{$labels.instance}} is running
             full within the next 2 hours (mounted at {{$labels.mountpoint}})
+      - alert: InactiveRAIDDisk
+        expr: node_md_disks - node_md_disks_active > 0
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          description: '{{$value}} RAID disk(s) on node {{$labels.instance}} are inactive'
   prometheus.rules.yaml: |
     groups:
       - name: prometheus.rules

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -7,7 +7,7 @@ systemd:
     - name: 40-etcd-cluster.conf
       contents: |
         [Service]
-        Environment="ETCD_IMAGE_TAG=v3.3.8"
+        Environment="ETCD_IMAGE_TAG=v3.3.9"
         Environment="ETCD_NAME=${etcd_name}"
         Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
         Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"

@@ -74,7 +74,6 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \

@@ -123,7 +122,7 @@ storage:
       contents:
         inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:

@@ -144,7 +143,7 @@ storage:
          # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
+          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \

@@ -116,7 +116,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -124,7 +124,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
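
As a worked example of the reservation rule in the description above (a sketch, assuming the default Typhoon service range; the bare-metal ingress addon's fixed ClusterIP 10.3.0.12 sits in the same range):

```tf
module "google-cloud-yavin" {
  # ...other cluster arguments...

  # With this range, the reserved addresses work out to:
  #   10.3.0.1  -> kube-apiserver ClusterIP (the 1st IP)
  #   10.3.0.10 -> CoreDNS ClusterIP (the 10th IP)
  service_cidr = "10.3.0.0/16"
}
```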

@@ -47,7 +47,6 @@ systemd:
         ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
         ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
         ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
           --anonymous-auth=false \
           --authentication-token-webhook \
           --authorization-mode=Webhook \

@@ -93,7 +92,7 @@ storage:
       contents:
         inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:

@@ -111,7 +110,7 @@ storage:
            --volume config,kind=host,source=/etc/kubernetes \
            --mount volume=config,target=/etc/kubernetes \
            --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.10.5 \
+            docker://k8s.gcr.io/hyperkube:v1.11.2 \
            --net=host \
            --dns=host \
            --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -79,7 +79,7 @@ variable "ssh_authorized_key" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -87,7 +87,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)

@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {
 
   filter {
     name   = "name"
-    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
+    values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
   }
 }

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -51,8 +51,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -93,9 +92,9 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, cloud-metadata.service]
   - [systemctl, start, --no-block, kubelet.service]

@@ -98,7 +98,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -106,7 +106,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }

@@ -14,6 +14,6 @@ data "aws_ami" "fedora" {
 
   filter {
     name   = "name"
-    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
+    values = ["Fedora-AtomicHost-28-20180625.1.x86_64-*-gp2-*"]
   }
 }

@@ -30,8 +30,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -70,7 +69,7 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [systemctl, enable, cloud-metadata.service]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
   - [systemctl, start, --no-block, kubelet.service]
 users:
   - default

@@ -67,7 +67,7 @@ variable "ssh_authorized_key" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -75,7 +75,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${var.k8s_domain_name}"]

@@ -7,7 +7,7 @@ systemd:
     - name: 40-etcd-cluster.conf
      contents: |
        [Service]
-        Environment="ETCD_IMAGE_TAG=v3.3.8"
+        Environment="ETCD_IMAGE_TAG=v3.3.9"
        Environment="ETCD_NAME=${etcd_name}"
        Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
        Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"

@@ -82,7 +82,6 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \

@@ -124,7 +123,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
    - path: /etc/hostname
      filesystem: root
      mode: 0644

@@ -151,7 +150,7 @@ storage:
          # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
+          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \

@@ -55,7 +55,6 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \

@@ -85,7 +84,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
    - path: /etc/hostname
      filesystem: root
      mode: 0644
@@ -118,9 +118,18 @@ resource "matchbox_profile" "flatcar-install" {
 resource "matchbox_profile" "controllers" {
   count = "${length(var.controller_names)}"
   name  = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
-  container_linux_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  raw_ignition = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
 }
 
+data "ct_config" "controller-ignitions" {
+  count = "${length(var.controller_names)}"
+  content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  pretty_print = false
+
+  # Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
+  snippets = ["${local.clc_map[element(var.controller_names, count.index)]}"]
+}
+
 data "template_file" "controller-configs" {
   count = "${length(var.controller_names)}"
 
@@ -143,7 +152,16 @@ data "template_file" "controller-configs" {
 resource "matchbox_profile" "workers" {
   count = "${length(var.worker_names)}"
   name  = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
-  container_linux_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
+  raw_ignition = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
 }
 
+data "ct_config" "worker-ignitions" {
+  count = "${length(var.worker_names)}"
+  content = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
+  pretty_print = false
+
+  # Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
+  snippets = ["${local.clc_map[element(var.worker_names, count.index)]}"]
+}
+
 data "template_file" "worker-configs" {
@@ -161,3 +179,18 @@ data "template_file" "worker-configs" {
     networkd_content = "${length(var.worker_networkds) == 0 ? "" : element(concat(var.worker_networkds, list("")), count.index)}"
   }
 }
+
+locals {
+  # Hack to workaround https://github.com/hashicorp/terraform/issues/17251
+  # Default Container Linux config snippets map every node name to list("\n") so
+  # all lookups succeed
+  clc_defaults = "${zipmap(concat(var.controller_names, var.worker_names), chunklist(data.template_file.clc-default-snippets.*.rendered, 1))}"
+
+  # Union of the default and user specific snippets, later overrides prior.
+  clc_map = "${merge(local.clc_defaults, var.clc_snippets)}"
+}
+
+// Horrible hack to generate a Terraform list of node count length
+data "template_file" "clc-default-snippets" {
+  count = "${length(var.controller_names) + length(var.worker_names)}"
+  template = "\n"
+}
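
To make the workaround concrete, a hypothetical evaluation of these locals (node names invented):

```tf
# Suppose controller_names = ["node1"], worker_names = ["node2"], and the user
# set clc_snippets = { node2 = ["<snippet>"] }. Then:
#
#   clc_defaults = { node1 = ["\n"], node2 = ["\n"] }        # every node gets a no-op snippet
#   clc_map      = { node1 = ["\n"], node2 = ["<snippet>"] } # user entries override defaults
#
# so local.clc_map[name] resolves for every controller and worker, and the
# direct map lookups in the ct_config data sources above never fail.
```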

@@ -25,26 +25,38 @@ variable "os_version" {
 
 variable "controller_names" {
   type        = "list"
   description = "Ordered list of controller names (e.g. [node1])"
 }
 
 variable "controller_macs" {
   type        = "list"
   description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
 }
 
 variable "controller_domains" {
   type        = "list"
   description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
 }
 
 variable "worker_names" {
   type        = "list"
   description = "Ordered list of worker names (e.g. [node2, node3])"
 }
 
 variable "worker_macs" {
   type        = "list"
   description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
 }
 
 variable "worker_domains" {
   type        = "list"
   description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
 }
 
+variable "clc_snippets" {
+  type        = "map"
+  description = "Map from machine names to lists of Container Linux Config snippets"
+  default     = {}
+}
+
 # configuration
@@ -91,7 +103,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -101,7 +113,7 @@ EOD
 }
 
 # optional
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${var.k8s_domain_name}"]

@@ -36,8 +36,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -84,9 +83,9 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]

@@ -15,8 +15,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -60,7 +59,7 @@ runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
   - [hostnamectl, set-hostname, ${domain_name}]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:

@@ -1,5 +1,5 @@
 // Install Fedora to disk
-resource "matchbox_group" "fedora-install" {
+resource "matchbox_group" "install" {
   count = "${length(var.controller_names) + length(var.worker_names)}"
 
   name = "${format("fedora-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"

@@ -17,7 +17,7 @@ network --bootproto=dhcp --device=link --activate --onboot=on
 bootloader --timeout=1 --append="ds=nocloud\;seedfrom=/var/cloud-init/"
 services --enabled=cloud-init,cloud-init-local,cloud-config,cloud-final
 
-ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/27/x86_64/atomic-host --nogpg
+ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/28/x86_64/atomic-host --nogpg
 
 reboot

@@ -27,7 +27,7 @@ curl --retry 10 "${matchbox_http_endpoint}/generic?mac=${mac}&os=installed" -o /
 echo "instance-id: iid-local01" > /var/cloud-init/meta-data
 
 rm -f /etc/ostree/remotes.d/fedora-atomic.conf
-ostree remote add fedora-atomic https://kojipkgs.fedoraproject.org/atomic/27 --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary
+ostree remote add fedora-atomic https://dl.fedoraproject.org/atomic/repo/ --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-primary
 
 # lock root user
 passwd -l root

@@ -1,5 +1,5 @@
 locals {
-  default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/27"
+  default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/28"
   atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
 }

@@ -17,7 +17,7 @@ variable "atomic_assets_endpoint" {
   description = <<EOD
 HTTP endpoint serving the Fedora Atomic Host vmlinuz, initrd, os repo, and ostree repo (e.g. `http://example.com/some/path`).
 
-Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/27
+Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/28
 EOD
 }
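
As a sketch of overriding this default (the mirror URL is hypothetical; it must serve the layout the description requires):

```tf
module "bare-metal-mercury" {
  # ...other cluster arguments...

  # Serve the Fedora Atomic vmlinuz/initrd files and the os/repo directories
  # from a local mirror instead of ${matchbox_http_endpoint}/assets/fedora/28.
  atomic_assets_endpoint = "http://example.com/assets/fedora/28"
}
```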

@@ -26,26 +26,32 @@ EOD
 
 variable "controller_names" {
   type        = "list"
   description = "Ordered list of controller names (e.g. [node1])"
 }
 
 variable "controller_macs" {
   type        = "list"
   description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
 }
 
 variable "controller_domains" {
   type        = "list"
   description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
 }
 
 variable "worker_names" {
   type        = "list"
   description = "Ordered list of worker names (e.g. [node2, node3])"
 }
 
 variable "worker_macs" {
   type        = "list"
   description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
 }
 
 variable "worker_domains" {
   type        = "list"
   description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
 }
 
 # configuration
@@ -86,7 +92,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -94,7 +100,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -7,7 +7,7 @@ systemd:
     - name: 40-etcd-cluster.conf
      contents: |
        [Service]
-        Environment="ETCD_IMAGE_TAG=v3.3.8"
+        Environment="ETCD_IMAGE_TAG=v3.3.9"
        Environment="ETCD_NAME=${etcd_name}"
        Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
        Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"

@@ -85,7 +85,6 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \

@@ -129,7 +128,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:

@@ -150,7 +149,7 @@ storage:
          # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
+          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \

@@ -58,7 +58,6 @@ systemd:
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
-          --allow-privileged \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \

@@ -99,7 +98,7 @@ storage:
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.10.5
+          KUBELET_IMAGE_TAG=v1.11.2
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:

@@ -117,7 +116,7 @@ storage:
            --volume config,kind=host,source=/etc/kubernetes \
            --mount volume=config,target=/etc/kubernetes \
            --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.10.5 \
+            docker://k8s.gcr.io/hyperkube:v1.11.2 \
            --net=host \
            --dns=host \
            --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -80,7 +80,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -88,7 +88,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
 
   cluster_name = "${var.cluster_name}"
   api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@@ -51,8 +51,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -90,9 +89,9 @@ bootcmd:
   - [modprobe, ip_vs]
 runcmd:
   - [systemctl, daemon-reload]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
   - [systemctl, start, --no-block, etcd.service]
   - [systemctl, enable, cloud-metadata.service]
   - [systemctl, enable, kubelet.path]

@@ -30,8 +30,7 @@ write_files:
       RestartSec=10
   - path: /etc/kubernetes/kubelet.conf
     content: |
-      ARGS="--allow-privileged \
-      --anonymous-auth=false \
+      ARGS="--anonymous-auth=false \
       --authentication-token-webhook \
       --authorization-mode=Webhook \
       --client-ca-file=/etc/kubernetes/ca.crt \

@@ -67,7 +66,7 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, enable, cloud-metadata.service]
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
   - [systemctl, enable, kubelet.path]
   - [systemctl, start, --no-block, kubelet.path]
 users:

@@ -73,7 +73,7 @@ variable "pod_cidr" {
 variable "service_cidr" {
   description = <<EOD
 CIDR IPv4 range to assign Kubernetes services.
-The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
+The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
 EOD
 
   type = "string"
@@ -81,7 +81,7 @@ EOD
 }
 
 variable "cluster_domain_suffix" {
-  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
   type = "string"
   default = "cluster.local"
 }
@@ -80,7 +80,7 @@ app2.example.com -> 11.22.33.44
 app3.example.com -> 11.22.33.44
 ```
 
-Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ip`. For example, you might use Terraform to manage a Google Cloud DNS record:
+Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:
 
 ```tf
 resource "google_dns_record_set" "some-application" {
@@ -91,7 +91,7 @@ resource "google_dns_record_set" "some-application" {
   name = "app.example.com."
   type = "A"
   ttl  = 300
-  rrdatas = ["${module.google-cloud-yavin.ingress_static_ip}"]
+  rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
 }
 ```
 
@@ -101,11 +101,17 @@ On bare-metal, routing traffic to Ingress controller pods can be done in a number of ways.
 
 ### Equal-Cost Multi-Path
 
-Deploy the Nginx Ingress Controller as a deployment. Deploy the service with a fixed ClusterIP (e.g. 10.3.0.12) in the Kubernetes service IPv4 CIDR range. There is no need for a NodePort or for pods to bind host ports. Any node can proxy packets destined for the service's ClusterIP to a node which has a pod endpoint.
-
-Configure the network router or load balancer with a static route for the Kubernetes service range and set the next hop to a node. Repeat for each node and set the metric (i.e. cost) of each. Finally, DNAT traffic destined for the WAN on ports 80 or 443 to the service's fixed ClusterIP.
-
-Add a DNS record resolving to the WAN for each application.
+Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and default backend. The service should use a fixed ClusterIP (e.g. 10.3.0.12) in the Kubernetes service IPv4 CIDR range.
+
+```
+kubectl apply -R -f addons/nginx-ingress/bare-metal
+```
+
+There is no need for pods to use host networking or for the ingress service to use NodePort or LoadBalancer. Nodes already proxy packets destined for the service's ClusterIP to node(s) with a pod endpoint.
+
+Configure the network router or load balancer with a static route for the Kubernetes service range and set the next hop to a node. Repeat for each node, as desired, and set the metric (i.e. cost) of each. Finally, DNAT traffic destined for the WAN on ports 80 or 443 to the service's fixed ClusterIP.
+
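The fixed ClusterIP itself is set in the addon's YAML manifests. Purely as an illustrative sketch (assuming the Terraform `kubernetes` provider, an `ingress` namespace, and pods labeled `name: nginx-ingress`, none of which the source specifies), pinning a service IP looks like:

```tf
# Hypothetical: pin the ingress service to a fixed IP inside service_cidr.
resource "kubernetes_service" "nginx_ingress" {
  metadata {
    name      = "nginx-ingress-controller"
    namespace = "ingress"
  }

  spec {
    cluster_ip = "10.3.0.12"

    selector {
      name = "nginx-ingress"
    }

    port {
      name        = "http"
      port        = 80
      target_port = 80
    }
  }
}
```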
+For each application, add a DNS record resolving to the WAN(s).
 
 ```tf
 resource "google_dns_record_set" "some-application" {
@@ -69,7 +69,7 @@ View the Container Linux Config [format](https://coreos.com/os/docs/1576.4.0/con
 
 Write Container Linux Configs *snippets* as files in the repository where you keep Terraform configs for clusters (perhaps in a `clc` or `snippets` subdirectory). You may organize snippets in multiple files as desired, provided they are each valid.
 
-Define an [AWS](https://typhoon.psdn.io/aws/#cluster), [Google Cloud](https://typhoon.psdn.io/google-cloud/#cluster), or [Digital Ocean](https://typhoon.psdn.io/digital-ocean/#cluster) cluster and fill in the optional `controller_clc_snippets` or `worker_clc_snippets` fields.
+[AWS](/cl/aws/#cluster), [Google Cloud](/cl/google-cloud/#cluster), and [Digital Ocean](/cl/digital-ocean/#cluster) clusters allow populating a list of `controller_clc_snippets` or `worker_clc_snippets`.
 
 ```
 module "digital-ocean-nemo" {
@@ -89,6 +89,29 @@ module "digital-ocean-nemo" {
 }
 ```
 
+[Bare-Metal](/cl/bare-metal/#cluster) clusters allow different Container Linux snippets to be used for each node (since hardware may be heterogeneous). Populate the optional `clc_snippets` map variable with any controller or worker name keys and lists of snippets.
+
+```
+module "bare-metal-mercury" {
+  ...
+  controller_names = ["node1"]
+  worker_names = [
+    "node2",
+    "node3",
+  ]
+  clc_snippets = {
+    "node2" = [
+      "${file("./units/hello.yaml")}"
+    ]
+    "node3" = [
+      "${file("./units/world.yaml")}",
+      "${file("./units/hello.yaml")}",
+    ]
+  }
+  ...
+}
+```
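The referenced snippet files are ordinary Container Linux Configs. As a sketch only (the unit name and contents are invented for illustration, not taken from the source), `./units/hello.yaml` might hold a small systemd unit; it is inlined below via a Terraform local so the example stays in HCL:

```tf
# Hypothetical stand-in for file("./units/hello.yaml").
locals {
  hello_snippet = <<EOF
systemd:
  units:
    - name: hello.service
      enable: true
      contents: |
        [Unit]
        Description=Log a greeting at boot
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "Hello World"
        [Install]
        WantedBy=multi-user.target
EOF
}
```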
 
 Plan the resources to be created.
 
 ```
@@ -113,7 +136,7 @@ $ terraform apply
 Container Linux Configs (and the CoreOS Ignition system) create immutable infrastructure. Disk provisioning is performed only on first boot from disk. That means if you change a snippet used by an instance, Terraform will (correctly) try to destroy and recreate that instance. Be careful!
 
 !!! danger
-    Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](https://typhoon.psdn.io/topics/maintenance/#upgrades) clusters.
+    Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](/topics/maintenance/#upgrades) clusters.
 
 ### Fedora Atomic
 
@@ -130,5 +153,5 @@ module "digital-ocean-nemo" {
 }
 ```
 
-To customize lower-level Kubernetes control plane bootstrapping, see the [poseidon/bootkube-terraform](https://github.com/poseidon/bootkube-terraform) Terraform module.
+To customize lower-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootkube](https://github.com/poseidon/terraform-render-bootkube) Terraform module.
@@ -15,7 +15,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a wo
 
 ```tf
 module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.2"
 
   providers = {
     aws = "aws.default"
@@ -80,7 +80,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
 
 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.11.2"
 
   providers = {
     google = "google.default"
@@ -114,11 +114,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
 
 ```
 $ kubectl get nodes
 NAME                                           STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal      Ready   6m   v1.10.5
-yavin-worker-jrbf.c.example-com.internal       Ready   5m   v1.10.5
-yavin-worker-mzdm.c.example-com.internal       Ready   5m   v1.10.5
-yavin-16x-worker-jrbf.c.example-com.internal   Ready   3m   v1.10.5
-yavin-16x-worker-mzdm.c.example-com.internal   Ready   3m   v1.10.5
+yavin-controller-0.c.example-com.internal      Ready   6m   v1.11.2
+yavin-worker-jrbf.c.example-com.internal       Ready   5m   v1.11.2
+yavin-worker-mzdm.c.example-com.internal       Ready   5m   v1.11.2
+yavin-16x-worker-jrbf.c.example-com.internal   Ready   3m   v1.11.2
+yavin-16x-worker-mzdm.c.example-com.internal   Ready   3m   v1.11.2
 ```
 
 ### Variables
@@ -12,7 +12,7 @@ Cluster nodes provision themselves from a declarative configuration upfront. Nod
 
 #### Controllers
 
-Controller nodes are scheduled to run the Kubernetes `apiserver`, `scheduler`, `controller-manager`, `kube-dns`, and `kube-proxy`. A fully qualified domain name (e.g. cluster_name.domain.com) resolving to a network load balancer or round-robin DNS (depends on platform) is used to refer to the control plane.
+Controller nodes are scheduled to run the Kubernetes `apiserver`, `scheduler`, `controller-manager`, `coredns`, and `kube-proxy`. A fully qualified domain name (e.g. cluster_name.domain.com) resolving to a network load balancer or round-robin DNS (depends on platform) is used to refer to the control plane.
 
 #### Workers
 
@@ -3,11 +3,11 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Fedora Atomic.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on AWS with Fedora Atomic.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.
 
-Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
 
 ```tf
 module "aws-tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.2"
 
   providers = {
     aws = "aws.default"
@@ -156,9 +156,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
 $ kubectl get nodes
 NAME            STATUS  AGE  VERSION
-ip-10-0-12-221  Ready   34m  v1.10.5
-ip-10-0-19-112  Ready   34m  v1.10.5
-ip-10-0-4-22    Ready   34m  v1.10.5
+ip-10-0-12-221  Ready   34m  v1.11.2
+ip-10-0-19-112  Ready   34m  v1.11.2
+ip-10-0-4-22    Ready   34m  v1.11.2
 ```
 
 List the pods.
@@ -169,10 +169,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system calico-node-1m5bf                          2/2   Running   0   34m
 kube-system calico-node-7jmr1                          2/2   Running   0   34m
 kube-system calico-node-bknc8                          2/2   Running   0   34m
+kube-system coredns-1187388186-wx1lg                   1/1   Running   0   34m
 kube-system kube-apiserver-4mjbk                       1/1   Running   0   34m
 kube-system kube-controller-manager-3597210155-j2jbt   1/1   Running   1   34m
 kube-system kube-controller-manager-3597210155-j7g7x   1/1   Running   0   34m
-kube-system kube-dns-1187388186-wx1lg                  3/3   Running   0   34m
 kube-system kube-proxy-14wxv                           1/1   Running   0   34m
 kube-system kube-proxy-9vxh2                           1/1   Running   0   34m
 kube-system kube-proxy-sbbsh                           1/1   Running   0   34m
@@ -233,12 +233,9 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
 | host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
 
-!!! warning
-    Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
-
 !!! tip "MTU"
     If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
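The hunk above references `aws_route53_zone.zone-for-clusters.zone_id`. For context, such a zone (the zone name here is illustrative) is declared with:

```tf
# A Route53 DNS zone; cluster DNS records are created within it.
resource "aws_route53_zone" "zone-for-clusters" {
  name = "aws.example.com."
}
```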
@@ -3,11 +3,11 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll network boot and provision a Kubernetes v1.10.5 cluster on bare-metal with Fedora Atomic.
+In this tutorial, we'll network boot and provision a Kubernetes v1.11.2 cluster on bare-metal with Fedora Atomic.
 
 First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
 
-Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -121,16 +121,17 @@ sudo systemctl enable httpd --now
 Download the [Fedora Atomic](https://getfedora.org/en/atomic/download/) ISO which contains install files and add them to the serve directory.
 
 ```
-sudo mount -o loop,ro Fedora-Atomic-ostree-*.iso /mnt
-sudo mkdir -p /var/www/html/fedora/27
-sudo cp -av /mnt/* /var/www/html/fedora/27/
+sudo mount -o loop,ro Fedora-AtomicHost-ostree-*.iso /mnt
+sudo mkdir -p /var/www/html/fedora/28
+sudo cp -av /mnt/* /var/www/html/fedora/28/
 sudo umount /mnt
 ```
 
 Checkout the [fedora-atomic](https://pagure.io/fedora-atomic) ostree manifest repo.
 
 ```
 git clone https://pagure.io/fedora-atomic.git && cd fedora-atomic
-git checkout f27
+git checkout f28
 ```
 
 Compose an ostree repo from RPM sources.
@@ -145,12 +146,12 @@ sudo rpm-ostree compose tree --repo=repo fedora-atomic-host.json
 Serve the ostree `repo` as well.
 
 ```
-sudo cp -r repo /var/www/html/fedora/27/
-tree /var/www/html/fedora/27/
+sudo cp -r repo /var/www/html/fedora/28/
+tree /var/www/html/fedora/28/
 ├── images
 │   └── pxeboot
 │       ├── initrd.img
 │       └── vmlinuz
 ├── isolinux/
 ├── repo/
 ```
@@ -158,7 +159,7 @@ tree /var/www/html/fedora/27/
 Verify `vmlinuz`, `initrd.img`, and `repo` are accessible from the HTTP server (i.e. `atomic_assets_endpoint`).
 
 ```
-curl http://example.com/fedora/27/
+curl http://example.com/fedora/28/
 ```
 
 !!! note
@@ -234,7 +235,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete
 
 ```tf
 module "bare-metal-mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.2"
 
   providers = {
     local = "local.default"
@@ -246,7 +247,7 @@ module "bare-metal-mercury" {
   # bare-metal
   cluster_name           = "mercury"
   matchbox_http_endpoint = "http://matchbox.example.com"
-  atomic_assets_endpoint = "http://example.com/fedora/27"
+  atomic_assets_endpoint = "http://example.com/fedora/28"
 
   # configuration
   k8s_domain_name = "node1.example.com"
@@ -360,9 +361,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
 $ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
 $ kubectl get nodes
 NAME                STATUS  AGE  VERSION
-node1.example.com   Ready   11m  v1.10.5
-node2.example.com   Ready   11m  v1.10.5
-node3.example.com   Ready   11m  v1.10.5
+node1.example.com   Ready   11m  v1.11.2
+node2.example.com   Ready   11m  v1.11.2
+node3.example.com   Ready   11m  v1.11.2
 ```
 
 List the pods.
@@ -373,10 +374,10 @@ NAMESPACE NAME READY STATUS RES
 kube-system calico-node-6qp7f                          2/2   Running   1   11m
 kube-system calico-node-gnjrm                          2/2   Running   0   11m
 kube-system calico-node-llbgt                          2/2   Running   0   11m
+kube-system coredns-1187388186-mx9rt                   1/1   Running   0   11m
 kube-system kube-apiserver-7336w                       1/1   Running   0   11m
 kube-system kube-controller-manager-3271970485-b9chx   1/1   Running   0   11m
 kube-system kube-controller-manager-3271970485-v30js   1/1   Running   1   11m
-kube-system kube-dns-1187388186-mx9rt                  3/3   Running   0   11m
 kube-system kube-proxy-50sd4                           1/1   Running   0   11m
 kube-system kube-proxy-bczhp                           1/1   Running   0   11m
 kube-system kube-proxy-mp2fw                           1/1   Running   0   11m
@@ -400,7 +401,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
 |:-----|:------------|:--------|
 | cluster_name | Unique cluster name | mercury |
 | matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
-| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/27" |
+| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/28" |
 | k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
 | ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3Nz..." |
 | asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
@@ -419,6 +420,6 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
 | network_mtu | CNI interface MTU (calico-only) | 1480 | - |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 | kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
@@ -3,11 +3,11 @@
 !!! danger
     Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on DigitalOcean with Fedora Atomic.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on DigitalOcean with Fedora Atomic.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
 
-Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern
 
 ```tf
 module "digital-ocean-nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.2"
 
   providers = {
     digitalocean = "digitalocean.default"
@@ -152,19 +152,19 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
 $ kubectl get nodes
 NAME             STATUS  AGE  VERSION
-10.132.110.130   Ready   10m  v1.10.5
-10.132.115.81    Ready   10m  v1.10.5
-10.132.124.107   Ready   10m  v1.10.5
+10.132.110.130   Ready   10m  v1.11.2
+10.132.115.81    Ready   10m  v1.11.2
+10.132.124.107   Ready   10m  v1.11.2
 ```
 
 List the pods.
 
 ```
 NAMESPACE   NAME                                       READY  STATUS   RESTARTS  AGE
+kube-system coredns-1187388186-ld1j7                   1/1    Running  0         11m
 kube-system kube-apiserver-n10qr                       1/1    Running  0         11m
 kube-system kube-controller-manager-3271970485-37gtw   1/1    Running  1         11m
 kube-system kube-controller-manager-3271970485-p52t5   1/1    Running  0         11m
-kube-system kube-dns-1187388186-ld1j7                  3/3    Running  0         11m
 kube-system kube-flannel-1cq1v                         2/2    Running  0         11m
 kube-system kube-flannel-hq9t0                         2/2    Running  1         11m
 kube-system kube-flannel-v0g9w                         2/2    Running  0         11m
@@ -241,7 +241,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
 | worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
@@ -1,13 +1,13 @@
 # Google Cloud
 
 !!! danger
-    Typhoon for Fedora Atomic is very alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Some addons don't work yet. Expect rough edges and changes.
+    Typhoon for Fedora Atomic is alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes.
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on Google Compute Engine with Fedora Atomic.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on Google Compute Engine with Fedora Atomic.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
 
-Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -83,35 +83,37 @@ Additional configuration options are described in the `google` provider [docs](h
 
 Project Atomic does not publish official Fedora Atomic images to Google Cloud. However, Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.
 
-Download the Fedora Atomic 27 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.
+Download the Fedora Atomic 28 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.
 
 ```
-xz -d Fedora-Atomic-27-20180326.1.x86_64.raw.xz
+xz -d Fedora-AtomicHost-28-20180528.0.x86_64.raw.xz
 ```
 
 !!! warning
     Download the exact dated version shown in docs. Fedora has no official Atomic images for Google Cloud. We've verified specific versions and found others to have problems.
 
 Rename the image `disk.raw`. Gzip compress and tar the image.
 
 ```
-mv Fedora-Atomic-27-20180326.1.x86_64.raw disk.raw
-tar cvzf fedora-atomic-27.tar.gz disk.raw
+mv Fedora-AtomicHost-28-20180528.0.x86_64.raw disk.raw
+tar cvzf fedora-atomic-28.tar.gz disk.raw
 ```
 
 List available storage buckets and upload the tar.gz.
 
 ```
 gsutil list
-gsutil cp fedora-atomic-27.tar.gz gs://BUCKET_NAME
+gsutil cp fedora-atomic-28.tar.gz gs://BUCKET_NAME
 ```
 
 Create a Google Compute Engine image from the bucket file.
 
 ```
 gcloud compute images list
-gcloud compute images create fedora-atomic-27 --source-uri gs://BUCKET/fedora-atomic-27.tar.gz
+gcloud compute images create fedora-atomic-28 --source-uri gs://BUCKET/fedora-atomic-28.tar.gz
 ```
 
-Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-27).
+Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-28).
 
 ## Cluster
 
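The `gcloud compute images create` step can equivalently be expressed in Terraform. A sketch, assuming the bucket and file names used above:

```tf
# Hypothetical alternative to the gcloud CLI image import.
resource "google_compute_image" "fedora-atomic-28" {
  name = "fedora-atomic-28"

  raw_disk {
    source = "https://storage.googleapis.com/BUCKET_NAME/fedora-atomic-28.tar.gz"
  }
}
```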
@@ -119,7 +121,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne
 
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.11.2"
 
   providers = {
     google = "google.default"
@@ -138,7 +140,7 @@ module "google-cloud-yavin" {
   # configuration
   ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
   asset_dir          = "/home/user/.secrets/clusters/yavin"
-  os_image           = "MY-PROJECT_ID/fedora-atomic-27"
+  os_image           = "MY-PROJECT_ID/fedora-atomic-28"
 
   # optional
   worker_count = 2
@@ -195,9 +197,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME                                        STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   Ready   6m   v1.10.5
-yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.10.5
-yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.10.5
+yavin-controller-0.c.example-com.internal   Ready   6m   v1.11.2
+yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.11.2
+yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.11.2
 ```
 
 List the pods.
@@ -208,10 +210,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system calico-node-1cs8z                          2/2   Running   0   6m
 kube-system calico-node-d1l5b                          2/2   Running   0   6m
 kube-system calico-node-sp9ps                          2/2   Running   0   6m
+kube-system coredns-1187388186-zj5dl                   1/1   Running   0   6m
 kube-system kube-apiserver-zppls                       1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-gh9kt   1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-h90v8   1/1   Running   1   6m
-kube-system kube-dns-1187388186-zj5dl                  3/3   Running   0   6m
 kube-system kube-proxy-117v6                           1/1   Running   0   6m
 kube-system kube-proxy-9886n                           1/1   Running   0   6m
 kube-system kube-proxy-njn47                           1/1   Running   0   6m
@@ -236,7 +238,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-
 | region | Google Cloud region | "us-central1" |
 | dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
 | dns_zone_name | Google Cloud DNS zone name | "example-zone" |
-| os_image | Custom uploaded Fedora Atomic 27 image | "PROJECT-ID/fedora-atomic-27" |
+| os_image | Custom uploaded Fedora Atomic image | "PROJECT-ID/fedora-atomic-28" |
 | ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
 | asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
@@ -272,7 +274,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
 | networking | Choice of networking provider | "calico" | "calico" or "flannel" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
@@ -1,10 +1,10 @@
 # AWS
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Container Linux.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on AWS with Container Linux.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
 
-Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -96,7 +96,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
 
 ```tf
 module "aws-tempest" {
-  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     aws = "aws.default"
@@ -169,9 +169,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
 $ kubectl get nodes
 NAME            STATUS  AGE  VERSION
-ip-10-0-12-221  Ready   34m  v1.10.5
-ip-10-0-19-112  Ready   34m  v1.10.5
-ip-10-0-4-22    Ready   34m  v1.10.5
+ip-10-0-12-221  Ready   34m  v1.11.2
+ip-10-0-19-112  Ready   34m  v1.11.2
+ip-10-0-4-22    Ready   34m  v1.11.2
 ```
 
 List the pods.
@@ -182,10 +182,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system calico-node-1m5bf                          2/2   Running   0   34m
 kube-system calico-node-7jmr1                          2/2   Running   0   34m
 kube-system calico-node-bknc8                          2/2   Running   0   34m
+kube-system coredns-1187388186-wx1lg                   1/1   Running   0   34m
 kube-system kube-apiserver-4mjbk                       1/1   Running   0   34m
 kube-system kube-controller-manager-3597210155-j2jbt   1/1   Running   1   34m
 kube-system kube-controller-manager-3597210155-j7g7x   1/1   Running   0   34m
-kube-system kube-dns-1187388186-wx1lg                  3/3   Running   0   34m
 kube-system kube-proxy-14wxv                           1/1   Running   0   34m
 kube-system kube-proxy-9vxh2                           1/1   Running   0   34m
 kube-system kube-proxy-sbbsh                           1/1   Running   0   34m
@@ -252,7 +252,7 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
 | host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
@@ -1,10 +1,10 @@
 # Bare-Metal
 
-In this tutorial, we'll network boot and provision a Kubernetes v1.10.5 cluster on bare-metal with Container Linux.
+In this tutorial, we'll network boot and provision a Kubernetes v1.11.2 cluster on bare-metal with Container Linux.
 
 First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
 
-Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -12,7 +12,7 @@ Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service
 * PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment
 * Matchbox v0.6+ deployment with API enabled
 * Matchbox credentials `client.crt`, `client.key`, `ca.crt`
-* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally
+* Terraform v0.11.x, [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox), and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
 
 ## Machines
 
@@ -121,6 +121,14 @@ tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
 sudo mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox /usr/local/bin/
 ```
 
+Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
+
+```sh
+wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
+tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
+sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
+```
+
 Add the plugin to your `~/.terraformrc`.
 
 ```
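# (file body elided by this hunk) A sketch, assuming the binary paths
# from the install steps above:
providers {
  ct       = "/usr/local/bin/terraform-provider-ct"
  matchbox = "/usr/local/bin/terraform-provider-matchbox"
}
```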
@@ -174,7 +182,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
 
 ```tf
 module "bare-metal-mercury" {
-  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     local = "local.default"
@@ -283,9 +291,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
 To watch the install to disk (until machines reboot from disk), SSH to port 2222.
 
 ```
-# before v1.10.5
+# before v1.11.2
 $ ssh debug@node1.example.com
-# after v1.10.5
+# after v1.11.2
 $ ssh -p 2222 core@node1.example.com
 ```
 
@@ -310,9 +318,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
 $ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
 $ kubectl get nodes
 NAME                STATUS  AGE  VERSION
-node1.example.com   Ready   11m  v1.10.5
-node2.example.com   Ready   11m  v1.10.5
-node3.example.com   Ready   11m  v1.10.5
+node1.example.com   Ready   11m  v1.11.2
+node2.example.com   Ready   11m  v1.11.2
+node3.example.com   Ready   11m  v1.11.2
 ```
 
 List the pods.
@@ -323,10 +331,10 @@ NAMESPACE NAME READY STATUS RES
 kube-system calico-node-6qp7f                          2/2   Running   1   11m
 kube-system calico-node-gnjrm                          2/2   Running   0   11m
 kube-system calico-node-llbgt                          2/2   Running   0   11m
+kube-system coredns-1187388186-mx9rt                   1/1   Running   0   11m
 kube-system kube-apiserver-7336w                       1/1   Running   0   11m
 kube-system kube-controller-manager-3271970485-b9chx   1/1   Running   0   11m
 kube-system kube-controller-manager-3271970485-v30js   1/1   Running   1   11m
-kube-system kube-dns-1187388186-mx9rt                  3/3   Running   0   11m
 kube-system kube-proxy-50sd4                           1/1   Running   0   11m
 kube-system kube-proxy-bczhp                           1/1   Running   0   11m
 kube-system kube-proxy-mp2fw                           1/1   Running   0   11m
@@ -373,9 +381,10 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
 | install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
 | networking | Choice of networking provider | "calico" | "calico" or "flannel" |
 | network_mtu | CNI interface MTU (calico-only) | 1480 | - |
+| clc_snippets | Map from machine names to lists of Container Linux Config snippets | {} | [example](/advanced/customization/#usage) |
 | network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | first-found | can-reach=10.0.0.1 |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 | kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
@@ -1,10 +1,10 @@
 # Digital Ocean
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on DigitalOcean with Container Linux.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on DigitalOcean with Container Linux.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
 
-Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -90,7 +90,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
 
 ```tf
 module "digital-ocean-nemo" {
-  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     digitalocean = "digitalocean.default"
@@ -164,19 +164,19 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
 $ kubectl get nodes
 NAME             STATUS  AGE  VERSION
-10.132.110.130   Ready   10m  v1.10.5
-10.132.115.81    Ready   10m  v1.10.5
-10.132.124.107   Ready   10m  v1.10.5
+10.132.110.130   Ready   10m  v1.11.2
+10.132.115.81    Ready   10m  v1.11.2
+10.132.124.107   Ready   10m  v1.11.2
 ```
 
 List the pods.
 
 ```
 NAMESPACE   NAME                                       READY  STATUS   RESTARTS  AGE
+kube-system coredns-1187388186-ld1j7                   1/1    Running  0         11m
 kube-system kube-apiserver-n10qr                       1/1    Running  0         11m
 kube-system kube-controller-manager-3271970485-37gtw   1/1    Running  1         11m
 kube-system kube-controller-manager-3271970485-p52t5   1/1    Running  0         11m
-kube-system kube-dns-1187388186-ld1j7                  3/3    Running  0         11m
 kube-system kube-flannel-1cq1v                         2/2    Running  0         11m
 kube-system kube-flannel-hq9t0                         2/2    Running  1         11m
 kube-system kube-flannel-v0g9w                         2/2    Running  0         11m
@@ -258,7 +258,7 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
 | worker_clc_snippets | Worker Container Linux Config snippets | [] | |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
@@ -1,10 +1,10 @@
 # Google Cloud
 
-In this tutorial, we'll create a Kubernetes v1.10.5 cluster on Google Compute Engine with Container Linux.
+In this tutorial, we'll create a Kubernetes v1.11.2 cluster on Google Compute Engine with Container Linux.
 
 We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
 
-Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
+Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
 
 ## Requirements
 
@@ -97,7 +97,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
 
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     google = "google.default"
@@ -172,9 +172,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME                                        STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   Ready   6m   v1.10.5
-yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.10.5
-yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.10.5
+yavin-controller-0.c.example-com.internal   Ready   6m   v1.11.2
+yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.11.2
+yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.11.2
 ```
 
 List the pods.
@@ -185,10 +185,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system calico-node-1cs8z                          2/2   Running   0   6m
 kube-system calico-node-d1l5b                          2/2   Running   0   6m
 kube-system calico-node-sp9ps                          2/2   Running   0   6m
+kube-system coredns-1187388186-zj5dl                   1/1   Running   0   6m
 kube-system kube-apiserver-zppls                       1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-gh9kt   1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-h90v8   1/1   Running   1   6m
-kube-system kube-dns-1187388186-zj5dl                  3/3   Running   0   6m
 kube-system kube-proxy-117v6                           1/1   Running   0   6m
 kube-system kube-proxy-9886n                           1/1   Running   0   6m
 kube-system kube-proxy-njn47                           1/1   Running   0   6m
@@ -254,7 +254,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
 | networking | Choice of networking provider | "calico" | "calico" or "flannel" |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
-| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
+| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" |
 
 Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@@ -29,7 +29,7 @@ Typhoon provides a Terraform Module for each supported operating system and plat
 | Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | alpha |
 | Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
 | Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
-| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | beta |
+| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | stable |
 | Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | alpha |
 
 The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
@@ -45,7 +45,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
 
 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
 
   providers = {
     google = "google.default"
@@ -86,9 +86,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME                                        STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   Ready   6m   v1.10.5
-yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.10.5
-yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.10.5
+yavin-controller-0.c.example-com.internal   Ready   6m   v1.11.2
+yavin-worker-jrbf.c.example-com.internal    Ready   5m   v1.11.2
+yavin-worker-mzdm.c.example-com.internal    Ready   5m   v1.11.2
 ```
 
 List the pods.
@@ -99,10 +99,10 @@ NAMESPACE NAME READY STATUS RESTART
 kube-system calico-node-1cs8z                          2/2   Running   0   6m
 kube-system calico-node-d1l5b                          2/2   Running   0   6m
 kube-system calico-node-sp9ps                          2/2   Running   0   6m
+kube-system coredns-1187388186-zj5dl                   1/1   Running   0   6m
 kube-system kube-apiserver-zppls                       1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-gh9kt   1/1   Running   0   6m
 kube-system kube-controller-manager-3271970485-h90v8   1/1   Running   1   6m
-kube-system kube-dns-1187388186-zj5dl                  3/3   Running   0   6m
 kube-system kube-proxy-117v6                           1/1   Running   0   6m
 kube-system kube-proxy-9886n                           1/1   Running   0   6m
 kube-system kube-proxy-njn47                           1/1   Running   0   6m
@@ -85,7 +85,7 @@ Restart `dnsmasq`.
 sudo /etc/init.d/dnsmasq restart
 ```
 
-Configure queries for `*.svc.cluster.local` to be forwarded to a Kubernetes `kube-dns` service IP to allow hosts to resolve cluster-local Kubernetes names.
+Configure queries for `*.svc.cluster.local` to be forwarded to the Kubernetes `coredns` service IP to allow hosts to resolve cluster-local Kubernetes names.
 
 ```
 configure
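# (remaining commands elided by this hunk) A hedged sketch, assuming
# coredns holds the 10th IP of the default service_cidr (10.3.0.10):
set service dns forwarding options server=/cluster.local/10.3.0.10
commit-confirm
```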
@ -108,7 +108,7 @@ commit-confirm
|
||||
|
||||
### Port Forwarding
|
||||
|
||||
Expose the [Ingress Controller](/addons/ingress.md#bare-metal) by adding `port-forward` rules that DNAT a port on the router's WAN interface to an internal IP and port. By convention, a public Ingress controller is assigned a fixed service IP like kube-dns (e.g. 10.3.0.12).
|
||||
Expose the [Ingress Controller](/addons/ingress.md#bare-metal) by adding `port-forward` rules that DNAT a port on the router's WAN interface to an internal IP and port. By convention, a public Ingress controller is assigned a fixed service IP (e.g. 10.3.0.12).
|
||||
|
||||
```
|
||||
configure
|
||||
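# (remaining commands elided by this hunk) A hedged sketch of EdgeOS
# port-forward rules toward the fixed ingress ClusterIP; rule numbers
# and ports are illustrative:
set port-forward rule 1 original-port 80
set port-forward rule 1 forward-to address 10.3.0.12
set port-forward rule 1 forward-to port 80
set port-forward rule 1 protocol tcp
commit-confirm
```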
|
@ -18,7 +18,7 @@ module "google-cloud-yavin" {
|
||||
}
|
||||
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"
|
||||
...
|
||||
}
|
||||
```
|
||||
@ -110,9 +110,9 @@ Apply complete! Resources: 0 added, 0 changed, 55 destroyed.
|
||||
|
||||
#### In-place Edits
|
||||
|
||||
Typhoon uses a self-hosted Kubernetes control plane which allows certain manifest upgrades to be performed in-place. Components like `apiserver`, `controller-manager`, `scheduler`, `flannel`/`calico`, `kube-dns`, and `kube-proxy` are run on Kubernetes itself and can be edited via `kubectl`. If you're interested, see the bootkube [upgrade docs](https://github.com/kubernetes-incubator/bootkube/blob/master/Documentation/upgrading.md).
|
||||
Typhoon uses a self-hosted Kubernetes control plane which allows certain manifest upgrades to be performed in-place. Components like `apiserver`, `controller-manager`, `scheduler`, `flannel`/`calico`, `coredns`, and `kube-proxy` are run on Kubernetes itself and can be edited via `kubectl`. If you're interested, see the bootkube [upgrade docs](https://github.com/kubernetes-incubator/bootkube/blob/master/Documentation/upgrading.md).
|
||||
|
||||
In certain scenarios, in-place edits can be useful for quickly rolling out security patches (e.g. bumping `kube-dns`) or prioritizing speed over the safety of a proper cluster re-provision and transition.
|
||||
In certain scenarios, in-place edits can be useful for quickly rolling out security patches (e.g. bumping `coredns`) or prioritizing speed over the safety of a proper cluster re-provision and transition.
|
||||
|
||||
!!! note
|
||||
Rarely, we may test certain security in-place edits and mention them as an option in release notes.
|
||||
@ -126,113 +126,3 @@ Typhoon supports multi-controller clusters, so it is possible to upgrade a clust
|
||||
|
||||
!!! warning
|
||||
Typhoon does not support or document node replacement as an upgrade strategy. It limits Typhoon's ability to make infrastructure and architectural changes between tagged releases.

## Terraform v0.11.x

Terraform v0.10.x to v0.11.x introduced breaking changes in the provider and module inheritance relationship that you MUST be aware of when upgrading to the v0.11.x `terraform` binary. Terraform now allows multiple named (i.e. aliased) copies of a provider to exist (e.g. `aws.default`, `aws.somename`). Terraform now also requires providers be explicitly passed to modules in order to satisfy module version constraints (which Typhoon modules define). Full details can be found in [typhoon#77](https://github.com/poseidon/typhoon/issues/77) and [hashicorp#16824](https://github.com/hashicorp/terraform/issues/16824).

In particular, after upgrading to the v0.11.x `terraform` binary, you'll notice:

* `terraform plan` does not succeed and prompts for variables when it didn't before
* `terraform plan` does not succeed and mentions "provider configuration block is required for all operations"
* `terraform apply` fails when you comment or remove a module usage in order to delete a cluster

### New users

New users can start with Terraform v0.11.x and follow the Typhoon docs without issue.

### Existing users

Users who used modules to create clusters with Terraform v0.10.x and still manage those clusters via Terraform must explicitly add each provider used in `provider.tf`:

```
provider "local" {
  version = "~> 1.0"
  alias = "default"
}

provider "null" {
  version = "~> 1.0"
  alias = "default"
}

provider "template" {
  version = "~> 1.0"
  alias = "default"
}

provider "tls" {
  version = "~> 1.0"
  alias = "default"
}
```

Modify the `google`, `aws`, or `digitalocean` provider section to specify an explicit `alias` name.

```
provider "digitalocean" {
  version = "0.1.2"
  token = "${chomp(file("~/.config/digital-ocean/token"))}"
  alias = "default"
}
```

!!! note
    In these examples, we've chosen to name each provider "default", though the point of the Terraform changes is that other alias names work equally well.

Edit each instance (i.e. usage) of a module and explicitly pass the providers.

```
module "aws-cluster" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes"

  providers = {
    aws = "aws.default"
    local = "local.default"
    null = "null.default"
    template = "template.default"
    tls = "tls.default"
  }

  cluster_name = "somename"
  ...
}
```

Re-run `terraform plan`. Plan will claim there are no changes to apply. Run `terraform apply` anyway, as this updates Terraform state to record the explicit provider versions.
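
A minimal sketch of the expected flow (the plan message is paraphrased, not captured output):

```sh
terraform plan    # expect "No changes. Infrastructure is up-to-date."
terraform apply   # a no-op apply that records provider versions in state
```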

### Verify

You should now be able to run `terraform plan` without errors. When you choose, you may comment out or delete a module from your Terraform configs and `terraform apply` should destroy the cluster correctly.

## terraform-provider-ct v0.2.1

Typhoon requires updating the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin installed on your system from v0.2.0 to [v0.2.1](https://github.com/coreos/terraform-provider-ct/releases/tag/v0.2.1).

Check your `~/.terraformrc` to find your current `terraform-provider-ct` plugin.

```
providers {
  ct = "/usr/local/bin/terraform-provider-ct"
}
```

Make a backup copy. Install `terraform-provider-ct` v0.2.1.
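
For example, the backup step might look like this (the destination path is an illustrative assumption; the source path comes from the `~/.terraformrc` shown above):

```sh
# Keep the v0.2.0 binary around in case a rollback is needed.
cp /usr/local/bin/terraform-provider-ct ~/terraform-provider-ct-v0.2.0.backup
```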

```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```

Re-initialize Terraform configs which have Typhoon cluster resources.

```
cd clusters
terraform init
```

Verify Terraform does not produce a diff related to Container Linux provisioning.

```
terraform plan
```

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

  cluster_name = "${var.cluster_name}"
  api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"

@ -75,7 +75,6 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \

@ -124,7 +123,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.5
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:

@ -145,7 +144,7 @@ storage:
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

@ -1,8 +1,3 @@
# Deprecated
output "controllers_ipv4_public" {
  value = ["${google_compute_instance.controllers.*.network_interface.0.access_config.0.assigned_nat_ip}"]
}

# Outputs for Kubernetes Ingress

output "ingress_static_ipv4" {

@ -10,12 +5,6 @@ output "ingress_static_ipv4" {
  value = "${google_compute_global_address.ingress-ipv4.address}"
}

# Deprecated, use ingress_static_ipv4
output "ingress_static_ip" {
  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
  value = "${google_compute_global_address.ingress-ipv4.address}"
}

# Outputs for worker pools

output "network_name" {

@ -103,7 +103,7 @@ variable "pod_cidr" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"

@ -111,7 +111,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type = "string"
  default = "cluster.local"
}

@ -48,7 +48,6 @@ systemd:
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \

@ -94,7 +93,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.5
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:

@ -112,7 +111,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.10.5 \
docker://k8s.gcr.io/hyperkube:v1.11.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@ -66,7 +66,7 @@ variable "ssh_authorized_key" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"

@ -75,7 +75,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type = "string"
  default = "cluster.local"
}

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"

  cluster_name = "${var.cluster_name}"
  api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

@ -52,8 +52,7 @@ write_files:
RestartSec=10
- path: /etc/kubernetes/kubelet.conf
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
ARGS="--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \

@ -94,9 +93,9 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]
- [systemctl, start, --no-block, kubelet.service]

@ -5,12 +5,6 @@ output "ingress_static_ipv4" {
  value = "${google_compute_global_address.ingress-ipv4.address}"
}

# Deprecated, use ingress_static_ipv4
output "ingress_static_ip" {
  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
  value = "${google_compute_global_address.ingress-ipv4.address}"
}

# Outputs for worker pools

output "network_name" {

@ -90,7 +90,7 @@ variable "pod_cidr" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"

@ -98,7 +98,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type = "string"
  default = "cluster.local"
}

@ -31,8 +31,7 @@ write_files:
RestartSec=10
- path: /etc/kubernetes/kubelet.conf
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
ARGS="--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \

@ -71,7 +70,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- [systemctl, start, --no-block, kubelet.service]
users:
- default

@ -66,7 +66,7 @@ variable "ssh_authorized_key" {
variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type = "string"

@ -74,7 +74,7 @@ EOD
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type = "string"
  default = "cluster.local"
}