Compare commits

..

16 Commits

SHA1 Message Date
dbdc3fc850 Add nginx-ingress addon manifests for bare-metal 2018-08-11 12:14:23 -07:00
e00f97c578 Update nginx-ingress from 0.16.2 to 0.17.1
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.17.1
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.17.0
2018-08-08 00:45:20 -07:00
f7ebdf475d Update Kubernetes from v1.11.1 to v1.11.2
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112
2018-08-07 21:57:25 -07:00
716dfe4d17 Fix cluster links in customization docs 2018-07-29 12:46:59 -07:00
edc250d62a Fix Kubelet version for Fedora Atomic modules
* Release v1.11.1 erroneously left Fedora Atomic clusters using
the v1.11.0 Kubelet. The rest of the control plane ran v1.11.1
as expected
* Update Kubelet from v1.11.0 to v1.11.1 so Fedora Atomic matches
Container Linux
* Container Linux modules were not affected
2018-07-29 12:13:29 -07:00
db64ce3312 Update etcd from v3.3.8 to v3.3.9
* https://github.com/coreos/etcd/blob/master/CHANGELOG-3.3.md#v339-2018-07-24
2018-07-29 11:27:37 -07:00
7c327b8bf4 Update bootkube from v0.12.0 to v0.13.0 2018-07-29 11:20:17 -07:00
e6720cf738 Update heapster from v1.5.3 to v1.5.4
* https://github.com/kubernetes/heapster/releases/tag/v1.5.4
2018-07-29 11:19:57 -07:00
844f380b4e Update Grafana from v5.2.1 to v5.2.2
* https://github.com/grafana/grafana/releases/tag/v5.2.2
2018-07-29 11:12:56 -07:00
13beb13aab Add descriptions to bare-metal fedora-atomic variables 2018-07-29 11:07:48 -07:00
90c4a7483d Combine bare-metal CLC snippets maps into one map 2018-07-26 23:31:08 -07:00
4e7dfc115d Support Container Linux Config snippets on bare-metal 2018-07-25 23:14:54 -07:00
ec5ea51141 Remove old migration docs and fix link 2018-07-22 12:23:37 -07:00
d8d524d10b Update Kubernetes from v1.11.0 to v1.11.1
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1111
2018-07-20 00:41:27 -07:00
02cd8eb8d3 Update Prometheus from v2.3.1 to v2.3.2
* https://github.com/prometheus/prometheus/releases/tag/v2.3.2
2018-07-14 14:25:49 -07:00
84d6cfe7b3 Add Prometheus alert rule for inactive md devices
* node-exporter exposes metrics to Prometheus about total and
active md devices (e.g. disks in mdadm RAID arrays)
* Add alert that fires when a RAID disk fails or becomes inactive
for another reason
2018-07-10 00:20:30 -07:00
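The expression behind the new alert can be sanity-checked against a running Prometheus via its HTTP API (a minimal sketch; `prometheus.example.com:9090` is a placeholder address):

```sh
# Returns series where a node reports more total md disks than active ones
# (an empty result means all mdadm RAID arrays are healthy)
curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=node_md_disks - node_md_disks_active > 0'
```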
65 changed files with 527 additions and 236 deletions

View File

@ -2,7 +2,38 @@
Notable changes between versions.
## Latest
## v1.11.2
* Kubernetes [v1.11.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112)
* Update etcd from v3.3.8 to [v3.3.9](https://github.com/coreos/etcd/blob/master/CHANGELOG-3.3.md#v339-2018-07-24)
* Use kubernetes-incubator/bootkube v0.13.0
* Fix Fedora Atomic modules' Kubelet version ([#270](https://github.com/poseidon/typhoon/issues/270))
#### Bare-Metal
* Introduce [Container Linux Config snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) on bare-metal
* Validate and additively merge custom Container Linux Configs during terraform plan
* Define files, systemd units, dropins, networkd configs, mounts, users, and more
* [Require](https://typhoon.psdn.io/cl/bare-metal/#terraform-setup) `terraform-provider-ct` plugin v0.2.1 (action required!)
#### Addons
* Update nginx-ingress from 0.16.2 to 0.17.1
* Add nginx-ingress manifests for bare-metal
* Update Grafana from 5.2.1 to 5.2.2
* Update heapster from v1.5.3 to v1.5.4
## v1.11.1
* Kubernetes [v1.11.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1111)
#### Addons
* Update Prometheus from v2.3.1 to v2.3.2
#### Errata
* Fedora Atomic modules shipped with Kubelet v1.11.0, instead of v1.11.1. Fixed in [#270](https://github.com/poseidon/typhoon/issues/270).
## v1.11.0

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@ -46,7 +46,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
providers = {
google = "google.default"
@ -88,9 +88,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
yavin-controller-0.c.example-com.internal Ready 6m v1.11.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.2
```
List the pods.

View File

@ -21,7 +21,7 @@ spec:
spec:
containers:
- name: grafana
image: grafana/grafana:5.2.1
image: grafana/grafana:5.2.2
env:
- name: GF_SERVER_HTTP_PORT
value: "8080"

View File

@ -20,7 +20,7 @@ spec:
serviceAccountName: heapster
containers:
- name: heapster
image: k8s.gcr.io/heapster-amd64:v1.5.3
image: k8s.gcr.io/heapster-amd64:v1.5.4
command:
- /heapster
- --source=kubernetes.summary_api:''

View File

@ -22,7 +22,7 @@ spec:
node-role.kubernetes.io/node: ""
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.2
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend

View File

@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: ingress
labels:
name: ingress

View File

@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-backend
namespace: ingress
spec:
replicas: 1
selector:
matchLabels:
name: default-backend
phase: prod
template:
metadata:
labels:
name: default-backend
phase: prod
spec:
containers:
- name: default-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: k8s.gcr.io/defaultbackend:1.4
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
terminationGracePeriodSeconds: 60

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: default-backend
namespace: ingress
spec:
type: ClusterIP
selector:
name: default-backend
phase: prod
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080

View File

@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-controller-public
namespace: ingress
spec:
replicas: 2
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
name: ingress-controller-public
phase: prod
template:
metadata:
labels:
name: ingress-controller-public
phase: prod
spec:
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend
- --ingress-class=public
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: health
containerPort: 10254
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
timeoutSeconds: 1
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 33 # www-data
restartPolicy: Always
terminationGracePeriodSeconds: 60

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress
subjects:
- kind: ServiceAccount
namespace: ingress
name: default

View File

@ -0,0 +1,51 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update

View File

@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress
namespace: ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress
subjects:
- kind: ServiceAccount
namespace: ingress
name: default

View File

@ -0,0 +1,41 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress
namespace: ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<public>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-public"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update

View File

@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: ingress-controller-public
namespace: ingress
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10254'
spec:
type: ClusterIP
clusterIP: 10.3.0.12
selector:
name: ingress-controller-public
phase: prod
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443

View File

@ -22,7 +22,7 @@ spec:
node-role.kubernetes.io/node: ""
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.2
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend

View File

@ -22,7 +22,7 @@ spec:
node-role.kubernetes.io/node: ""
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.2
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend

View File

@ -18,7 +18,7 @@ spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: quay.io/prometheus/prometheus:v2.3.1
image: quay.io/prometheus/prometheus:v2.3.2
args:
- --web.listen-address=0.0.0.0:9090
- --config.file=/etc/prometheus/prometheus.yaml

View File

@ -496,6 +496,13 @@ data:
annotations:
description: device {{$labels.device}} on node {{$labels.instance}} is running
full within the next 2 hours (mounted at {{$labels.mountpoint}})
- alert: InactiveRAIDDisk
expr: node_md_disks - node_md_disks_active > 0
for: 10m
labels:
severity: warning
annotations:
description: '{{$value}} RAID disk(s) on node {{$labels.instance}} are inactive'
prometheus.rules.yaml: |
groups:
- name: prometheus.rules

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -122,7 +122,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -143,7 +143,7 @@ storage:
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -92,7 +92,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -110,7 +110,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.11.0 \
docker://k8s.gcr.io/hyperkube:v1.11.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -92,9 +92,9 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]
- [systemctl, start, --no-block, kubelet.service]

View File

@ -69,7 +69,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- [systemctl, start, --no-block, kubelet.service]
users:
- default

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@ -123,7 +123,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/hostname
filesystem: root
mode: 0644
@ -150,7 +150,7 @@ storage:
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -84,7 +84,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -118,9 +118,18 @@ resource "matchbox_profile" "flatcar-install" {
resource "matchbox_profile" "controllers" {
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
container_linux_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
raw_ignition = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
}
data "ct_config" "controller-ignitions" {
count = "${length(var.controller_names)}"
content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
pretty_print = false
# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.controller_names, count.index)]}"]
}
data "template_file" "controller-configs" {
count = "${length(var.controller_names)}"
@ -143,7 +152,16 @@ data "template_file" "controller-configs" {
resource "matchbox_profile" "workers" {
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
container_linux_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
raw_ignition = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
}
data "ct_config" "worker-ignitions" {
count = "${length(var.worker_names)}"
content = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
pretty_print = false
# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.worker_names, count.index)]}"]
}
data "template_file" "worker-configs" {
@ -161,3 +179,18 @@ data "template_file" "worker-configs" {
networkd_content = "${length(var.worker_networkds) == 0 ? "" : element(concat(var.worker_networkds, list("")), count.index)}"
}
}
locals {
# Hack to work around https://github.com/hashicorp/terraform/issues/17251
# Default Container Linux Config snippets map every node name to list("\n") so
# all lookups succeed
clc_defaults = "${zipmap(concat(var.controller_names, var.worker_names), chunklist(data.template_file.clc-default-snippets.*.rendered, 1))}"
# Union of the default and user specific snippets, later overrides prior.
clc_map = "${merge(local.clc_defaults, var.clc_snippets)}"
}
// Horrible hack to generate a Terraform list of node count length
data "template_file" "clc-default-snippets" {
count = "${length(var.controller_names) + length(var.worker_names)}"
template = "\n"
}

View File

@ -25,26 +25,38 @@ variable "os_version" {
variable "controller_names" {
type = "list"
description = "Ordered list of controller names (e.g. [node1])"
}
variable "controller_macs" {
type = "list"
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}
variable "controller_domains" {
type = "list"
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}
variable "worker_names" {
type = "list"
description = "Ordered list of worker names (e.g. [node2, node3])"
}
variable "worker_macs" {
type = "list"
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}
variable "worker_domains" {
type = "list"
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
}
variable "clc_snippets" {
type = "map"
description = "Map from machine names to lists of Container Linux Config snippets"
default = {}
}
# configuration

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]

View File

@ -83,9 +83,9 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]

View File

@ -59,7 +59,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

View File

@ -26,26 +26,32 @@ EOD
variable "controller_names" {
type = "list"
description = "Ordered list of controller names (e.g. [node1])"
}
variable "controller_macs" {
type = "list"
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}
variable "controller_domains" {
type = "list"
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}
variable "worker_names" {
type = "list"
description = "Ordered list of worker names (e.g. [node2, node3])"
}
variable "worker_macs" {
type = "list"
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}
variable "worker_domains" {
type = "list"
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
}
# configuration

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -128,7 +128,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -149,7 +149,7 @@ storage:
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -98,7 +98,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -116,7 +116,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.11.0 \
docker://k8s.gcr.io/hyperkube:v1.11.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -89,9 +89,9 @@ bootcmd:
- [modprobe, ip_vs]
runcmd:
- [systemctl, daemon-reload]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]
- [systemctl, enable, kubelet.path]

View File

@ -66,7 +66,7 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

View File

@ -101,11 +101,17 @@ On bare-metal, routing traffic to Ingress controller pods can be done in a number of ways.
### Equal-Cost Multi-Path
Deploy the Nginx Ingress Controller as a deployment. Deploy the service with a fixed ClusterIP (e.g. 10.3.0.12) in the Kubernetes service IPv4 CIDR range. There is no need for a NodePort or for pods to bind host ports. Any node can proxy packets destined for the service's ClusterIP to a node which has a pod endpoint.
Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and default backend. The service should use a fixed ClusterIP (e.g. 10.3.0.12) in the Kubernetes service IPv4 CIDR range.
Configure the network router or load balancer with a static route for the Kubernetes service range and set the next hop to a node. Repeat for each node and set the metric (i.e. cost) of each. Finally, DNAT traffic destined for the WAN on ports 80 or 443 to the service's fixed ClusterIP.
```
kubectl apply -R -f addons/nginx-ingress/bare-metal
```
Add a DNS record resolving to the WAN for each application.
There is no need for pods to use host networking or for the ingress service to use NodePort or LoadBalancer. Nodes already proxy packets destined for the service's ClusterIP to node(s) with a pod endpoint.
Configure the network router or load balancer with a static route for the Kubernetes service range and set the next hop to a node. Repeat for each node, as desired, and set the metric (i.e. cost) of each. Finally, DNAT traffic destined for the WAN on ports 80 or 443 to the service's fixed ClusterIP.
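On a Linux-based router, the routes and DNAT rules sketched below illustrate the idea (node IPs `10.0.0.3`/`10.0.0.4` and WAN interface `eth0` are placeholder assumptions, not part of Typhoon):

```sh
# Static routes for the Kubernetes service range via each node, with per-route cost
ip route add 10.3.0.0/16 via 10.0.0.3 metric 10
ip route add 10.3.0.0/16 via 10.0.0.4 metric 20
# DNAT WAN traffic on ports 80 and 443 to the ingress service's fixed ClusterIP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.3.0.12
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.3.0.12
```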
For each application, add a DNS record resolving to the WAN(s).
```tf
resource "google_dns_record_set" "some-application" {

View File

@ -69,7 +69,7 @@ View the Container Linux Config [format](https://coreos.com/os/docs/1576.4.0/con
Write Container Linux Config *snippets* as files in the repository where you keep Terraform configs for clusters (perhaps in a `clc` or `snippets` subdirectory). You may organize snippets in multiple files as desired, provided each is valid.
Define an [AWS](https://typhoon.psdn.io/aws/#cluster), [Google Cloud](https://typhoon.psdn.io/google-cloud/#cluster), or [Digital Ocean](https://typhoon.psdn.io/digital-ocean/#cluster) cluster and fill in the optional `controller_clc_snippets` or `worker_clc_snippets` fields.
[AWS](/cl/aws/#cluster), [Google Cloud](/cl/google-cloud/#cluster), and [Digital Ocean](/cl/digital-ocean/#cluster) clusters allow populating a list of `controller_clc_snippets` or `worker_clc_snippets`.
```
module "digital-ocean-nemo" {
@ -89,6 +89,29 @@ module "digital-ocean-nemo" {
}
```
[Bare-Metal](/cl/bare-metal/#cluster) clusters allow different Container Linux snippets to be used for each node (since hardware may be heterogeneous). Populate the optional `clc_snippets` map variable with any controller or worker name keys and lists of snippets.
```
module "bare-metal-mercury" {
...
controller_names = ["node1"]
worker_names = [
"node2",
"node3",
]
clc_snippets = {
"node2" = [
"${file("./units/hello.yaml")}"
]
"node3" = [
"${file("./units/world.yaml")}",
"${file("./units/hello.yaml")}",
]
}
...
}
```
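For reference, a snippet like `./units/hello.yaml` above is an ordinary Container Linux Config fragment. A minimal sketch (the unit name and contents are illustrative, not part of this change):

```sh
mkdir -p units
cat > units/hello.yaml <<'EOF'
# Container Linux Config snippet defining a one-shot systemd unit
systemd:
  units:
    - name: hello.service
      enable: true
      contents: |
        [Unit]
        Description=Hello World
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo Hello World!
        [Install]
        WantedBy=multi-user.target
EOF
```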
Plan the resources to be created.
```
@ -113,7 +136,7 @@ $ terraform apply
Container Linux Configs (and the CoreOS Ignition system) create immutable infrastructure. Disk provisioning is performed only on first boot from disk. That means if you change a snippet used by an instance, Terraform will (correctly) try to destroy and recreate that instance. Be careful!
!!! danger
Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](https://typhoon.psdn.io/topics/maintenance/#upgrades) clusters.
Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](/topics/maintenance/#upgrades) clusters.
### Fedora Atomic
@ -130,5 +153,5 @@ module "digital-ocean-nemo" {
}
```
To customize lower-level Kubernetes control plane bootstrapping, see the [poseidon/bootkube-terraform](https://github.com/poseidon/bootkube-terraform) Terraform module.
To customize lower-level Kubernetes control plane bootstrapping, see the [poseidon/terraform-render-bootkube](https://github.com/poseidon/terraform-render-bootkube) Terraform module.

View File

@ -15,7 +15,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a wo
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.11.2"
providers = {
aws = "aws.default"
@ -80,7 +80,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.11.2"
providers = {
google = "google.default"
@ -114,11 +114,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.11.0
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.11.0
yavin-controller-0.c.example-com.internal Ready 6m v1.11.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.2
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.11.2
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.11.2
```
### Variables

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on AWS with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on AWS with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.
@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.11.2"
providers = {
aws = "aws.default"
@ -156,9 +156,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.11.0
ip-10-0-19-112 Ready 34m v1.11.0
ip-10-0-4-22 Ready 34m v1.11.0
ip-10-0-12-221 Ready 34m v1.11.2
ip-10-0-19-112 Ready 34m v1.11.2
ip-10-0-4-22 Ready 34m v1.11.2
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll network boot and provision a Kubernetes v1.11.0 cluster on bare-metal with Fedora Atomic.
In this tutorial, we'll network boot and provision a Kubernetes v1.11.2 cluster on bare-metal with Fedora Atomic.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
@ -235,7 +235,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.11.2"
providers = {
local = "local.default"
@ -361,9 +361,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.11.0
node2.example.com Ready 11m v1.11.0
node3.example.com Ready 11m v1.11.0
node1.example.com Ready 11m v1.11.2
node2.example.com Ready 11m v1.11.2
node3.example.com Ready 11m v1.11.2
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on DigitalOcean with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on DigitalOcean with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.11.2"
providers = {
digitalocean = "digitalocean.default"
@ -152,9 +152,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.11.0
10.132.115.81 Ready 10m v1.11.0
10.132.124.107 Ready 10m v1.11.0
10.132.110.130 Ready 10m v1.11.2
10.132.115.81 Ready 10m v1.11.2
10.132.124.107 Ready 10m v1.11.2
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on Google Compute Engine with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on Google Compute Engine with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
@ -121,7 +121,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.11.2"
providers = {
google = "google.default"
@ -197,9 +197,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
yavin-controller-0.c.example-com.internal Ready 6m v1.11.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on AWS with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
@ -96,7 +96,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.11.2"
providers = {
aws = "aws.default"
@ -169,9 +169,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.11.0
ip-10-0-19-112 Ready 34m v1.11.0
ip-10-0-4-22 Ready 34m v1.11.0
ip-10-0-12-221 Ready 34m v1.11.2
ip-10-0-19-112 Ready 34m v1.11.2
ip-10-0-4-22 Ready 34m v1.11.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.11.0 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.11.2 cluster on bare-metal with Container Linux.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -12,7 +12,7 @@ Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service
* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment
* Matchbox v0.6+ deployment with API enabled
* Matchbox credentials `client.crt`, `client.key`, `ca.crt`
* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally
* Terraform v0.11.x, [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox), and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
## Machines
@ -121,6 +121,14 @@ tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
sudo mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox /usr/local/bin/
```
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```
Add the plugin to your `~/.terraformrc`.
```
@ -174,7 +182,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"
providers = {
local = "local.default"
@ -283,9 +291,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
```
# before v1.11.0
# before v1.11.2
$ ssh debug@node1.example.com
# after v1.11.0
# after v1.11.2
$ ssh -p 2222 core@node1.example.com
```
@ -310,9 +318,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.11.0
node2.example.com Ready 11m v1.11.0
node3.example.com Ready 11m v1.11.0
node1.example.com Ready 11m v1.11.2
node2.example.com Ready 11m v1.11.2
node3.example.com Ready 11m v1.11.2
```
List the pods.
@ -373,6 +381,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| clc_snippets | Map from machine names to lists of Container Linux Config snippets | {} | [example](/advanced/customization/#usage) |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | first-found | can-reach=10.0.0.1 |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |

View File

@ -1,6 +1,6 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on DigitalOcean with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -90,7 +90,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.11.2"
providers = {
digitalocean = "digitalocean.default"
@ -164,9 +164,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.11.0
10.132.115.81 Ready 10m v1.11.0
10.132.124.107 Ready 10m v1.11.0
10.132.110.130 Ready 10m v1.11.2
10.132.115.81 Ready 10m v1.11.2
10.132.124.107 Ready 10m v1.11.2
```
List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.11.0 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.11.2 cluster on Google Compute Engine with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -97,7 +97,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
providers = {
google = "google.default"
@ -172,9 +172,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
yavin-controller-0.c.example-com.internal Ready 6m v1.11.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.2
```
List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@ -45,7 +45,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
providers = {
google = "google.default"
@ -86,9 +86,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.11.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.0
yavin-controller-0.c.example-com.internal Ready 6m v1.11.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.11.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.11.2
```
List the pods.

View File

@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.11.2"
...
}
```
@ -126,113 +126,3 @@ Typhoon supports multi-controller clusters, so it is possible to upgrade a clust
!!! warning
Typhoon does not support or document node replacement as an upgrade strategy. Node replacement limits Typhoon's ability to make infrastructure and architectural changes between tagged releases.
## Terraform v0.11.x
Terraform v0.10.x to v0.11.x introduced breaking changes in the provider and module inheritance relationship that you MUST be aware of when upgrading to the v0.11.x `terraform` binary. Terraform now allows multiple named (i.e. aliased) copies of a provider to exist (e.g. `aws.default`, `aws.somename`). Terraform now also requires providers be explicitly passed to modules in order to satisfy module version constraints (which Typhoon modules define). Full details can be found in [typhoon#77](https://github.com/poseidon/typhoon/issues/77) and [hashicorp#16824](https://github.com/hashicorp/terraform/issues/16824).
In particular, after upgrading to the v0.11.x `terraform` binary, you'll notice:
* `terraform plan` does not succeed and prompts for variables when it didn't before
* `terraform plan` does not succeed and mentions "provider configuration block is required for all operations"
* `terraform apply` fails when you comment or remove a module usage in order to delete a cluster
### New users
New users can start with Terraform v0.11.x and follow the Typhoon docs without issue.
### Existing
Users who used modules to create clusters with Terraform v0.10.x and still manage those clusters via Terraform must explicitly add each provider used in `provider.tf`:
```
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
Modify the `google`, `aws`, or `digitalocean` provider section to specify an explicit `alias` name.
```
provider "digitalocean" {
version = "0.1.2"
token = "${chomp(file("~/.config/digital-ocean/token"))}"
alias = "default"
}
```
!!! note
In these examples, we've chosen to name each provider "default", though the point of the Terraform changes is that other names are possible.
Edit each instance (i.e. usage) of a module and explicitly pass the providers.
```
module "aws-cluster" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes"
providers = {
aws = "aws.default"
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
cluster_name = "somename"
```
Re-run `terraform plan`. Plan will claim there are no changes to apply. Run `terraform apply` anyway as this will update Terraform state to be aware of the explicit provider versions.
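As a sketch of that sequence (output paraphrased):

```sh
terraform plan    # should report no changes to apply
terraform apply   # updates state to record the explicit provider passing
```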
### Verify
You should now be able to run `terraform plan` without errors. When you choose, you may comment or delete a module from Terraform configs and `terraform apply` should destroy the cluster correctly.
## terraform-provider-ct v0.2.1
Typhoon requires updating the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin installed on your system from v0.2.0 to [v0.2.1](https://github.com/coreos/terraform-provider-ct/releases/tag/v0.2.1).
Check your `~/.terraformrc` to find your current `terraform-provider-ct` plugin.
```
providers {
ct = "/usr/local/bin/terraform-provider-ct"
}
```
Make a backup copy. Install `terraform-provider-ct` v0.2.1.
```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```
Re-initialize Terraform configs which have Typhoon cluster resources.
```
cd clusters
terraform init
```
Verify Terraform does not produce a diff related to Container Linux provisioning.
```
terraform plan
```

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.11.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -123,7 +123,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -144,7 +144,7 @@ storage:
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.12.0}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -93,7 +93,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.0
KUBELET_IMAGE_TAG=v1.11.2
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -111,7 +111,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.11.0 \
docker://k8s.gcr.io/hyperkube:v1.11.2 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=81ba300e712e116c9ea9470ccdce7859fecc76b6"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -93,9 +93,9 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]
- [systemctl, start, --no-block, kubelet.service]

View File

@ -70,7 +70,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.0"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- [systemctl, start, --no-block, kubelet.service]
users:
- default