Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 22:31:33 +02:00)
Compare commits (21 commits)

d5537405e1
949ce21fb2
ccd96c37da
acd539f865
244a1a601a
d02af3d40d
130daeac26
1ab06f69d7
eb08593eae
e9659a8539
6b87132aa1
f5ff003d0e
d697dd46dc
2f3097ebea
f4d3508578
67fb9602e7
c8a85fabe1
7eafa59d8f
679079b242
1d27dc6528
b74cc8afd2
CHANGES.md (33 lines changed)
@@ -4,6 +4,39 @@ Notable changes between versions.

## Latest

## v1.13.3

* Kubernetes [v1.13.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1133)
* Update etcd from v3.3.10 to [v3.3.11](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md#v3311-2019-1-11)
* Update CoreDNS from v1.3.0 to [v1.3.1](https://coredns.io/2019/01/13/coredns-1.3.1-release/)
  * Switch from the `proxy` plugin to the faster `forward` plugin for upstream resolvers
* Update Calico from v3.4.0 to [v3.5.0](https://docs.projectcalico.org/v3.5/releases/)
* Update flannel from v0.10.0 to [v0.11.0](https://github.com/coreos/flannel/releases/tag/v0.11.0)
* Reduce the pod eviction timeout for deleting pods on unready nodes to 1 minute
  * Respond more quickly to node preemption (previously 5 minutes)
* Fix automatic worker deletion on shutdown for cloud platforms
  * Lowering Kubelet privileges in [#372](https://github.com/poseidon/typhoon/pull/372) dropped a needed node deletion authorization. Scale-in due to a manual `terraform apply` (any cloud), AWS spot termination, or Azure low-priority deletion left old nodes registered, requiring manual deletion (`kubectl delete node name`)

#### AWS

* Add an `ingress_zone_id` output with the Route53 zone id of the NLB DNS name, for use in alias records ([#380](https://github.com/poseidon/typhoon/pull/380))
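The new output pairs with the existing `ingress_dns_name` output to give an alias record everything it needs. A minimal sketch, assuming an AWS cluster module named `tempest`; the hosted zone resource and the `app.example.com` record name are illustrative, not part of this release:

```tf
# Alias a custom domain to the cluster's ingress network load balancer
resource "aws_route53_record" "app" {
  # hypothetical Route53 hosted zone for example.com
  zone_id = "${aws_route53_zone.example.zone_id}"

  name = "app.example.com"
  type = "A"

  alias {
    name                   = "${module.tempest.ingress_dns_name}"
    zone_id                = "${module.tempest.ingress_zone_id}"
    evaluate_target_health = false
  }
}
```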
#### Azure

* Fix azure provider warning: `public_ip` `allocation_method` replaces `public_ip_address_allocation`
* Require `terraform-provider-azurerm` v1.21+ (action required)
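Concretely, the provider constraint and the renamed attribute look like the following sketch; the resource and variable names mirror the cluster resources changed in this release, but are illustrative here:

```tf
provider "azurerm" {
  version = "~> 1.21"
}

resource "azurerm_public_ip" "ingress-ipv4" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name     = "example-ingress-ipv4"
  location = "${var.region}"
  sku      = "Standard"

  # replaces the deprecated `public_ip_address_allocation = "static"`
  allocation_method = "Static"
}
```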
#### Addons

* Update nginx-ingress from v0.21.0 to v0.22.0
* Update Prometheus from v2.6.0 to v2.7.1
* Update kube-state-metrics from v1.4.0 to v1.5.0
  * Fix ClusterRole to collect and export PodDisruptionBudget metrics ([#383](https://github.com/poseidon/typhoon/pull/383))
* Update node-exporter from v0.15.2 to v0.17.0
* Update Grafana from v5.4.2 to v5.4.3

## v1.13.2

* Kubernetes [v1.13.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1132)
* Add ServiceAccounts for `kube-apiserver` and `kube-scheduler` ([#370](https://github.com/poseidon/typhoon/pull/370))
* Use lower-privilege TLS client certificates for Kubelets ([#372](https://github.com/poseidon/typhoon/pull/372))
README.md (10 lines changed)
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization

@@ -50,7 +50,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform

 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.3"

   providers = {
     google = "google.default"

@@ -91,9 +91,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud

 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
 NAME ROLES STATUS AGE VERSION
-yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.2
-yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.2
-yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.2
+yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.3
+yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.3
+yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.3
 ```

 List the pods.
@ -1963,7 +1963,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(node_cpu{mode=\"idle\"}[2m])) * 100",
|
||||
"expr": "sum(rate(node_cpu_seconds_total{mode=\"idle\"}[2m])) * 100",
|
||||
"hide": false,
|
||||
"intervalFactor": 10,
|
||||
"legendFormat": "",
|
||||
@ -2138,7 +2138,7 @@ data:
|
||||
"renderer": "flot",
|
||||
"seriesOverrides": [
|
||||
{
|
||||
"alias": "node_memory_SwapFree{instance=\"172.17.0.1:9100\",job=\"prometheus\"}",
|
||||
"alias": "node_memory_SwapFree_bytes{instance=\"172.17.0.1:9100\",job=\"prometheus\"}",
|
||||
"yaxis": 2
|
||||
}
|
||||
],
|
||||
@ -2148,7 +2148,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(node_memory_MemTotal) - sum(node_memory_MemFree) - sum(node_memory_Buffers) - sum(node_memory_Cached)",
|
||||
"expr": "sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes) - sum(node_memory_Buffers_bytes) - sum(node_memory_Cached_bytes)",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory usage",
|
||||
"metric": "memo",
|
||||
@ -2157,7 +2157,7 @@ data:
|
||||
"target": ""
|
||||
},
|
||||
{
|
||||
"expr": "sum(node_memory_Buffers)",
|
||||
"expr": "sum(node_memory_Buffers_bytes)",
|
||||
"interval": "",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory buffers",
|
||||
@ -2167,7 +2167,7 @@ data:
|
||||
"target": ""
|
||||
},
|
||||
{
|
||||
"expr": "sum(node_memory_Cached)",
|
||||
"expr": "sum(node_memory_Cached_bytes)",
|
||||
"interval": "",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory cached",
|
||||
@ -2177,7 +2177,7 @@ data:
|
||||
"target": ""
|
||||
},
|
||||
{
|
||||
"expr": "sum(node_memory_MemFree)",
|
||||
"expr": "sum(node_memory_MemFree_bytes)",
|
||||
"interval": "",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory free",
|
||||
@ -2268,7 +2268,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "((sum(node_memory_MemTotal) - sum(node_memory_MemFree) - sum(node_memory_Buffers) - sum(node_memory_Cached)) / sum(node_memory_MemTotal)) * 100",
|
||||
"expr": "((sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes) - sum(node_memory_Buffers_bytes) - sum(node_memory_Cached_bytes)) / sum(node_memory_MemTotal_bytes)) * 100",
|
||||
"intervalFactor": 2,
|
||||
"metric": "",
|
||||
"refId": "A",
|
||||
@ -2355,7 +2355,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(node_disk_bytes_read[5m]))",
|
||||
"expr": "max(rate(node_disk_read_bytes_total[5m]))",
|
||||
"hide": false,
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "read",
|
||||
@ -2364,14 +2364,14 @@ data:
|
||||
"target": ""
|
||||
},
|
||||
{
|
||||
"expr": "sum(rate(node_disk_bytes_written[5m]))",
|
||||
"expr": "max(rate(node_disk_written_bytes_total[5m]))",
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "written",
|
||||
"refId": "B",
|
||||
"step": 20
|
||||
},
|
||||
{
|
||||
"expr": "sum(rate(node_disk_io_time_ms[5m]))",
|
||||
"expr": "max(rate(node_disk_io_time_seconds_total[5m]))",
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "io time",
|
||||
"refId": "C",
|
||||
@ -2458,7 +2458,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "(sum(node_filesystem_size{device!=\"rootfs\"}) - sum(node_filesystem_free{device!=\"rootfs\"})) / sum(node_filesystem_size{device!=\"rootfs\"})",
|
||||
"expr": "(sum(node_filesystem_size_bytes{device!=\"rootfs\"}) - sum(node_filesystem_free_bytes{device!=\"rootfs\"})) / sum(node_filesystem_size_bytes{device!=\"rootfs\"})",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
"step": 60,
|
||||
@ -2536,7 +2536,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(node_network_receive_bytes{device!~\"lo\"}[5m]))",
|
||||
"expr": "sum(rate(node_network_receive_bytes_total{device!~\"lo\"}[5m]))",
|
||||
"hide": false,
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "",
|
||||
@ -2618,7 +2618,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(rate(node_network_transmit_bytes{device!~\"lo\"}[5m]))",
|
||||
"expr": "sum(rate(node_network_transmit_bytes_total{device!~\"lo\"}[5m]))",
|
||||
"hide": false,
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "",
|
||||
@ -4093,7 +4093,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(100 - (avg by (instance) (rate(node_cpu{job=\"node-exporter\",mode=\"idle\"}[5m])) * 100)) / count(node_cpu{job=\"node-exporter\",mode=\"idle\"})",
|
||||
"expr": "sum(100 - (avg by (instance) (rate(node_cpu_seconds_total{job=\"node-exporter\",mode=\"idle\"}[5m])) * 100)) / count(node_cpu_seconds_total{job=\"node-exporter\",mode=\"idle\"})",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
@ -4165,7 +4165,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "((sum(node_memory_MemTotal) - sum(node_memory_MemFree) - sum(node_memory_Buffers) - sum(node_memory_Cached)) / sum(node_memory_MemTotal)) * 100",
|
||||
"expr": "((sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes) - sum(node_memory_Buffers_bytes) - sum(node_memory_Cached_bytes)) / sum(node_memory_MemTotal_bytes)) * 100",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
@ -4237,7 +4237,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "(sum(node_filesystem_size{device!=\"rootfs\"}) - sum(node_filesystem_free{device!=\"rootfs\"})) / sum(node_filesystem_size{device!=\"rootfs\"})",
|
||||
"expr": "(sum(node_filesystem_size_bytes{device!=\"rootfs\"}) - sum(node_filesystem_free_bytes{device!=\"rootfs\"})) / sum(node_filesystem_size_bytes{device!=\"rootfs\"})",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
@ -5476,7 +5476,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "100 - (avg by (cpu) (irate(node_cpu{mode=\"idle\", instance=\"$server\"}[5m])) * 100)",
|
||||
"expr": "100 - (avg by (cpu) (irate(node_cpu_seconds_total{mode=\"idle\", instance=\"$server\"}[5m])) * 100)",
|
||||
"hide": false,
|
||||
"intervalFactor": 10,
|
||||
"legendFormat": "{{cpu}}",
|
||||
@ -5652,7 +5652,7 @@ data:
|
||||
"renderer": "flot",
|
||||
"seriesOverrides": [
|
||||
{
|
||||
"alias": "node_memory_SwapFree{instance=\"172.17.0.1:9100\",job=\"prometheus\"}",
|
||||
"alias": "node_memory_SwapFree_bytes{instance=\"172.17.0.1:9100\",job=\"prometheus\"}",
|
||||
"yaxis": 2
|
||||
}
|
||||
],
|
||||
@ -5662,7 +5662,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "node_memory_MemTotal{instance=\"$server\"} - node_memory_MemFree{instance=\"$server\"} - node_memory_Buffers{instance=\"$server\"} - node_memory_Cached{instance=\"$server\"}",
|
||||
"expr": "node_memory_MemTotal_bytes{instance=\"$server\"} - node_memory_MemFree_bytes{instance=\"$server\"} - node_memory_Buffers_bytes{instance=\"$server\"} - node_memory_Cached_bytes{instance=\"$server\"}",
|
||||
"hide": false,
|
||||
"interval": "",
|
||||
"intervalFactor": 2,
|
||||
@ -5672,7 +5672,7 @@ data:
|
||||
"step": 10
|
||||
},
|
||||
{
|
||||
"expr": "node_memory_Buffers{instance=\"$server\"}",
|
||||
"expr": "node_memory_Buffers_bytes{instance=\"$server\"}",
|
||||
"interval": "",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory buffers",
|
||||
@ -5689,7 +5689,7 @@ data:
|
||||
"step": 10
|
||||
},
|
||||
{
|
||||
"expr": "node_memory_MemFree{instance=\"$server\"}",
|
||||
"expr": "node_memory_MemFree_bytes{instance=\"$server\"}",
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "memory free",
|
||||
"metric": "",
|
||||
@ -5778,7 +5778,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "((node_memory_MemTotal{instance=\"$server\"} - node_memory_MemFree{instance=\"$server\"} - node_memory_Buffers{instance=\"$server\"} - node_memory_Cached{instance=\"$server\"}) / node_memory_MemTotal{instance=\"$server\"}) * 100",
|
||||
"expr": "((node_memory_MemTotal_bytes{instance=\"$server\"} - node_memory_MemFree_bytes{instance=\"$server\"} - node_memory_Buffers_bytes{instance=\"$server\"} - node_memory_Cached_bytes{instance=\"$server\"}) / node_memory_MemTotal_bytes{instance=\"$server\"}) * 100",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
"step": 60,
|
||||
@ -5864,7 +5864,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum by (instance) (rate(node_disk_bytes_read{instance=\"$server\"}[2m]))",
|
||||
"expr": "sum by (instance) (rate(node_disk_read_bytes_total{instance=\"$server\"}[2m]))",
|
||||
"hide": false,
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "read",
|
||||
@ -5873,14 +5873,14 @@ data:
|
||||
"target": ""
|
||||
},
|
||||
{
|
||||
"expr": "sum by (instance) (rate(node_disk_bytes_written{instance=\"$server\"}[2m]))",
|
||||
"expr": "sum by (instance) (rate(node_disk_written_bytes_total{instance=\"$server\"}[2m]))",
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "written",
|
||||
"refId": "B",
|
||||
"step": 20
|
||||
},
|
||||
{
|
||||
"expr": "sum by (instance) (rate(node_disk_io_time_ms{instance=\"$server\"}[2m]))",
|
||||
"expr": "sum by (instance) (rate(node_disk_io_time_seconds_total{instance=\"$server\"}[2m]))",
|
||||
"intervalFactor": 4,
|
||||
"legendFormat": "io time",
|
||||
"refId": "C",
|
||||
@ -5967,7 +5967,7 @@ data:
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"expr": "(sum(node_filesystem_size{device!=\"rootfs\",instance=\"$server\"}) - sum(node_filesystem_free{device!=\"rootfs\",instance=\"$server\"})) / sum(node_filesystem_size{device!=\"rootfs\",instance=\"$server\"})",
|
||||
"expr": "(sum(node_filesystem_size_bytes{device!=\"rootfs\",instance=\"$server\"}) - sum(node_filesystem_free_bytes{device!=\"rootfs\",instance=\"$server\"})) / sum(node_filesystem_size_bytes{device!=\"rootfs\",instance=\"$server\"})",
|
||||
"intervalFactor": 2,
|
||||
"refId": "A",
|
||||
"step": 60,
|
||||
@ -6045,7 +6045,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(node_network_receive_bytes{instance=\"$server\",device!~\"lo\"}[5m])",
|
||||
"expr": "rate(node_network_receive_bytes_total{instance=\"$server\",device!~\"lo\"}[5m])",
|
||||
"hide": false,
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "{{device}}",
|
||||
@ -6127,7 +6127,7 @@ data:
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"expr": "rate(node_network_transmit_bytes{instance=\"$server\",device!~\"lo\"}[5m])",
|
||||
"expr": "rate(node_network_transmit_bytes_total{instance=\"$server\",device!~\"lo\"}[5m])",
|
||||
"hide": false,
|
||||
"intervalFactor": 2,
|
||||
"legendFormat": "{{device}}",
|
||||
@ -6184,7 +6184,7 @@ data:
|
||||
"multi": false,
|
||||
"name": "server",
|
||||
"options": [],
|
||||
"query": "label_values(node_boot_time, instance)",
|
||||
"query": "label_values(node_boot_time_seconds, instance)",
|
||||
"refresh": 1,
|
||||
"regex": "",
|
||||
"sort": 0,
|
||||
|
@@ -23,7 +23,7 @@ spec:
 spec:
 containers:
 - name: grafana
-image: grafana/grafana:5.4.2
+image: grafana/grafana:5.4.3
 env:
 - name: GF_SERVER_HTTP_PORT
 value: "8080"
@@ -24,7 +24,7 @@ spec:
 node-role.kubernetes.io/node: ""
 containers:
 - name: nginx-ingress-controller
-image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
+image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
 args:
 - /nginx-ingress-controller
 - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -24,7 +24,7 @@ spec:
 node-role.kubernetes.io/node: ""
 containers:
 - name: nginx-ingress-controller
-image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
+image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
 args:
 - /nginx-ingress-controller
 - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -22,7 +22,7 @@ spec:
 spec:
 containers:
 - name: nginx-ingress-controller
-image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
+image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
 args:
 - /nginx-ingress-controller
 - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -24,7 +24,7 @@ spec:
 node-role.kubernetes.io/node: ""
 containers:
 - name: nginx-ingress-controller
-image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
+image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
 args:
 - /nginx-ingress-controller
 - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -24,7 +24,7 @@ spec:
 node-role.kubernetes.io/node: ""
 containers:
 - name: nginx-ingress-controller
-image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
+image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
 args:
 - /nginx-ingress-controller
 - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -20,7 +20,7 @@ spec:
 serviceAccountName: prometheus
 containers:
 - name: prometheus
-image: quay.io/prometheus/prometheus:v2.6.0
+image: quay.io/prometheus/prometheus:v2.7.1
 args:
 - --web.listen-address=0.0.0.0:9090
 - --config.file=/etc/prometheus/prometheus.yaml
@ -3,7 +3,8 @@ kind: ClusterRole
|
||||
metadata:
|
||||
name: kube-state-metrics
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- configmaps
|
||||
- secrets
|
||||
@ -17,23 +18,47 @@ rules:
|
||||
- persistentvolumes
|
||||
- namespaces
|
||||
- endpoints
|
||||
verbs: ["list", "watch"]
|
||||
- apiGroups: ["extensions"]
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- extensions
|
||||
resources:
|
||||
- daemonsets
|
||||
- deployments
|
||||
- replicasets
|
||||
verbs: ["list", "watch"]
|
||||
- apiGroups: ["apps"]
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- apps
|
||||
resources:
|
||||
- statefulsets
|
||||
verbs: ["list", "watch"]
|
||||
- apiGroups: ["batch"]
|
||||
- daemonsets
|
||||
- deployments
|
||||
- replicasets
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- batch
|
||||
resources:
|
||||
- cronjobs
|
||||
- jobs
|
||||
verbs: ["list", "watch"]
|
||||
- apiGroups: ["autoscaling"]
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- autoscaling
|
||||
resources:
|
||||
- horizontalpodautoscalers
|
||||
verbs: ["list", "watch"]
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
- apiGroups:
|
||||
- policy
|
||||
resources:
|
||||
- poddisruptionbudgets
|
||||
verbs:
|
||||
- list
|
||||
- watch
|
||||
|
@ -24,7 +24,7 @@ spec:
|
||||
serviceAccountName: kube-state-metrics
|
||||
containers:
|
||||
- name: kube-state-metrics
|
||||
image: quay.io/coreos/kube-state-metrics:v1.4.0
|
||||
image: quay.io/coreos/kube-state-metrics:v1.5.0
|
||||
ports:
|
||||
- name: metrics
|
||||
containerPort: 8080
|
||||
@ -35,7 +35,7 @@ spec:
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 5
|
||||
- name: addon-resizer
|
||||
image: k8s.gcr.io/addon-resizer:1.7
|
||||
image: k8s.gcr.io/addon-resizer:1.8.4
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
|
@ -6,7 +6,7 @@ metadata:
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: kube-state-metrics-resizer
|
||||
name: kube-state-metrics
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kube-state-metrics
|
||||
|
@ -1,15 +1,31 @@
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: kube-state-metrics-resizer
|
||||
name: kube-state-metrics
|
||||
namespace: monitoring
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
verbs: ["get"]
|
||||
- apiGroups: ["extensions"]
|
||||
verbs:
|
||||
- get
|
||||
- apiGroups:
|
||||
- extensions
|
||||
resources:
|
||||
- deployments
|
||||
resourceNames: ["kube-state-metrics"]
|
||||
verbs: ["get", "update"]
|
||||
resourceNames:
|
||||
- kube-state-metrics
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
- apiGroups:
|
||||
- apps
|
||||
resources:
|
||||
- deployments
|
||||
resourceNames:
|
||||
- kube-state-metrics
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
|
||||
|
@ -28,21 +28,24 @@ spec:
|
||||
hostPID: true
|
||||
containers:
|
||||
- name: node-exporter
|
||||
image: quay.io/prometheus/node-exporter:v0.15.2
|
||||
image: quay.io/prometheus/node-exporter:v0.17.0
|
||||
args:
|
||||
- "--path.procfs=/host/proc"
|
||||
- "--path.sysfs=/host/sys"
|
||||
- --path.procfs=/host/proc
|
||||
- --path.sysfs=/host/sys
|
||||
- --path.rootfs=/host/root
|
||||
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
|
||||
- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
|
||||
ports:
|
||||
- name: metrics
|
||||
containerPort: 9100
|
||||
hostPort: 9100
|
||||
resources:
|
||||
requests:
|
||||
memory: 30Mi
|
||||
cpu: 100m
|
||||
limits:
|
||||
memory: 50Mi
|
||||
limits:
|
||||
cpu: 200m
|
||||
memory: 100Mi
|
||||
volumeMounts:
|
||||
- name: proc
|
||||
mountPath: /host/proc
|
||||
@ -50,6 +53,9 @@ spec:
|
||||
- name: sys
|
||||
mountPath: /host/sys
|
||||
readOnly: true
|
||||
- name: root
|
||||
mountPath: /host/root
|
||||
readOnly: true
|
||||
tolerations:
|
||||
- effect: NoSchedule
|
||||
operator: Exists
|
||||
@ -60,3 +66,6 @@ spec:
|
||||
- name: sys
|
||||
hostPath:
|
||||
path: /sys
|
||||
- name: root
|
||||
hostPath:
|
||||
path: /
|
||||
|
@@ -456,22 +456,22 @@ data:
 - name: node.rules
 rules:
 - record: instance:node_cpu:rate:sum
-expr: sum(rate(node_cpu{mode!="idle",mode!="iowait",mode!~"^(?:guest.*)$"}[3m]))
+expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!~"^(?:guest.*)$"}[3m]))
 BY (instance)
 - record: instance:node_filesystem_usage:sum
-expr: sum((node_filesystem_size{mountpoint="/"} - node_filesystem_free{mountpoint="/"}))
+expr: sum((node_filesystem_size_bytes{mountpoint="/"} - node_filesystem_free_bytes{mountpoint="/"}))
 BY (instance)
 - record: instance:node_network_receive_bytes:rate:sum
-expr: sum(rate(node_network_receive_bytes[3m])) BY (instance)
+expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
 - record: instance:node_network_transmit_bytes:rate:sum
-expr: sum(rate(node_network_transmit_bytes[3m])) BY (instance)
+expr: sum(rate(node_network_transmit_bytes_total[3m])) BY (instance)
 - record: instance:node_cpu:ratio
-expr: sum(rate(node_cpu{mode!="idle"}[5m])) WITHOUT (cpu, mode) / ON(instance)
-GROUP_LEFT() count(sum(node_cpu) BY (instance, cpu)) BY (instance)
+expr: sum(rate(node_cpu_seconds_total{mode!="idle"}[5m])) WITHOUT (cpu, mode) / ON(instance)
+GROUP_LEFT() count(sum(node_cpu_seconds_total) BY (instance, cpu)) BY (instance)
 - record: cluster:node_cpu:sum_rate5m
-expr: sum(rate(node_cpu{mode!="idle"}[5m]))
+expr: sum(rate(node_cpu_seconds_total{mode!="idle"}[5m]))
 - record: cluster:node_cpu:ratio
-expr: cluster:node_cpu:rate5m / count(sum(node_cpu) BY (instance, cpu))
+expr: cluster:node_cpu:sum_rate5m / count(sum(node_cpu_seconds_total) BY (instance, cpu))
 - alert: NodeExporterDown
 expr: absent(up{job="node-exporter"} == 1)
 for: 10m

@@ -481,7 +481,7 @@ data:
 description: Prometheus could not scrape a node-exporter for more than 10m,
 or node-exporters have disappeared from discovery
 - alert: NodeDiskRunningFull
-expr: predict_linear(node_filesystem_free[6h], 3600 * 24) < 0
+expr: predict_linear(node_filesystem_free_bytes[6h], 3600 * 24) < 0
 for: 30m
 labels:
 severity: warning

@@ -489,7 +489,7 @@ data:
 description: device {{$labels.device}} on node {{$labels.instance}} is running
 full within the next 24 hours (mounted at {{$labels.mountpoint}})
 - alert: NodeDiskRunningFull
-expr: predict_linear(node_filesystem_free[30m], 3600 * 2) < 0
+expr: predict_linear(node_filesystem_free_bytes[30m], 3600 * 2) < 0
 for: 10m
 labels:
 severity: critical
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.10"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.11"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
||||
@ -123,7 +123,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
|
@@ -9,6 +9,11 @@ output "ingress_dns_name" {
   description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
 }

+output "ingress_zone_id" {
+  value       = "${aws_lb.nlb.zone_id}"
+  description = "Route53 zone id of the network load balancer DNS name that can be used in Route53 alias records"
+}
+
 # Outputs for worker pools

 output "vpc_id" {
@ -93,7 +93,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
@ -111,7 +111,7 @@ storage:
|
||||
--volume config,kind=host,source=/etc/kubernetes \
|
||||
--mount volume=config,target=/etc/kubernetes \
|
||||
--insecure-options=image \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.2 \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.3 \
|
||||
--net=host \
|
||||
--dns=host \
|
||||
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [spot](https://typhoon.psdn.io/cl/aws/#spot) workers
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -78,8 +78,8 @@ bootcmd:
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.11"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, start, --no-block, kubelet.service]
|
||||
|
@ -9,6 +9,11 @@ output "ingress_dns_name" {
|
||||
description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
|
||||
}
|
||||
|
||||
output "ingress_zone_id" {
|
||||
value = "${aws_lb.nlb.zone_id}"
|
||||
description = "Route53 zone id of the network load balancer DNS name that can be used in Route53 alias records"
|
||||
}
|
||||
|
||||
# Outputs for worker pools
|
||||
|
||||
output "vpc_id" {
|
||||
|
@ -54,7 +54,7 @@ bootcmd:
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- [systemctl, start, --no-block, kubelet.service]
|
||||
users:
|
||||
- default
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.10"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.11"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
||||
@ -123,7 +123,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
|
@ -121,10 +121,10 @@ resource "azurerm_public_ip" "controllers" {
|
||||
count = "${var.controller_count}"
|
||||
resource_group_name = "${azurerm_resource_group.cluster.name}"
|
||||
|
||||
name = "${var.cluster_name}-controller-${count.index}"
|
||||
location = "${azurerm_resource_group.cluster.location}"
|
||||
sku = "Standard"
|
||||
public_ip_address_allocation = "static"
|
||||
name = "${var.cluster_name}-controller-${count.index}"
|
||||
location = "${azurerm_resource_group.cluster.location}"
|
||||
sku = "Standard"
|
||||
allocation_method = "Static"
|
||||
}
|
||||
|
||||
# Controller Ignition configs
|
||||
|
@ -17,20 +17,20 @@ resource "azurerm_dns_a_record" "apiserver" {
|
||||
resource "azurerm_public_ip" "apiserver-ipv4" {
|
||||
resource_group_name = "${azurerm_resource_group.cluster.name}"
|
||||
|
||||
name = "${var.cluster_name}-apiserver-ipv4"
|
||||
location = "${var.region}"
|
||||
sku = "Standard"
|
||||
public_ip_address_allocation = "static"
|
||||
name = "${var.cluster_name}-apiserver-ipv4"
|
||||
location = "${var.region}"
|
||||
sku = "Standard"
|
||||
allocation_method = "Static"
|
||||
}
|
||||
|
||||
# Static IPv4 address for the ingress frontend
|
||||
resource "azurerm_public_ip" "ingress-ipv4" {
|
||||
resource_group_name = "${azurerm_resource_group.cluster.name}"
|
||||
|
||||
name = "${var.cluster_name}-ingress-ipv4"
|
||||
location = "${var.region}"
|
||||
sku = "Standard"
|
||||
public_ip_address_allocation = "static"
|
||||
name = "${var.cluster_name}-ingress-ipv4"
|
||||
location = "${var.region}"
|
||||
sku = "Standard"
|
||||
allocation_method = "Static"
|
||||
}
|
||||
|
||||
# Network Load Balancer for apiservers and ingress
|
||||
|
@ -5,7 +5,7 @@ terraform {
|
||||
}
|
||||
|
||||
provider "azurerm" {
|
||||
version = "~> 1.19"
|
||||
version = "~> 1.21"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
|
@ -93,7 +93,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
@ -111,7 +111,7 @@ storage:
|
||||
--volume config,kind=host,source=/etc/kubernetes \
|
||||
--mount volume=config,target=/etc/kubernetes \
|
||||
--insecure-options=image \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.2 \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.3 \
|
||||
--net=host \
|
||||
--dns=host \
|
||||
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${var.k8s_domain_name}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.10"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.11"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
|
||||
@ -128,7 +128,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/hostname
|
||||
filesystem: root
|
||||
mode: 0644
|
||||
|
@ -89,7 +89,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/hostname
|
||||
filesystem: root
|
||||
mode: 0644
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${var.k8s_domain_name}"]
|
||||
|
@ -84,8 +84,8 @@ runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- [hostnamectl, set-hostname, ${domain_name}]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.11"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, enable, kubelet.path]
|
||||
|
@ -60,7 +60,7 @@ runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- [hostnamectl, set-hostname, ${domain_name}]
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.10"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.11"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
||||
@ -125,7 +125,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
|
@ -95,7 +95,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.13.2
|
||||
KUBELET_IMAGE_TAG=v1.13.3
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
@ -113,7 +113,7 @@ storage:
|
||||
--volume config,kind=host,source=/etc/kubernetes \
|
||||
--mount volume=config,target=/etc/kubernetes \
|
||||
--insecure-options=image \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.2 \
|
||||
docker://k8s.gcr.io/hyperkube:v1.13.3 \
|
||||
--net=host \
|
||||
--dns=host \
|
||||
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -75,8 +75,8 @@ bootcmd:
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.11"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, enable, kubelet.path]
|
||||
|
@ -51,7 +51,7 @@ bootcmd:
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
|
@@ -1,6 +1,6 @@
 # Heapster

-[Heapster](https://kubernetes.io/docs/user-guide/monitoring/) collects data from apiservers and kubelets and exposes it through a REST API. This API powers the `kubectl top` command and Kubernetes dashboard graphs.
+[Heapster](https://kubernetes.io/docs/user-guide/monitoring/) collects data from apiservers and kubelets and exposes it through a REST API. This API powers the `kubectl top` command.

 ## Create
@@ -16,7 +16,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a worker pool

 ```tf
 module "tempest-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.13.2"
+  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.13.3"

   providers = {
     aws = "aws.default"

@@ -82,7 +82,7 @@ Create a cluster following the Azure [tutorial](../cl/azure.md#cluster). Define

 ```tf
 module "ramius-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.13.2"
+  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.13.3"

   providers = {
     azurerm = "azurerm.default"

@@ -152,7 +152,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#cluster)

 ```tf
 module "yavin-worker-pool" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.13.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.13.3"

   providers = {
     google = "google.default"

@@ -187,11 +187,11 @@ Verify a managed instance group of workers joins the cluster within a few minutes

 ```
 $ kubectl get nodes
 NAME STATUS AGE VERSION
-yavin-controller-0.c.example-com.internal Ready 6m v1.13.2
-yavin-worker-jrbf.c.example-com.internal Ready 5m v1.13.2
-yavin-worker-mzdm.c.example-com.internal Ready 5m v1.13.2
-yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.13.2
-yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.13.2
+yavin-controller-0.c.example-com.internal Ready 6m v1.13.3
+yavin-worker-jrbf.c.example-com.internal Ready 5m v1.13.3
+yavin-worker-mzdm.c.example-com.internal Ready 5m v1.13.3
+yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.13.3
+yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.13.3
 ```

 ### Variables
@@ -18,7 +18,7 @@ Fedora Atomic is a container-optimized operating system designed for large-scale

 For newcomers, Typhoon is a free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via [Terraform](https://www.terraform.io/intro/index.html), and support for AWS, Google Cloud, DigitalOcean, and bare-metal. Typhoon clusters use a [self-hosted](https://github.com/kubernetes-incubator/bootkube) control plane, support [Calico](https://www.projectcalico.org/blog/) and [flannel](https://coreos.com/flannel/docs/latest/) CNI networking, and enable etcd TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/), and network policy.

-Typhoon for Fedora Atomic reflects many of the same principles that created Typhoon for Container Linux. Clusters are declared using plain Terraform configs that can be versioned. In lieu of Ignition, instances are declaratively provisioned with Cloud-Init and kickstart (bare-metal only). TLS assets are generated. Hosts run only a kubelet service, other components are scheduled (i.e. self-hosted). The upstream hyperkube is used directly[^1]. And clusters are kept minimal by offering optional addons for [Ingress](https://typhoon.psdn.io/addons/ingress/), [Prometheus](https://typhoon.psdn.io/addons/prometheus/), and [Grafana](https://typhoon.psdn.io/addons/grafana/). Typhoon compliments and enhances Fedora Atomic as a choice of operating system for Kubernetes.
+Typhoon for Fedora Atomic reflects many of the same principles that created Typhoon for Container Linux. Clusters are declared using plain Terraform configs that can be versioned. In lieu of Ignition, instances are declaratively provisioned with Cloud-Init and kickstart (bare-metal only). TLS assets are generated. Hosts run only a kubelet service, other components are scheduled (i.e. self-hosted). The upstream hyperkube is used directly[^1]. And clusters are kept minimal by offering optional addons for [Ingress](/addons/ingress/), [Prometheus](/addons/prometheus/), and [Grafana](/addons/grafana/). Typhoon compliments and enhances Fedora Atomic as a choice of operating system for Kubernetes.

 Meanwhile, Fedora Atomic adds some promising new low-level technologies:
@@ -1,4 +1,4 @@
-# AWS
+# DigitalOcean

 ## IPv6
@ -3,7 +3,7 @@
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.13.2 cluster on AWS with Fedora Atomic.
|
||||
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on AWS with Fedora Atomic.
|
||||
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "aws-tempest" {
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.13.2"
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.13.3"
|
||||
|
||||
providers = {
|
||||
aws = "aws.default"
|
||||
@ -156,9 +156,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
ip-10-0-3-155 Ready controller,master 10m v1.13.2
|
||||
ip-10-0-26-65 Ready node 10m v1.13.2
|
||||
ip-10-0-41-21 Ready node 10m v1.13.2
|
||||
ip-10-0-3-155 Ready controller,master 10m v1.13.3
|
||||
ip-10-0-26-65 Ready node 10m v1.13.3
|
||||
ip-10-0-41-21 Ready node 10m v1.13.3
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
@ -3,7 +3,7 @@
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.13.2 cluster on bare-metal with Fedora Atomic.
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.13.3 cluster on bare-metal with Fedora Atomic.
|
||||
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
|
||||
|
||||
@ -235,7 +235,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete
|
||||
|
||||
```tf
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.13.2"
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.13.3"
|
||||
|
||||
providers = {
|
||||
local = "local.default"
|
||||
@ -361,9 +361,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
node1.example.com Ready controller,master 10m v1.13.2
|
||||
node2.example.com Ready node 10m v1.13.2
|
||||
node3.example.com Ready node 10m v1.13.2
|
||||
node1.example.com Ready controller,master 10m v1.13.3
|
||||
node2.example.com Ready node 10m v1.13.3
|
||||
node3.example.com Ready node 10m v1.13.3
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
@ -3,7 +3,7 @@
!!! danger
    Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on DigitalOcean with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on DigitalOcean with Fedora Atomic.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.

@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern

```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.13.3"

providers = {
digitalocean = "digitalocean.default"
@ -152,9 +152,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nemo-controller-0 Ready controller,master 10m v1.13.2
nemo-worker-0 Ready node 10m v1.13.2
nemo-worker-1 Ready node 10m v1.13.2
nemo-controller-0 Ready controller,master 10m v1.13.3
nemo-worker-0 Ready node 10m v1.13.3
nemo-worker-1 Ready node 10m v1.13.3
```

List the pods.
@ -204,9 +204,9 @@ Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to contro
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.

```tf
# Declare a DigitalOcean record to also create a zone file
resource "digitalocean_domain" "zone-for-clusters" {
name = "do.example.com"
# Digital Ocean oddly requires an IP here. You may have to delete the A record it makes. :(
ip_address = "8.8.8.8"
}
```
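As a brief aside, the zone declared above is what the cluster module's DNS setting points at. A minimal, illustrative sketch, assuming the module exposes a `dns_zone` variable (implied by the `${cluster_name}.${dns_zone}` record note above); all other required arguments are elided:

```tf
module "digital-ocean-nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.13.3"

  # assumed variable name, based on the DNS A record note above
  dns_zone = "${digitalocean_domain.zone-for-clusters.name}"

  # ...other required arguments (cluster name, region, SSH keys, etc.) elided
}
```

Referencing the zone's `name` attribute also puts the module and the `digitalocean_domain` resource in the same Terraform dependency graph, so the zone exists before cluster records are created.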
@ -3,7 +3,7 @@
!!! danger
    Typhoon for Fedora Atomic is alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes.

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on Google Compute Engine with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on Google Compute Engine with Fedora Atomic.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.

@ -121,7 +121,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne

```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.13.3"

providers = {
google = "google.default"
@ -197,9 +197,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.2
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.2
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.2
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.3
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.3
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.3
```

List the pods.
@ -1,6 +1,6 @@
# AWS

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on AWS with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

@ -92,7 +92,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.

```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.13.3"

providers = {
aws = "aws.default"
@ -113,7 +113,7 @@ module "aws-tempest" {

# optional
worker_count = 2
worker_type = "t2.medium"
worker_type = "t3.small"
}
```

@ -165,9 +165,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-3-155 Ready controller,master 10m v1.13.2
ip-10-0-26-65 Ready node 10m v1.13.2
ip-10-0-41-21 Ready node 10m v1.13.2
ip-10-0-3-155 Ready controller,master 10m v1.13.3
ip-10-0-26-65 Ready node 10m v1.13.3
ip-10-0-41-21 Ready node 10m v1.13.3
```

List the pods.
@ -3,7 +3,7 @@
!!! danger
    Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on Azure with Container Linux.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on Azure with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@ -87,7 +87,7 @@ Define a Kubernetes cluster using the module `azure/container-linux/kubernetes`.

```tf
module "azure-ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.13.3"

providers = {
azurerm = "azurerm.default"
@ -161,9 +161,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ramius-controller-0 Ready controller,master 24m v1.13.2
ramius-worker-000001 Ready node 25m v1.13.2
ramius-worker-000002 Ready node 24m v1.13.2
ramius-controller-0 Ready controller,master 24m v1.13.3
ramius-worker-000001 Ready node 25m v1.13.3
ramius-worker-000002 Ready node 24m v1.13.3
```

List the pods.
@ -249,7 +249,7 @@ Reference the DNS zone with `"${azurerm_dns_zone.clusters.name}"` and its resour
| controller_type | Machine type for controllers | "Standard_DS1_v2" | See below |
| worker_type | Machine type for workers | "Standard_F1" | See below |
| os_image | Channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha |
| disk_size | Size of the disk GB | "40" | "100" |
| disk_size | Size of the disk in GB | "40" | "100" |
| worker_priority | Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Low |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
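For orientation, the optional inputs in the table above all belong in the same `module "azure-ramius"` block shown earlier. An illustrative sketch using only values taken from the table; required arguments are elided and the snippet file path is hypothetical:

```tf
module "azure-ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.13.3"

  # optional inputs, defaults and examples taken from the table above
  controller_type = "Standard_DS1_v2"
  worker_type     = "Standard_F1"
  os_image        = "coreos-stable"
  disk_size       = "40"
  worker_priority = "Low"

  # Container Linux Config snippets; the file path is hypothetical
  worker_clc_snippets = ["${file("./snippets/worker.yaml")}"]

  # ...required arguments (cluster name, DNS zone, SSH key, etc.) elided
}
```

Setting `worker_priority = "Low"` trades availability for cost: per the table's note, low-priority instances can be deallocated at any time.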
@ -1,6 +1,6 @@
# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.13.2 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.13.3 cluster on bare-metal with Container Linux.

First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

@ -179,7 +179,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne

```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.13.3"

providers = {
local = "local.default"
@ -288,9 +288,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.

```
# before v1.13.2
# before v1.13.3
$ ssh debug@node1.example.com
# after v1.13.2
# after v1.13.3
$ ssh -p 2222 core@node1.example.com
```

@ -315,9 +315,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1.example.com Ready controller,master 10m v1.13.2
node2.example.com Ready node 10m v1.13.2
node3.example.com Ready node 10m v1.13.2
node1.example.com Ready controller,master 10m v1.13.3
node2.example.com Ready node 10m v1.13.3
node3.example.com Ready node 10m v1.13.3
```

List the pods.
@ -1,6 +1,6 @@
# Digital Ocean

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on DigitalOcean with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube

```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.13.3"

providers = {
digitalocean = "digitalocean.default"
@ -160,9 +160,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nemo-controller-0 Ready controller,master 10m v1.13.2
nemo-worker-0 Ready node 10m v1.13.2
nemo-worker-1 Ready node 10m v1.13.2
nemo-controller-0 Ready controller,master 10m v1.13.3
nemo-worker-0 Ready node 10m v1.13.3
nemo-worker-1 Ready node 10m v1.13.3
```

List the pods.
@ -214,9 +214,9 @@ Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to contro
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.

```tf
# Declare a DigitalOcean record to also create a zone file
resource "digitalocean_domain" "zone-for-clusters" {
name = "do.example.com"
# Digital Ocean oddly requires an IP here. You may have to delete the A record it makes. :(
ip_address = "8.8.8.8"
}
```
@ -1,6 +1,6 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.13.2 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.13.3 cluster on Google Compute Engine with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

@ -93,7 +93,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber

```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.3"

providers = {
google = "google.default"
@ -168,9 +168,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.2
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.2
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.2
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.3
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.3
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.3
```

List the pods.
@ -11,11 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)
* Advanced features like [worker pools](advanced/worker-pools/), [preemptible](cl/google-cloud/#preemption) workers, and [snippets](advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](addons/overview/)

## Modules

@ -23,20 +23,20 @@ Typhoon provides a Terraform Module for each supported operating system and plat

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| AWS | Container Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| Azure | Container Linux | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | stable |

Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS.

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](atomic/aws.md) | alpha |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | alpha |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | alpha |

## Documentation

@ -49,7 +49,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.13.3"

providers = {
google = "google.default"
@ -90,9 +90,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME ROLES STATUS AGE VERSION
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.2
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.2
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.2
yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.13.3
yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.13.3
yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.13.3
```

List the pods.
@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}

module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.13.2"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.13.3"
...
}
```
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"

cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.10"
Environment="ETCD_IMAGE_TAG=v3.3.11"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -124,7 +124,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.13.2
KUBELET_IMAGE_TAG=v1.13.3
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -94,7 +94,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.13.2
KUBELET_IMAGE_TAG=v1.13.3
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -112,7 +112,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.13.2 \
docker://k8s.gcr.io/hyperkube:v1.13.3 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.13.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.13.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers
@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=bcbdddd8d07c99ab88b2e9ebfb662de4c104de0a"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=c12a11c8006606b59335ecc994abe22358aaf68b"

cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@ -78,8 +78,8 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.11"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, start, --no-block, kubelet.service]
@ -54,7 +54,7 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.13.3"
- [systemctl, start, --no-block, kubelet.service]
users:
- default
@ -1,5 +1,4 @@
mkdocs==1.0.4
mkdocs-material==3.2.0
mkdocs-material==3.3.0
pygments==2.2.0
pymdown-extensions==5.0.0
six==1.10.0