Compare commits

..

50 Commits

Author SHA1 Message Date
855aec5af3 Clarify AWS module output names and changes 2018-06-23 15:29:13 -07:00
0c4d59db87 Use global HTTP/TCP proxy load balancing for Ingress on GCP
* Switch Ingress from regional network load balancers to global
HTTP/TCP Proxy load balancing
* Reduce cost by ~$19/month per cluster. Google bills the first 5
global and regional forwarding rules separately. Typhoon clusters now
use 3 global and 0 regional forwarding rules.
* Worker pools no longer include an extraneous load balancer. Remove
worker module's `ingress_static_ip` output.
* Add `ingress_static_ipv4` output variable
* Add `worker_instance_group` output to allow custom global load
balancing
* Deprecate `controllers_ipv4_public` module output
* Deprecate `ingress_static_ip` module output. Use `ingress_static_ipv4` instead
2018-06-23 14:37:40 -07:00
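A minimal sketch of consuming the new `ingress_static_ipv4` output in DNS, assuming a cluster module named `yavin` and a Google Cloud managed zone named `example-zone` (both names are illustrative, not part of this changeset):
```tf
# Point an Ingress hostname at the global static IPv4 (names are illustrative)
resource "google_dns_record_set" "ingress-a" {
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["${module.yavin.ingress_static_ipv4}"]
}
```
The `worker_instance_group` output serves the fully custom case, where users build their own global load balancer in front of the worker instance group.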
2eaf04c68b Drop hostNetwork from nginx-ingress addon
* Both flannel and Calico support host port via `portmap`
* Allows writing NetworkPolicies that reference ingress pods in `from`
or `to`. HostNetwork pods were difficult to write network policy for
since they could circumvent the CNI network to communicate with pods on
the same node.
2018-06-22 00:46:41 -07:00
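With ingress pods on the CNI network, a NetworkPolicy can now select them by namespace. A hedged example, assuming a hypothetical application namespace and `app: myapp` pod label, and relying on the `name: ingress` namespace label added in this changeset:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: default         # hypothetical application namespace
spec:
  podSelector:
    matchLabels:
      app: myapp             # hypothetical pod label
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress  # label added to the ingress namespace in this changeset
```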
0227014fa0 Fix terraform formatting 2018-06-22 00:28:36 -07:00
fb6f40051f Disable AWS detailed monitoring on worker nodes
* Basic monitoring (free) is sufficient for casual console browsing
* Detailed monitoring (paid) is not leveraged for CloudWatch anyway
* Favor Prometheus for cloud-agnostic metrics, aggregation, and alerting
2018-06-22 00:26:06 -07:00
316f06df06 Combine NLBs to use one NLB per cluster
* Simplify clusters to come with a single NLB
* Listen for apiserver traffic on port 6443 and forward
to controllers (with healthy apiserver)
* Listen for ingress traffic on ports 80/443 and forward
to workers (with healthy ingress controller)
* Reduce cost of default clusters by 1 NLB ($18.14/month)
* Keep using CNAME records to the `ingress_dns_name` NLB and
the nginx-ingress addon for Ingress (up to a few million RPS)
* Users with heavy traffic (many million RPS) can create their
own separate NLB(s) for Ingress and use the new worker target
group outputs (see the sketch below)
* Fix issue where additional worker pools come with an
extraneous network load balancer
2018-06-21 23:46:57 -07:00
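For the heavy-traffic case, a sketch of attaching a user-managed NLB to the new worker target group output; the module name `tempest` and the `subnet_ids` variable are illustrative assumptions, not defined by this changeset:
```tf
# Separate, user-managed NLB forwarding HTTP to Typhoon's worker target group
resource "aws_lb" "custom-ingress" {
  name               = "custom-ingress"
  load_balancer_type = "network"
  internal           = false
  subnets            = ["${var.subnet_ids}"]
}

resource "aws_lb_listener" "custom-ingress-http" {
  load_balancer_arn = "${aws_lb.custom-ingress.arn}"
  protocol          = "TCP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = "${module.tempest.worker_target_group_http}"
  }
}
```
Typical clusters need none of this; a CNAME record to the `ingress_dns_name` output remains the default path.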
f4d3059b00 Update Kubernetes from v1.10.4 to v1.10.5
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105
2018-06-21 22:51:39 -07:00
6c5a1964aa Change kube-apiserver port from 443 to 6443
* Adjust firewall rules, security groups, cloud load balancers,
and generated kubeconfigs
* Facilitates some future simplifications and cost reductions
* Bare-Metal users who exposed kube-apiserver on a WAN via their
router or load balancer will need to adjust that configuration.
This is uncommon; most apiservers are on a LAN and/or behind a VPN,
so no routing infrastructure is configured with the port number
2018-06-19 23:48:51 -07:00
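For illustration only, the cluster entry in a generated kubeconfig (and any WAN port-forward or DNAT rule) now targets 6443 instead of 443; the hostname below is a placeholder:
```yaml
clusters:
  - cluster:
      certificate-authority-data: REDACTED   # placeholder
      server: https://cluster.example.com:6443
    name: example-cluster
```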
6e64634748 Update etcd from v3.3.7 to v3.3.8
* https://github.com/coreos/etcd/releases/tag/v3.3.8
2018-06-19 21:56:21 -07:00
d5de41e07a Update Grafana from 5.1.3 to 5.1.4
* https://github.com/grafana/grafana/releases/tag/v5.1.4
2018-06-19 21:45:15 -07:00
05b99178ae Update prometheus from v2.3.0 to v2.3.1
* https://github.com/prometheus/prometheus/releases/tag/v2.3.1
2018-06-19 21:43:50 -07:00
ed0b781296 Fix possible deadlock for provisioning bare-metal clusters
* Closes #235
2018-06-14 23:15:28 -07:00
51906bf398 Update etcd from v3.3.6 to v3.3.7 2018-06-14 22:46:16 -07:00
18dd7ccc09 Update CLUO from v0.6.0 to v0.7.0 2018-06-14 22:32:36 -07:00
0764bd30b5 Fix typo in AWS MTU tip for using jumbo packets 2018-06-11 18:11:50 -07:00
899424c94f Update mkdocs and docs theme 2018-06-10 12:24:29 -07:00
ca8c0a7ac0 Remove docs site analytics feature 2018-06-10 11:57:39 -07:00
cbe646fba6 Label namespaces to ease writing Network Policies 2018-06-09 11:45:11 -07:00
c166b2ba33 Update prometheus from v2.2.1 to v2.3.0 2018-06-09 11:43:10 -07:00
6676484490 Partially revert b7ed6e7bd35cee39a3f65b47e731938c3006b5cd
* Fix change that broke Google Cloud container-linux and
fedora-atomic https://github.com/poseidon/typhoon/pull/224
2018-06-06 23:48:37 -07:00
79260c48f6 Update Kubernetes from v1.10.3 to v1.10.4 2018-06-06 23:23:11 -07:00
589c3569b7 Update etcd from v3.3.5 to v3.3.6
* https://github.com/coreos/etcd/releases/tag/v3.3.6
2018-06-06 23:19:30 -07:00
4d75ae1373 Recommend against AWS controllers smaller than t2.small 2018-05-30 22:57:15 -07:00
d32e6797ae Annotate Grafana so Prometheus scrapes metrics 2018-05-30 22:37:47 -07:00
32a9a83190 Add Prometheus liveness and readiness probes 2018-05-30 22:34:07 -07:00
6e968cd152 Update Calico from v3.1.2 to v3.1.3
* https://github.com/projectcalico/calico/releases/tag/v3.1.3
* https://github.com/projectcalico/cni-plugin/releases/tag/v3.1.3
2018-05-30 21:32:12 -07:00
6a581ab577 Render etcd_initial_cluster using a template_file 2018-05-30 21:14:49 -07:00
4ac4d7cbaf Add docs fixes and Flatcar Linux announcement 2018-05-22 21:22:50 -07:00
4ea1fde9c5 Update Kubernetes from v1.10.2 to v1.10.3
* https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103
* Update Calico from v3.1.1 to v3.1.2
2018-05-21 21:38:43 -07:00
1e2eec6487 Update Fedora Atomic from 27 to 28 on DigitalOcean
* Fedora Atomic 27 images disappeared from DigitalOcean and
forced this early update (there are known bugs)
2018-05-21 21:30:23 -07:00
28d0891729 Annotate nginx-ingress addon for Prometheus auto-discovery
* Add Google Cloud firewall rule to allow worker to worker access
to health and metrics
2018-05-19 13:13:14 -07:00
2ae126bf68 Fix README link to tutorial 2018-05-19 13:10:22 -07:00
714419342e Update nginx-ingress from 0.14.0 to 0.15.0
* https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.15.0
2018-05-17 21:42:55 -07:00
3701c0b1fe Update Grafana from v5.1.2 to v5.1.3
* https://github.com/grafana/grafana/releases/tag/v5.1.3
2018-05-17 21:36:09 -07:00
0c3557e68e Allow Flatcar Linux os_channel on bare-metal
* Choose the Container Linux derivative Flatcar Linux on
bare-metal by setting os_channel to flatcar-stable, flatcar-beta
or flatcar-alpha
* As with Container Linux from Red Hat, the version (os_version)
must correspond to the channel being used
* Thank you to @dongsupark from Kinvolk
2018-05-17 20:09:36 -07:00
adc6c6866d Rename container_linux_ bare-metal variables
* Allow for Container Linux derivatives
* Replace container_linux_channel variable with `os_channel`
* Replace `container_linux_version` variable with `os_version`
* Please change values `stable`, `beta`, or `alpha` to `coreos-stable`,
`coreos-beta`, `coreos-alpha` (action required!)
2018-05-16 22:40:39 -07:00
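A sketch of the renamed bare-metal variables, assuming a cluster module named `bare-metal-mercury`; the version shown is illustrative and must correspond to the chosen channel:
```tf
module "bare-metal-mercury" {
  # ...other required variables omitted

  os_channel = "coreos-stable"   # was container_linux_channel = "stable"
  os_version = "1745.7.0"        # was container_linux_version; illustrative value
}
```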
9ac7b0655f Add bare-metal network_ip_autodetection_method variable for multi-NIC
* Allow setting the Calico host IPv4 address autodetection method
* Use Calico's default "first-found" method to support single NIC
and bonded NIC nodes
* Allow methods like `can-reach=IP` or `interface=REGEX` for multi
NIC nodes
* https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods
2018-05-15 23:27:34 -07:00
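A sketch of overriding Calico's autodetection on multi-NIC bare-metal nodes; the module name and method value are illustrative:
```tf
module "bare-metal-mercury" {
  # ...other required variables omitted

  networking                      = "calico"
  network_ip_autodetection_method = "interface=eth1"   # or e.g. can-reach=10.0.0.1
}
```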
983489bb52 Re-run terraform fmt for formatting 2018-05-14 23:38:16 -07:00
c2b719dc75 Configure Prometheus to scrape Kubelets directly
* Use Kubelet bearer token authn/authz to scrape metrics
* Drop RBAC permission from nodes/proxy to nodes/metrics
* Stop proxying kubelet scrapes through the apiserver, since
this requires higher privilege (nodes/proxy) and can add
load to the apiserver on large clusters
2018-05-14 23:06:50 -07:00
37981f9fb1 Allow bearer token authn/authz to the Kubelet
* Require Webhook authorization to the Kubelet
* Switch apiserver X509 client cert org to system:masters
to grant the apiserver admin access and satisfy the authorization
requirement. kubectl commands like logs or exec, which have
the apiserver make requests of a kubelet, continue to work
as before
* https://kubernetes.io/docs/admin/kubelet-authentication-authorization/
* https://github.com/poseidon/typhoon/issues/215
2018-05-13 23:20:42 -07:00
5eb11f5104 Allow Flatcar Linux os_image on AWS, rename os_channel
* Replace os_channel variable with os_image to align naming
across clouds. Users who set this option to stable, beta, or
alpha should now set os_image to coreos-stable, coreos-beta,
or coreos-alpha.
* Default os_image to coreos-stable. This continues to use
the most recent image from the stable channel as always.
* Allow Container Linux derivative Flatcar Linux by setting
os_image to `flatcar-stable`, `flatcar-beta`, `flatcar-alpha`
2018-05-12 11:41:58 -07:00
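A sketch of selecting Flatcar Linux on AWS with the renamed variable, assuming a cluster module named `aws-tempest` (illustrative):
```tf
module "aws-tempest" {
  # ...other required variables omitted

  os_image = "flatcar-stable"   # default remains coreos-stable
}
```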
f2ee75ac98 Require Terraform v0.11.x, drop v0.10.x support
* Raise minimum Terraform version to v0.11.0
* Terraform v0.11.x has been supported since Typhoon v1.9.2,
and Terraform v0.10.x was last released in Nov 2017. I'd like
to stop worrying about v0.10.x and remove the migration docs
in a later follow-up
* Migration docs: docs/topics/maintenance.md#terraform-v011x
2018-05-10 02:20:46 -07:00
8b8e364915 Update etcd from v3.3.4 to v3.3.5
* https://github.com/coreos/etcd/releases/tag/v3.3.5
2018-05-10 02:12:53 -07:00
fb88113523 Disable default Google Analytics in Grafana addon
* It's come to my attention that Grafana reports analytics data
by default. Typhoon's philosophy requires user permission
for data collection, so the addon should have this disabled
* http://docs.grafana.org/installation/configuration/#analytics
2018-05-10 01:18:47 -07:00
1854f5c104 Update Grafana from v5.1.1 to v5.1.2
* https://github.com/grafana/grafana/releases/tag/v5.1.2
2018-05-10 01:09:08 -07:00
726b58b697 Update Grafana from v5.0.4 to v5.1.1
* https://github.com/grafana/grafana/releases/tag/v5.1.1
* https://github.com/grafana/grafana/releases/tag/v5.1.0
2018-05-07 22:05:19 -07:00
a5916da0e2 Update min AWS provider from v1.11 to v1.13 2018-05-02 15:16:03 -07:00
a54e3c0da1 Fix Prometheus data dir to /var/lib/prometheus
* A data volume (emptyDir) is mounted to /var/lib/prometheus
* Users who want data persistence can swap the emptyDir for any
desired volume (see the sketch below). Prometheus previously
defaulted to keeping its data in ./data relative to /prometheus.
Override this behavior to store data in /var/lib/prometheus
2018-05-01 22:05:27 -07:00
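A hedged example of the swap mentioned above: replace the `data` emptyDir in the Prometheus deployment's volumes with a PersistentVolumeClaim (claim name hypothetical) if data should survive pod rescheduling:
```yaml
      volumes:
        - name: config
          configMap:
            name: prometheus-config
        - name: rules
          configMap:
            name: prometheus-rules
        - name: data
          persistentVolumeClaim:
            claimName: prometheus-data   # hypothetical, user-provisioned claim
```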
9d4cbb38f6 Rerun terraform fmt 2018-05-01 21:41:22 -07:00
cc29530ba0 Allow preemptible workers on AWS via spot instances
* Add `worker_price` to allow worker spot instances. Defaults
to empty string for the worker autoscaling group to use regular
on-demand instances.
* Add `spot_price` to internal `workers` module for spot worker
pools
* Note: Unlike GCP `preemptible` workers, spot instances require
you to pick a bid price.
2018-04-29 13:31:17 -07:00
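A sketch of requesting spot workers, assuming a cluster module named `aws-tempest`; the bid price is illustrative:
```tf
module "aws-tempest" {
  # ...other required variables omitted

  worker_count = 2
  worker_price = "0.03"   # USD bid per hour; empty string (default) keeps on-demand
}
```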
126 changed files with 1207 additions and 588 deletions

View File

@ -4,8 +4,120 @@ Notable changes between versions.
## Latest
* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))
## v1.10.5
* Kubernetes [v1.10.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105)
* Update etcd from v3.3.6 to v3.3.8 ([#243](https://github.com/poseidon/typhoon/pull/243), [#247](https://github.com/poseidon/typhoon/pull/247))
#### AWS
* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Combine apiserver and ingress NLBs ([#249](https://github.com/poseidon/typhoon/pull/249))
* Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer.
* Ingress addon users may keep using CNAME records to the `ingress_dns_name` module output (up to a few million RPS)
* Ingress users with heavy traffic (many million RPS) should create separate NLB(s)
* Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_dns_name` output
* Disable detailed (paid) monitoring on worker nodes ([#251](https://github.com/poseidon/typhoon/pull/251))
* Favor Prometheus for cloud-agnostic metrics, aggregation, and alerting
* Add `worker_target_group_http` and `worker_target_group_https` module outputs to allow custom load balancing
* Add `target_group_http` and `target_group_https` worker module outputs to allow custom load balancing
#### Bare-Metal
* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Users who exposed kube-apiserver on a WAN via their router/load-balancer will need to adjust that configuration (e.g. DNAT 6443). Most apiservers are on a LAN (internal, VPN-only, etc.), so if you didn't specially configure network gear for 443, no change is needed. (possible action required)
* Fix possible deadlock when provisioning clusters larger than 10 nodes ([#244](https://github.com/poseidon/typhoon/pull/244))
#### DigitalOcean
* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Update firewall rules and generated kubeconfigs
#### Google Cloud
* Use global HTTP and TCP proxy load balancing for Kubernetes Ingress ([#252](https://github.com/poseidon/typhoon/pull/252))
* Switch Ingress from regional network load balancers to global HTTP/TCP Proxy load balancing
* Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
* Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_static_ip` output
* Allow using nginx-ingress addon on Fedora Atomic clusters ([#200](https://github.com/poseidon/typhoon/issues/200))
* Add `worker_instance_group` module output to allow custom global load balancing
* Add `instance_group` worker module output to allow custom global load balancing
* Deprecate `ingress_static_ip` module output. Add `ingress_static_ipv4` module output instead.
* Deprecate `controllers_ipv4_public` module output
#### Addons
* Update CLUO from v0.6.0 to v0.7.0 ([#242](https://github.com/poseidon/typhoon/pull/242))
* Update Prometheus from v2.3.0 to v2.3.1
* Update Grafana from 5.1.3 to 5.1.4
* Drop `hostNetwork` from nginx-ingress addon
* Both flannel and Calico support host port via `portmap`
* Allows writing NetworkPolicies that reference ingress pods in `from` or `to`. HostNetwork pods were difficult to write network policy for since they could circumvent the CNI network to communicate with pods on the same node.
## v1.10.4
* Kubernetes [v1.10.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1104)
* Update etcd from v3.3.5 to v3.3.6
* Update Calico from v3.1.2 to v3.1.3
#### Addons
* Update Prometheus from v2.2.1 to v2.3.0
* Add Prometheus liveness and readiness probes
* Annotate Grafana service so Prometheus scrapes metrics
* Label namespaces to ease writing Network Policies
## v1.10.3
* Kubernetes [v1.10.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1103)
* Add [Flatcar Linux](https://docs.flatcar-linux.org/) (Container Linux derivative) as an option for AWS and bare-metal (thanks @kinvolk folks)
* Allow bearer token authentication to the Kubelet ([#216](https://github.com/poseidon/typhoon/issues/216))
* Require Webhook authorization to the Kubelet
* Switch apiserver X509 client cert org to satisfy new authorization requirement
* Require Terraform v0.11.x and drop support for v0.10.x ([migration guide](https://typhoon.psdn.io/topics/maintenance/#terraform-v011x))
* Update etcd from v3.3.4 to v3.3.5 ([#213](https://github.com/poseidon/typhoon/pull/213))
* Update Calico from v3.1.1 to v3.1.2
#### AWS
* Allow Flatcar Linux by setting `os_image` to flatcar-stable, flatcar-beta, or flatcar-alpha (default remains coreos-stable) ([#211](https://github.com/poseidon/typhoon/pull/211))
* Replace `os_channel` variable with `os_image` to align naming across clouds
* Please change values stable, beta, or alpha to coreos-stable, coreos-beta, coreos-alpha (**action required!**)
* Allow preemptible workers via spot instances ([#202](https://github.com/poseidon/typhoon/pull/202))
* Add `worker_price` to allow worker spot instances. Default to empty string for the worker autoscaling group to use regular on-demand instances
* Add `spot_price` to internal `workers` module for spot [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
#### Bare-Metal
* Allow Flatcar Linux by setting `os_channel` to flatcar-stable, flatcar-beta, flatcar-alpha ([#220](https://github.com/poseidon/typhoon/pull/220))
* Replace `container_linux_channel` variable with `os_channel`
* Please change values stable, beta, or alpha to coreos-stable, coreos-beta, coreos-alpha (**action required!**)
* Replace `container_linux_version` variable with `os_version`
* Add `network_ip_autodetection_method` variable for Calico host IPv4 address detection
* Use Calico's default "first-found" to support single NIC and bonded NIC nodes
* Allow [alternative](https://docs.projectcalico.org/v3.1/reference/node/configuration#ip-autodetection-methods) methods for multi NIC nodes, like can-reach=IP or interface=REGEX
* Deprecate `container_linux_oem` variable
#### DigitalOcean
* Update Fedora Atomic module to use Fedora Atomic 28 ([#225](https://github.com/poseidon/typhoon/pull/225))
* Fedora Atomic 27 images disappeared from DigitalOcean and forced this early update
#### Addons
* Fix Prometheus data directory location ([#203](https://github.com/poseidon/typhoon/pull/203))
* Configure Prometheus to scrape Kubelets directly with bearer token auth instead of proxying through the apiserver ([#217](https://github.com/poseidon/typhoon/pull/217))
* Security improvement: Drop RBAC permission from `nodes/proxy` to `nodes/metrics`
* Scale: Remove per-node proxied scrape load from the apiserver
* Update Grafana from v5.0.4 to v5.1.3 ([#208](https://github.com/poseidon/typhoon/pull/208))
* Disable Grafana Google Analytics by default ([#214](https://github.com/poseidon/typhoon/issues/214))
* Update nginx-ingress from 0.14.0 to 0.15.0
* Annotate nginx-ingress service so Prometheus auto-discovers and scrapes service endpoints ([#222](https://github.com/poseidon/typhoon/pull/222))
## v1.10.2
* Kubernetes [v1.10.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1102)
* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))
* Update Calico from v3.0.4 to v3.1.1 ([#197](https://github.com/poseidon/typhoon/pull/197))
* https://www.projectcalico.org/announcing-calico-v3-1/
* https://github.com/projectcalico/calico/releases/tag/v3.1.0

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@ -30,7 +30,9 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | very alpha |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
## Documentation
@ -44,7 +46,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
providers = {
google = "google.default"
@ -86,9 +88,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.10.5
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.5
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.5
```
List the pods.

View File

@ -18,7 +18,7 @@ spec:
spec:
containers:
- name: update-agent
image: quay.io/coreos/container-linux-update-operator:v0.6.0
image: quay.io/coreos/container-linux-update-operator:v0.7.0
command:
- "/bin/update-agent"
volumeMounts:

View File

@ -15,7 +15,7 @@ spec:
spec:
containers:
- name: update-operator
image: quay.io/coreos/container-linux-update-operator:v0.6.0
image: quay.io/coreos/container-linux-update-operator:v0.7.0
command:
- "/bin/update-operator"
env:

View File

@ -21,7 +21,7 @@ spec:
spec:
containers:
- name: grafana
image: grafana/grafana:5.0.4
image: grafana/grafana:5.1.4
env:
- name: GF_SERVER_HTTP_PORT
value: "8080"
@ -31,6 +31,8 @@ spec:
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Viewer
- name: GF_ANALYTICS_REPORTING_ENABLED
value: "false"
ports:
- name: http
containerPort: 8080

View File

@ -3,6 +3,9 @@ kind: Service
metadata:
name: grafana
namespace: monitoring
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '8080'
spec:
type: ClusterIP
selector:

View File

@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
name: ingress
labels:
name: ingress

View File

@ -20,10 +20,9 @@ spec:
spec:
nodeSelector:
node-role.kubernetes.io/node: ""
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend
@ -67,5 +66,7 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
securityContext:
runAsNonRoot: false
restartPolicy: Always
terminationGracePeriodSeconds: 60

View File

@ -3,6 +3,9 @@ kind: Service
metadata:
name: nginx-ingress-controller
namespace: ingress
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10254'
spec:
type: ClusterIP
selector:

View File

@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
name: ingress
labels:
name: ingress

View File

@ -20,10 +20,9 @@ spec:
spec:
nodeSelector:
node-role.kubernetes.io/node: ""
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend
@ -67,5 +66,7 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
securityContext:
runAsNonRoot: false
restartPolicy: Always
terminationGracePeriodSeconds: 60

View File

@ -3,6 +3,9 @@ kind: Service
metadata:
name: nginx-ingress-controller
namespace: ingress
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10254'
spec:
type: ClusterIP
selector:

View File

@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
name: ingress
labels:
name: ingress

View File

@ -20,10 +20,9 @@ spec:
spec:
nodeSelector:
node-role.kubernetes.io/node: ""
hostNetwork: true
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-backend
@ -67,5 +66,7 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
securityContext:
runAsNonRoot: false
restartPolicy: Always
terminationGracePeriodSeconds: 60

View File

@ -3,6 +3,9 @@ kind: Service
metadata:
name: nginx-ingress-controller
namespace: ingress
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '10254'
spec:
type: ClusterIP
selector:

View File

@ -2,3 +2,5 @@ apiVersion: v1
kind: Namespace
metadata:
name: monitoring
labels:
name: monitoring

View File

@ -56,12 +56,7 @@ data:
target_label: job
# Scrape config for node (i.e. kubelet) /metrics (e.g. 'kubelet_'). Explore
# metrics from a node by scraping kubelet (127.0.0.1:10255/metrics).
#
# Rather than connecting directly to the node, the scrape is proxied though the
# Kubernetes apiserver. This means it will work if Prometheus is running out of
# cluster, or can't connect to nodes for some other reason (e.g. because of
# firewalling).
# metrics from a node by scraping kubelet (127.0.0.1:10250/metrics).
- job_name: 'kubelet'
kubernetes_sd_configs:
- role: node
@ -69,50 +64,34 @@ data:
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Kubelet certs don't have any fixed IP SANs
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor. Explore metrics from a node by
# scraping kubelet (127.0.0.1:10255/metrics/cadvisor).
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# Rather than connecting directly to the node, the scrape is proxied though the
# Kubernetes apiserver. This means it will work if Prometheus is running out of
# cluster, or can't connect to nodes for some other reason (e.g. because of
# firewalling).
# scraping kubelet (127.0.0.1:10250/metrics/cadvisor).
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
metrics_path: /metrics/cadvisor
tls_config:
# Kubelet certs don't have any fixed IP SANs
insecure_skip_verify: true
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# Scrape etcd metrics from controllers
# Scrape etcd metrics from controllers via listen-metrics-urls
- job_name: 'etcd'
kubernetes_sd_configs:
- role: node

View File

@ -17,29 +17,41 @@ spec:
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: quay.io/prometheus/prometheus:v2.2.1
args:
- '--config.file=/etc/prometheus/prometheus.yaml'
ports:
- name: web
containerPort: 9090
volumeMounts:
- name: config
mountPath: /etc/prometheus
- name: rules
mountPath: /etc/prometheus/rules
- name: data
mountPath: /var/lib/prometheus
dnsPolicy: ClusterFirst
restartPolicy: Always
- name: prometheus
image: quay.io/prometheus/prometheus:v2.3.1
args:
- --web.listen-address=0.0.0.0:9090
- --config.file=/etc/prometheus/prometheus.yaml
- --storage.tsdb.path=/var/lib/prometheus
ports:
- name: web
containerPort: 9090
volumeMounts:
- name: config
mountPath: /etc/prometheus
- name: rules
mountPath: /etc/prometheus/rules
- name: data
mountPath: /var/lib/prometheus
readinessProbe:
httpGet:
path: /-/ready
port: 9090
initialDelaySeconds: 10
timeoutSeconds: 10
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
initialDelaySeconds: 10
timeoutSeconds: 10
terminationGracePeriodSeconds: 30
volumes:
- name: config
configMap:
name: prometheus-config
- name: rules
configMap:
name: prometheus-rules
- name: data
emptyDir: {}
- name: config
configMap:
name: prometheus-config
- name: rules
configMap:
name: prometheus-rules
- name: data
emptyDir: {}

View File

@ -6,7 +6,7 @@ rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- nodes/metrics
- services
- endpoints
- pods

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@ -19,5 +19,5 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/cl/aws/).

View File

@ -1,3 +1,13 @@
locals {
# Pick a CoreOS Container Linux derivative
# coreos-stable -> Container Linux AMI
# flatcar-stable -> Flatcar Linux AMI
ami_id = "${local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id}"
flavor = "${element(split("-", var.os_image), 0)}"
channel = "${element(split("-", var.os_image), 1)}"
}
data "aws_ami" "coreos" {
most_recent = true
owners = ["595879546273"]
@ -14,6 +24,26 @@ data "aws_ami" "coreos" {
filter {
name = "name"
values = ["CoreOS-${var.os_channel}-*"]
values = ["CoreOS-${local.channel}-*"]
}
}
data "aws_ami" "flatcar" {
most_recent = true
owners = ["075585003325"]
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "name"
values = ["Flatcar-${local.channel}-*"]
}
}

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.4"
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -76,6 +76,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -121,7 +123,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:

View File

@ -23,7 +23,7 @@ resource "aws_instance" "controllers" {
instance_type = "${var.controller_type}"
ami = "${data.aws_ami.coreos.image_id}"
ami = "${local.ami_id}"
user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"
# storage
@ -54,7 +54,7 @@ data "template_file" "controller_config" {
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
kubeconfig = "${indent(10, module.bootkube.kubeconfig)}"
ssh_authorized_key = "${var.ssh_authorized_key}"
@ -63,14 +63,14 @@ data "template_file" "controller_config" {
}
}
# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
count = "${var.controller_count}"
data "template_file" "etcds" {
count = "${var.controller_count}"
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
triggers {
name = "etcd${count.index}"
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
vars {
index = "${count.index}"
cluster_name = "${var.cluster_name}"
dns_zone = "${var.dns_zone}"
}
}

View File

@ -7,15 +7,15 @@ resource "aws_route53_record" "apiserver" {
# AWS recommends their special "alias" records for ELBs
alias {
name = "${aws_lb.apiserver.dns_name}"
zone_id = "${aws_lb.apiserver.zone_id}"
name = "${aws_lb.nlb.dns_name}"
zone_id = "${aws_lb.nlb.zone_id}"
evaluate_target_health = true
}
}
# Network Load Balancer for apiservers
resource "aws_lb" "apiserver" {
name = "${var.cluster_name}-apiserver"
# Network Load Balancer for apiservers and ingress
resource "aws_lb" "nlb" {
name = "${var.cluster_name}-nlb"
load_balancer_type = "network"
internal = false
@ -24,11 +24,11 @@ resource "aws_lb" "apiserver" {
enable_cross_zone_load_balancing = true
}
# Forward TCP traffic to controllers
# Forward TCP apiserver traffic to controllers
resource "aws_lb_listener" "apiserver-https" {
load_balancer_arn = "${aws_lb.apiserver.arn}"
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = "443"
port = "6443"
default_action {
type = "forward"
@ -36,6 +36,30 @@ resource "aws_lb_listener" "apiserver-https" {
}
}
# Forward HTTP ingress traffic to workers
resource "aws_lb_listener" "ingress-http" {
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = "${module.workers.target_group_http}"
}
}
# Forward HTTPS ingress traffic to workers
resource "aws_lb_listener" "ingress-https" {
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = 443
default_action {
type = "forward"
target_group_arn = "${module.workers.target_group_https}"
}
}
# Target group of controllers
resource "aws_lb_target_group" "controllers" {
name = "${var.cluster_name}-controllers"
@ -43,12 +67,12 @@ resource "aws_lb_target_group" "controllers" {
target_type = "instance"
protocol = "TCP"
port = 443
port = 6443
# TCP health check for apiserver
health_check {
protocol = "TCP"
port = 443
port = 6443
# NLBs required to use same healthy and unhealthy thresholds
healthy_threshold = 3
@ -65,5 +89,5 @@ resource "aws_lb_target_group_attachment" "controllers" {
target_group_arn = "${aws_lb_target_group.controllers.arn}"
target_id = "${element(aws_instance.controllers.*.id, count.index)}"
port = 443
port = 6443
}

View File

@ -1,5 +1,7 @@
# Outputs for Kubernetes Ingress
output "ingress_dns_name" {
value = "${module.workers.ingress_dns_name}"
value = "${aws_lb.nlb.dns_name}"
description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}
@ -23,3 +25,15 @@ output "worker_security_groups" {
output "kubeconfig" {
value = "${module.bootkube.kubeconfig}"
}
# Outputs for custom load balancing
output "worker_target_group_http" {
description = "ARN of a target group of workers for HTTP traffic"
value = "${module.workers.target_group_http}"
}
output "worker_target_group_https" {
description = "ARN of a target group of workers for HTTPS traffic"
value = "${module.workers.target_group_https}"
}

View File

@ -1,11 +1,11 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "aws" {
version = "~> 1.11"
version = "~> 1.13"
}
provider "local" {

View File

@ -36,8 +36,8 @@ resource "aws_security_group_rule" "controller-apiserver" {
type = "ingress"
protocol = "tcp"
from_port = 443
to_port = 443
from_port = 6443
to_port = 6443
cidr_blocks = ["0.0.0.0/0"]
}
@ -91,6 +91,16 @@ resource "aws_security_group_rule" "controller-node-exporter" {
source_security_group_id = "${aws_security_group.worker.id}"
}
resource "aws_security_group_rule" "controller-kubelet" {
security_group_id = "${aws_security_group.controller.id}"
type = "ingress"
protocol = "tcp"
from_port = 10250
to_port = 10250
source_security_group_id = "${aws_security_group.worker.id}"
}
resource "aws_security_group_rule" "controller-kubelet-self" {
security_group_id = "${aws_security_group.controller.id}"

View File

@ -41,10 +41,10 @@ variable "worker_type" {
description = "EC2 instance type for workers"
}
variable "os_channel" {
variable "os_image" {
type = "string"
default = "stable"
description = "Container Linux AMI channel (stable, beta, alpha)"
default = "coreos-stable"
description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
variable "disk_size" {
@ -59,6 +59,12 @@ variable "disk_type" {
description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}
variable "worker_price" {
type = "string"
default = ""
description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}
variable "controller_clc_snippets" {
type = "list"
description = "Controller Container Linux Config snippets"

View File

@ -8,8 +8,9 @@ module "workers" {
security_groups = ["${aws_security_group.worker.id}"]
count = "${var.worker_count}"
instance_type = "${var.worker_type}"
os_channel = "${var.os_channel}"
os_image = "${var.os_image}"
disk_size = "${var.disk_size}"
spot_price = "${var.worker_price}"
# configuration
kubeconfig = "${module.bootkube.kubeconfig}"

View File

@ -1,3 +1,13 @@
locals {
# Pick a CoreOS Container Linux derivative
# coreos-stable -> Container Linux AMI
# flatcar-stable -> Flatcar Linux AMI
ami_id = "${local.flavor == "flatcar" ? data.aws_ami.flatcar.image_id : data.aws_ami.coreos.image_id}"
flavor = "${element(split("-", var.os_image), 0)}"
channel = "${element(split("-", var.os_image), 1)}"
}
data "aws_ami" "coreos" {
most_recent = true
owners = ["595879546273"]
@ -14,6 +24,26 @@ data "aws_ami" "coreos" {
filter {
name = "name"
values = ["CoreOS-${var.os_channel}-*"]
values = ["CoreOS-${local.channel}-*"]
}
}
data "aws_ami" "flatcar" {
most_recent = true
owners = ["075585003325"]
filter {
name = "architecture"
values = ["x86_64"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "name"
values = ["Flatcar-${local.channel}-*"]
}
}

View File

@ -49,6 +49,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -91,7 +93,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -109,7 +111,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.10.2 \
docker://k8s.gcr.io/hyperkube:v1.10.5 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -1,39 +1,4 @@
# Network Load Balancer for Ingress
resource "aws_lb" "ingress" {
name = "${var.name}-ingress"
load_balancer_type = "network"
internal = false
subnets = ["${var.subnet_ids}"]
enable_cross_zone_load_balancing = true
}
# Forward HTTP traffic to workers
resource "aws_lb_listener" "ingress-http" {
load_balancer_arn = "${aws_lb.ingress.arn}"
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.workers-http.arn}"
}
}
# Forward HTTPS traffic to workers
resource "aws_lb_listener" "ingress-https" {
load_balancer_arn = "${aws_lb.ingress.arn}"
protocol = "TCP"
port = 443
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.workers-https.arn}"
}
}
# Network Load Balancer target groups of instances
# Target groups of instances for use with load balancers
resource "aws_lb_target_group" "workers-http" {
name = "${var.name}-workers-http"

View File

@ -1,4 +1,9 @@
output "ingress_dns_name" {
value = "${aws_lb.ingress.dns_name}"
description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
output "target_group_http" {
description = "ARN of a target group of workers for HTTP traffic"
value = "${aws_lb_target_group.workers-http.arn}"
}
output "target_group_https" {
description = "ARN of a target group of workers for HTTPS traffic"
value = "${aws_lb_target_group.workers-https.arn}"
}

View File

@ -34,10 +34,10 @@ variable "instance_type" {
description = "EC2 instance type"
}
variable "os_channel" {
variable "os_image" {
type = "string"
default = "stable"
description = "Container Linux AMI channel (stable, beta, alpha)"
default = "coreos-stable"
description = "AMI channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
variable "disk_size" {
@ -52,6 +52,12 @@ variable "disk_type" {
description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}
variable "spot_price" {
type = "string"
default = ""
description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}
variable "clc_snippets" {
type = "list"
description = "Container Linux Config snippets"

View File

@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
create_before_destroy = true
}
# Waiting for instance creation delays adding the ASG to state. If instances
# can't be created (e.g. spot price too low), the ASG will be orphaned.
# Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
# used. Disable wait to avoid issues and align with other clouds.
wait_for_capacity_timeout = "0"
tags = [{
key = "Name"
value = "${var.name}-worker"
@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {
# Worker template
resource "aws_launch_configuration" "worker" {
image_id = "${data.aws_ami.coreos.image_id}"
instance_type = "${var.instance_type}"
image_id = "${local.ami_id}"
instance_type = "${var.instance_type}"
spot_price = "${var.spot_price}"
enable_monitoring = false
user_data = "${data.ct_config.worker_ign.rendered}"

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@ -19,5 +19,5 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/cl/aws/).

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -53,6 +53,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -91,8 +93,8 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]

View File

@ -54,7 +54,7 @@ data "template_file" "controller-cloudinit" {
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
kubeconfig = "${indent(6, module.bootkube.kubeconfig)}"
ssh_authorized_key = "${var.ssh_authorized_key}"
@ -63,13 +63,13 @@ data "template_file" "controller-cloudinit" {
}
}
# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
count = "${var.controller_count}"
data "template_file" "etcds" {
count = "${var.controller_count}"
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
triggers {
name = "etcd${count.index}"
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
vars {
index = "${count.index}"
cluster_name = "${var.cluster_name}"
dns_zone = "${var.dns_zone}"
}
}

View File

@ -7,15 +7,15 @@ resource "aws_route53_record" "apiserver" {
# AWS recommends their special "alias" records for ELBs
alias {
name = "${aws_lb.apiserver.dns_name}"
zone_id = "${aws_lb.apiserver.zone_id}"
name = "${aws_lb.nlb.dns_name}"
zone_id = "${aws_lb.nlb.zone_id}"
evaluate_target_health = true
}
}
# Network Load Balancer for apiservers
resource "aws_lb" "apiserver" {
name = "${var.cluster_name}-apiserver"
# Network Load Balancer for apiservers and ingress
resource "aws_lb" "nlb" {
name = "${var.cluster_name}-nlb"
load_balancer_type = "network"
internal = false
@ -24,11 +24,11 @@ resource "aws_lb" "apiserver" {
enable_cross_zone_load_balancing = true
}
# Forward TCP traffic to controllers
# Forward TCP apiserver traffic to controllers
resource "aws_lb_listener" "apiserver-https" {
load_balancer_arn = "${aws_lb.apiserver.arn}"
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = "443"
port = "6443"
default_action {
type = "forward"
@ -36,6 +36,30 @@ resource "aws_lb_listener" "apiserver-https" {
}
}
# Forward HTTP ingress traffic to workers
resource "aws_lb_listener" "ingress-http" {
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = "${module.workers.target_group_http}"
}
}
# Forward HTTPS ingress traffic to workers
resource "aws_lb_listener" "ingress-https" {
load_balancer_arn = "${aws_lb.nlb.arn}"
protocol = "TCP"
port = 443
default_action {
type = "forward"
target_group_arn = "${module.workers.target_group_https}"
}
}
# Target group of controllers
resource "aws_lb_target_group" "controllers" {
name = "${var.cluster_name}-controllers"
@ -43,12 +67,12 @@ resource "aws_lb_target_group" "controllers" {
target_type = "instance"
protocol = "TCP"
port = 443
port = 6443
# TCP health check for apiserver
health_check {
protocol = "TCP"
port = 443
port = 6443
# NLBs required to use same healthy and unhealthy thresholds
healthy_threshold = 3
@ -65,5 +89,5 @@ resource "aws_lb_target_group_attachment" "controllers" {
target_group_arn = "${aws_lb_target_group.controllers.arn}"
target_id = "${element(aws_instance.controllers.*.id, count.index)}"
port = 443
port = 6443
}

View File

@ -1,5 +1,7 @@
# Outputs for Kubernetes Ingress
output "ingress_dns_name" {
value = "${module.workers.ingress_dns_name}"
value = "${aws_lb.nlb.dns_name}"
description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}
@ -23,3 +25,15 @@ output "worker_security_groups" {
output "kubeconfig" {
value = "${module.bootkube.kubeconfig}"
}
# Outputs for custom load balancing
output "worker_target_group_http" {
description = "ARN of a target group of workers for HTTP traffic"
value = "${module.workers.target_group_http}"
}
output "worker_target_group_https" {
description = "ARN of a target group of workers for HTTPS traffic"
value = "${module.workers.target_group_https}"
}

View File

@ -1,11 +1,11 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "aws" {
version = "~> 1.11"
version = "~> 1.13"
}
provider "local" {

View File

@ -36,8 +36,8 @@ resource "aws_security_group_rule" "controller-apiserver" {
type = "ingress"
protocol = "tcp"
from_port = 443
to_port = 443
from_port = 6443
to_port = 6443
cidr_blocks = ["0.0.0.0/0"]
}
@ -91,6 +91,16 @@ resource "aws_security_group_rule" "controller-node-exporter" {
source_security_group_id = "${aws_security_group.worker.id}"
}
resource "aws_security_group_rule" "controller-kubelet" {
security_group_id = "${aws_security_group.controller.id}"
type = "ingress"
protocol = "tcp"
from_port = 10250
to_port = 10250
source_security_group_id = "${aws_security_group.worker.id}"
}
resource "aws_security_group_rule" "controller-kubelet-self" {
security_group_id = "${aws_security_group.controller.id}"

View File

@ -53,6 +53,12 @@ variable "disk_type" {
description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}
variable "worker_price" {
type = "string"
default = ""
description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}
# configuration
variable "ssh_authorized_key" {

View File

@ -9,6 +9,7 @@ module "workers" {
count = "${var.worker_count}"
instance_type = "${var.worker_type}"
disk_size = "${var.disk_size}"
spot_price = "${var.worker_price}"
# configuration
kubeconfig = "${module.bootkube.kubeconfig}"

View File

@ -32,6 +32,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -68,7 +70,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- [systemctl, start, --no-block, kubelet.service]
users:
- default

View File

@ -1,39 +1,4 @@
# Network Load Balancer for Ingress
resource "aws_lb" "ingress" {
name = "${var.name}-ingress"
load_balancer_type = "network"
internal = false
subnets = ["${var.subnet_ids}"]
enable_cross_zone_load_balancing = true
}
# Forward HTTP traffic to workers
resource "aws_lb_listener" "ingress-http" {
load_balancer_arn = "${aws_lb.ingress.arn}"
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.workers-http.arn}"
}
}
# Forward HTTPS traffic to workers
resource "aws_lb_listener" "ingress-https" {
load_balancer_arn = "${aws_lb.ingress.arn}"
protocol = "TCP"
port = 443
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.workers-https.arn}"
}
}
# Network Load Balancer target groups of instances
# Target groups of instances for use with load balancers
resource "aws_lb_target_group" "workers-http" {
name = "${var.name}-workers-http"

View File

@ -1,4 +1,9 @@
output "ingress_dns_name" {
value = "${aws_lb.ingress.dns_name}"
description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
output "target_group_http" {
description = "ARN of a target group of workers for HTTP traffic"
value = "${aws_lb_target_group.workers-http.arn}"
}
output "target_group_https" {
description = "ARN of a target group of workers for HTTPS traffic"
value = "${aws_lb_target_group.workers-https.arn}"
}

View File

@ -46,6 +46,12 @@ variable "disk_type" {
description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}
variable "spot_price" {
type = "string"
default = ""
description = "Spot price in USD for autoscaling group spot instances. Leave as default empty string for autoscaling group to use on-demand instances. Note, switching in-place from spot to on-demand is not possible: https://github.com/terraform-providers/terraform-provider-aws/issues/4320"
}
# configuration
variable "kubeconfig" {

View File

@ -26,6 +26,12 @@ resource "aws_autoscaling_group" "workers" {
create_before_destroy = true
}
# Waiting for instance creation delays adding the ASG to state. If instances
# can't be created (e.g. spot price too low), the ASG will be orphaned.
# Orphaned ASGs escape cleanup, can't be updated, and keep bidding if spot is
# used. Disable wait to avoid issues and align with other clouds.
wait_for_capacity_timeout = "0"
tags = [{
key = "Name"
value = "${var.name}-worker"
@ -35,8 +41,10 @@ resource "aws_autoscaling_group" "workers" {
# Worker template
resource "aws_launch_configuration" "worker" {
image_id = "${data.aws_ami.fedora.image_id}"
instance_type = "${var.instance_type}"
image_id = "${data.aws_ami.fedora.image_id}"
instance_type = "${var.instance_type}"
spot_price = "${var.spot_price}"
enable_monitoring = false
user_data = "${data.template_file.worker-cloudinit.rendered}"

View File

@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).

View File

@ -1,14 +1,15 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["${var.controller_domains}"]
asset_dir = "${var.asset_dir}"
networking = "${var.networking}"
network_mtu = "${var.network_mtu}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["${var.controller_domains}"]
asset_dir = "${var.asset_dir}"
networking = "${var.networking}"
network_mtu = "${var.network_mtu}"
network_ip_autodetection_method = "${var.network_ip_autodetection_method}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
}

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.4"
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@ -84,6 +84,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -122,7 +124,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -31,10 +31,10 @@ storage:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
coreos-install \
${os_flavor}-install \
-d ${install_disk} \
-C ${container_linux_channel} \
-V ${container_linux_version} \
-C ${os_channel} \
-V ${os_version} \
-o "${container_linux_oem}" \
${baseurl_flag} \
-i ignition.json

View File

@ -57,6 +57,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -83,7 +85,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -1,9 +1,9 @@
// Install Container Linux to disk
resource "matchbox_group" "container-linux-install" {
resource "matchbox_group" "install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("container-linux-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
profile = "${var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
name = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
profile = "${local.flavor == "flatcar" ? element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
selector {
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"

View File

@ -1,12 +1,20 @@
locals {
# coreos-stable -> coreos flavor, stable channel
# flatcar-stable -> flatcar flavor, stable channel
flavor = "${element(split("-", var.os_channel), 0)}"
channel = "${element(split("-", var.os_channel), 1)}"
}
// Container Linux Install profile (from release.core-os.net)
resource "matchbox_profile" "container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
kernel = "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
kernel = "http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe.vmlinuz"
initrd = [
"http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
"http://${local.channel}.release.core-os.net/amd64-usr/${var.os_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
@ -24,15 +32,16 @@ resource "matchbox_profile" "container-linux-install" {
data "template_file" "container-linux-install-configs" {
count = "${length(var.controller_names) + length(var.worker_names)}"
template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
template = "${file("${path.module}/cl/install.yaml.tmpl")}"
vars {
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
os_flavor = "${local.flavor}"
os_channel = "${local.channel}"
os_version = "${var.os_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
# only cached-container-linux profile adds -b baseurl
baseurl_flag = ""
@ -40,15 +49,15 @@ data "template_file" "container-linux-install-configs" {
}
// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded container_linux_version into matchbox assets.
// Note: Admin must have downloaded os_version into matchbox assets.
resource "matchbox_profile" "cached-container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
kernel = "/assets/coreos/${var.os_version}/coreos_production_pxe.vmlinuz"
initrd = [
"/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
"/assets/coreos/${var.os_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
@ -66,21 +75,45 @@ resource "matchbox_profile" "cached-container-linux-install" {
data "template_file" "cached-container-linux-install-configs" {
count = "${length(var.controller_names) + length(var.worker_names)}"
template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
template = "${file("${path.module}/cl/install.yaml.tmpl")}"
vars {
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
os_flavor = "${local.flavor}"
os_channel = "${local.channel}"
os_version = "${var.os_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
ssh_authorized_key = "${var.ssh_authorized_key}"
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
}
}
// Flatcar Linux install profile (from release.flatcar-linux.net)
resource "matchbox_profile" "flatcar-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-flatcar-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
kernel = "http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe.vmlinuz"
initrd = [
"http://${local.channel}.release.flatcar-linux.net/amd64-usr/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]
args = [
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]
container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
}
// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = "${length(var.controller_names)}"

View File

@ -1,7 +1,7 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "local" {

View File

@ -2,6 +2,14 @@
resource "null_resource" "copy-controller-secrets" {
count = "${length(var.controller_names)}"
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
]
connection {
type = "ssh"
host = "${element(var.controller_domains, count.index)}"
@ -70,6 +78,14 @@ resource "null_resource" "copy-controller-secrets" {
resource "null_resource" "copy-worker-secrets" {
count = "${length(var.worker_names)}"
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
]
connection {
type = "ssh"
host = "${element(var.worker_domains, count.index)}"

View File

@ -10,14 +10,14 @@ variable "matchbox_http_endpoint" {
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "container_linux_channel" {
variable "os_channel" {
type = "string"
description = "Container Linux channel corresponding to the container_linux_version"
description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
variable "container_linux_version" {
variable "os_version" {
type = "string"
description = "Container Linux version of the kernel/initrd to PXE or the image to install"
description = "Version for a Container Linux derivative to PXE and install (coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha)"
}
# machines
@ -76,6 +76,12 @@ variable "network_mtu" {
default = "1480"
}
variable "network_ip_autodetection_method" {
description = "Method to autodetect the host IPv4 address (applies to calico only)"
type = "string"
default = "first-found"
}
variable "pod_cidr" {
description = "CIDR IPv4 range to assign Kubernetes pods"
type = "string"
@ -103,7 +109,7 @@ variable "cluster_domain_suffix" {
variable "cached_install" {
type = "string"
default = "false"
description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the container_linux_version into matchbox assets."
description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the os_version into matchbox assets."
}
variable "install_disk" {
@ -115,7 +121,7 @@ variable "install_disk" {
variable "container_linux_oem" {
type = "string"
default = ""
description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
description = "DEPRECATED: Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}
variable "kernel_args" {

View File

@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/cl/bare-metal/).

View File

@ -1,7 +1,7 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["${var.controller_domains}"]

View File

@ -38,6 +38,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -82,8 +84,8 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, kubelet.path]

View File

@ -17,6 +17,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -58,7 +60,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

View File

@ -1,6 +1,6 @@
locals {
default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/27"
atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
}
// Cached Fedora Install profile (from matchbox /assets cache)
@ -36,14 +36,15 @@ data "template_file" "install-kickstarts" {
vars {
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
atomic_assets_endpoint = "${local.atomic_assets_endpoint}"
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
}
}
// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
# cloud-init
generic_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
}
@ -65,8 +66,9 @@ data "template_file" "controller-configs" {
// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
# cloud-init
generic_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
}

View File

@ -1,7 +1,7 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "local" {

View File

@ -2,6 +2,14 @@
resource "null_resource" "copy-controller-secrets" {
count = "${length(var.controller_names)}"
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
]
connection {
type = "ssh"
host = "${element(var.controller_domains, count.index)}"
@ -68,6 +76,14 @@ resource "null_resource" "copy-controller-secrets" {
resource "null_resource" "copy-worker-secrets" {
count = "${length(var.worker_names)}"
# Without depends_on, remote-exec could start and wait for machines before
# matchbox groups are written, causing a deadlock.
depends_on = [
"matchbox_group.install",
"matchbox_group.controller",
"matchbox_group.worker",
]
connection {
type = "ssh"
host = "${element(var.worker_domains, count.index)}"

View File

@ -11,8 +11,9 @@ variable "matchbox_http_endpoint" {
}
variable "atomic_assets_endpoint" {
type = "string"
type = "string"
default = ""
description = <<EOD
HTTP endpoint serving the Fedora Atomic Host vmlinuz, initrd, os repo, and ostree repo (e.g. `http://example.com/some/path`).

View File

@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.4"
Environment="ETCD_IMAGE_TAG=v3.3.8"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@ -87,6 +87,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -127,7 +129,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:

View File

@ -60,6 +60,8 @@ systemd:
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -97,7 +99,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.10.2
KUBELET_IMAGE_TAG=v1.10.5
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -115,7 +117,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.10.2 \
docker://k8s.gcr.io/hyperkube:v1.10.5 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -69,20 +69,20 @@ data "template_file" "controller_config" {
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
}
}
# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
count = "${var.controller_count}"
data "template_file" "etcds" {
count = "${var.controller_count}"
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
triggers {
name = "etcd${count.index}"
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
vars {
index = "${count.index}"
cluster_name = "${var.cluster_name}"
dns_zone = "${var.dns_zone}"
}
}

View File

@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {
tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
# allow ssh, http/https ingress, and peer-to-peer traffic
# allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
inbound_rule = [
{
protocol = "tcp"
@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
port_range = "443"
source_addresses = ["0.0.0.0/0", "::/0"]
},
{
protocol = "tcp"
port_range = "6443"
source_addresses = ["0.0.0.0/0", "::/0"]
},
{
protocol = "udp"
port_range = "1-65535"

View File

@ -1,7 +1,7 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "digitalocean" {

View File

@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/cl/digital-ocean/).

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=1d4db824f09246266a6b9e54df5d4df5dcd4477a"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -53,6 +53,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -88,8 +90,8 @@ bootcmd:
- [modprobe, ip_vs]
runcmd:
- [systemctl, daemon-reload]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.8"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]

View File

@ -32,6 +32,8 @@ write_files:
content: |
ARGS="--allow-privileged \
--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/etc/kubernetes/ca.crt \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=${cluster_domain_suffix} \
@ -65,7 +67,7 @@ bootcmd:
runcmd:
- [systemctl, daemon-reload]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.5"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

View File

@ -46,7 +46,7 @@ resource "digitalocean_droplet" "controllers" {
user_data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"
ssh_keys = ["${var.ssh_fingerprints}"]
tags = [
"${digitalocean_tag.controllers.id}",
]
@ -69,7 +69,7 @@ data "template_file" "controller-cloudinit" {
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"
ssh_authorized_key = "${var.ssh_authorized_key}"
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
@ -77,13 +77,13 @@ data "template_file" "controller-cloudinit" {
}
}
# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
count = "${var.controller_count}"
data "template_file" "etcds" {
count = "${var.controller_count}"
template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"
triggers {
name = "etcd${count.index}"
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
vars {
index = "${count.index}"
cluster_name = "${var.cluster_name}"
dns_zone = "${var.dns_zone}"
}
}

View File

@ -3,7 +3,7 @@ resource "digitalocean_firewall" "rules" {
tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
# allow ssh, http/https ingress, and peer-to-peer traffic
# allow ssh, apiserver, http/https ingress, and peer-to-peer traffic
inbound_rule = [
{
protocol = "tcp"
@ -20,6 +20,11 @@ resource "digitalocean_firewall" "rules" {
port_range = "443"
source_addresses = ["0.0.0.0/0", "::/0"]
},
{
protocol = "tcp"
port_range = "6443"
source_addresses = ["0.0.0.0/0", "::/0"]
},
{
protocol = "udp"
port_range = "1-65535"

View File

@ -1,7 +1,7 @@
# Terraform version and plugin versions
terraform {
required_version = ">= 0.10.4"
required_version = ">= 0.11.0"
}
provider "digitalocean" {

View File

@ -43,8 +43,8 @@ variable "worker_type" {
variable "image" {
type = "string"
default = "fedora-27-x64-atomic"
description = "OS image from which to initialize the disk (e.g. fedora-27-x64-atomic)"
default = "fedora-28-x64-atomic"
description = "OS image from which to initialize the disk (e.g. fedora-28-x64-atomic)"
}
# configuration

View File

@ -4,7 +4,7 @@ Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP tr
## AWS
On AWS, an elastic load balancer distributes traffic across worker nodes (i.e. an auto-scaling group) running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
On AWS, a network load balancer (NLB) distributes traffic across a target group of worker nodes running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
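Applications can then be exposed by pointing a DNS record at the load balancer's DNS name. A minimal sketch, assuming a cluster module named `aws-tempest` and an existing Route53 zone resource (both placeholders):

```tf
resource "aws_route53_record" "some-application" {
  zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}" # placeholder zone

  name    = "app.example.com."
  type    = "CNAME"
  ttl     = "300"
  records = ["${module.aws-tempest.ingress_dns_name}"]
}
```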

View File

@ -15,7 +15,7 @@ Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a wo
```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.5"
providers = {
aws = "aws.default"
@ -33,7 +33,7 @@ module "tempest-worker-pool" {
count = 2
instance_type = "m5.large"
os_channel = "beta"
os_image = "coreos-beta"
}
```
@ -66,12 +66,13 @@ The AWS internal `workers` module supports a number of [variables](https://githu
|:-----|:------------|:--------|:--------|
| count | Number of instances | 1 | 3 |
| instance_type | EC2 instance type | "t2.small" | "t2.medium" |
| os_channel | Container Linux AMI channel | stable | "beta", "alpha" |
| os_image | AMI channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the disk in GB | 40 | 100 |
| spot_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| service_cidr | Must match `service_cidr` of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | Must match `cluster_domain_suffix` of cluster | "cluster.local" | "k8s.example.com" |
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/) or per-region and per-type [spot prices](https://aws.amazon.com/ec2/spot/pricing/).
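As a sketch (values illustrative, other required variables elided as in the example above), a pool that bids on spot instances adds `spot_price`:

```tf
module "tempest-spot-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.5"
  ...
  count         = 2
  instance_type = "m5.large"
  spot_price    = "0.10"
}
```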
## Google Cloud
@ -79,7 +80,7 @@ Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#clu
```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.5"
providers = {
google = "google.default"
@ -113,11 +114,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
```
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.10.2
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.10.5
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.5
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.5
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.10.5
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.10.5
```
### Variables

View File

@ -1,5 +1,15 @@
# Announce <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo-small.png">
## May 23, 2018
Starting in v1.10.3, Typhoon AWS and bare-metal `container-linux` modules allow picking between the Red Hat [Container Linux](https://coreos.com/os/docs/latest/) (formerly CoreOS Container Linux) and Kinvolk [Flatcar Linux](https://www.flatcar-linux.org/) operating system. Flatcar Linux serves as a drop-in compatible "friendly fork" of Container Linux. Flatcar Linux publishes the same channels and versions as Container Linux and gets provisioned, managed, and operated in an identical way (e.g. login as user "core").
On AWS, pick the Container Linux derivative channel by setting `os_image` to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha.
On bare-metal, pick the Container Linux derivative channel by setting `os_channel` to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha. Set `os_version` to the release version to PXE boot and install. Variables `container_linux_channel` and `container_linux_version` have been dropped.
Flatcar Linux provides a familiar Container Linux experience, with support from Kinvolk as an alternative to Red Hat. Typhoon offers the choice of Container Linux vendor to satisfy differing preferences and to diversify technology underpinnings, while providing a consistent Kubernetes experience across operating systems, clouds, and on-premises environments.
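As a sketch (other variables elided as in the tutorials, the version number is illustrative), selecting Flatcar Linux is just a matter of the OS variables:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.5"
  ...
  os_image = "flatcar-stable"
}

module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
  ...
  os_channel = "flatcar-stable"
  os_version = "1632.3.0" # illustrative version number
}
```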
## April 26, 2018
Introducing Typhoon Kubernetes clusters for Fedora Atomic!

View File

@ -3,9 +3,9 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
@ -44,7 +44,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf
```tf
provider "aws" {
version = "~> 1.11.0"
version = "~> 1.13.0"
alias = "default"
region = "eu-central-1"
@ -83,7 +83,7 @@ Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.5"
providers = {
aws = "aws.default"
@ -156,9 +156,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.10.2
ip-10-0-19-112 Ready 34m v1.10.2
ip-10-0-4-22 Ready 34m v1.10.2
ip-10-0-12-221 Ready 34m v1.10.5
ip-10-0-19-112 Ready 34m v1.10.5
ip-10-0-4-22 Ready 34m v1.10.5
```
List the pods.
@ -227,6 +227,7 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| worker_type | EC2 instance type for workers | "t2.small" | See below |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
@ -236,6 +237,8 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8991! You will get better pod-to-pod bandwidth.
!!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
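For instance, a sketch (other variables elided) of enabling jumbo frame MTU for Calico on supported instance types:

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.5"
  ...
  networking  = "calico"
  network_mtu = "8981"
}
```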

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Fedora Atomic.
In this tutorial, we'll network boot and provision a Kubernetes v1.10.5 cluster on bare-metal with Fedora Atomic.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
@ -234,7 +234,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernete
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.5"
providers = {
local = "local.default"
@ -360,9 +360,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.10.2
node2.example.com Ready 11m v1.10.2
node3.example.com Ready 11m v1.10.2
node1.example.com Ready 11m v1.10.5
node2.example.com Ready 11m v1.10.5
node3.example.com Ready 11m v1.10.5
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on DigitalOcean with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on DigitalOcean with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
@ -77,7 +77,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubern
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.5"
providers = {
digitalocean = "digitalocean.default"
@ -152,9 +152,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.10.2
10.132.115.81 Ready 10m v1.10.2
10.132.124.107 Ready 10m v1.10.2
10.132.110.130 Ready 10m v1.10.5
10.132.115.81 Ready 10m v1.10.5
10.132.124.107 Ready 10m v1.10.5
```
List the pods.

View File

@ -3,7 +3,7 @@
!!! danger
Typhoon for Fedora Atomic is very alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Some addons don't work yet. Expect rough edges and changes.
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on Google Compute Engine with Fedora Atomic.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on Google Compute Engine with Fedora Atomic.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
@ -119,7 +119,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kuberne
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.5"
providers = {
google = "google.default"
@ -195,9 +195,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.10.5
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.5
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.5
```
List the pods.

View File

@ -1,8 +1,8 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Container Linux.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
@ -57,7 +57,7 @@ Configure the AWS provider to use your access key credentials in a `providers.tf
```tf
provider "aws" {
version = "~> 1.11.0"
version = "~> 1.13.0"
alias = "default"
region = "eu-central-1"
@ -96,7 +96,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.5"
providers = {
aws = "aws.default"
@ -169,9 +169,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.10.2
ip-10-0-19-112 Ready 34m v1.10.2
ip-10-0-4-22 Ready 34m v1.10.2
ip-10-0-12-221 Ready 34m v1.10.5
ip-10-0-19-112 Ready 34m v1.10.5
ip-10-0-4-22 Ready 34m v1.10.5
```
List the pods.
@ -241,9 +241,10 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t2.small" | See below |
| worker_type | EC2 instance type for workers | "t2.small" | See below |
| os_channel | Container Linux AMI channel | stable | stable, beta, alpha |
| os_image | AMI channel for a Container Linux derivative | coreos-stable | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| worker_price | Spot price in USD for workers. Leave as default empty string for regular on-demand instances | "" | "0.10" |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
@ -255,6 +256,8 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8991! You will get better pod-to-pod bandwidth.
!!! warning
Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller.
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Container Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.10.5 cluster on bare-metal with Container Linux.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
@ -174,7 +174,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
providers = {
local = "local.default"
@ -186,8 +186,8 @@ module "bare-metal-mercury" {
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
container_linux_channel = "stable"
container_linux_version = "1632.3.0"
os_channel = "coreos-stable"
os_version = "1632.3.0"
# configuration
k8s_domain_name = "node1.example.com"
@ -283,9 +283,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
```
# before v1.10.2
# before v1.10.5
$ ssh debug@node1.example.com
# after v1.10.2
# after v1.10.5
$ ssh -p 2222 core@node1.example.com
```
@ -310,9 +310,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.10.2
node2.example.com Ready 11m v1.10.2
node3.example.com Ready 11m v1.10.2
node1.example.com Ready 11m v1.10.5
node2.example.com Ready 11m v1.10.5
node3.example.com Ready 11m v1.10.5
```
List the pods.
@ -353,8 +353,8 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | mercury |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:port |
| container_linux_channel | Container Linux channel | stable, beta, alpha |
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
| os_channel | Channel for a Container Linux derivative | coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, flatcar-alpha |
| os_version | Version for a Container Linux derivative to PXE and install | 1632.3.0 |
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
@ -369,11 +369,11 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-me
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this | false | true |
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this (coreos only for now) | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| container_linux_oem | Specify alternative OEM image ids for the disk install | "" | "vmware_raw", "xen" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| network_ip_autodetection_method | Method to detect host IPv4 address (calico-only) | first-found | can-reach=10.0.0.1 |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
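A sketch (other variables elided as in the module example above) of overriding Calico's IP autodetection on hosts with multiple interfaces:

```tf
module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
  ...
  networking                      = "calico"
  network_ip_autodetection_method = "can-reach=10.0.0.1"
}
```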

View File

@ -1,6 +1,6 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on DigitalOcean with Container Linux.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on DigitalOcean with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
@ -90,7 +90,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.5"
providers = {
digitalocean = "digitalocean.default"
@ -164,9 +164,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.10.2
10.132.115.81 Ready 10m v1.10.2
10.132.124.107 Ready 10m v1.10.2
10.132.110.130 Ready 10m v1.10.5
10.132.115.81 Ready 10m v1.10.5
10.132.124.107 Ready 10m v1.10.5
```
List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on Google Compute Engine with Container Linux.
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on Google Compute Engine with Container Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
@ -97,7 +97,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
providers = {
google = "google.default"
@ -172,9 +172,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.10.5
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.5
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.5
```
List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
@ -30,7 +30,9 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | very alpha |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | alpha |
The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux, a friendly fork.
## Documentation
@ -43,7 +45,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.5"
providers = {
google = "google.default"
@@ -84,9 +86,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
yavin-controller-0.c.example-com.internal Ready 6m v1.10.5
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.5
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.5
```
List the pods.


@@ -18,7 +18,7 @@ module "google-cloud-yavin" {
}
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.2"
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.5"
...
}
```


@@ -9,7 +9,7 @@ Provisioning times vary based on the operating system and platform. Sampling the
| AWS | 6 min | 5 min |
| Bare-Metal | 10-15 min | NA |
| Digital Ocean | 3 min 30 sec | 20 sec |
| Google Cloud | 9 min | 4 min 30 sec |
| Google Cloud | 10 min | 4 min 30 sec |
Notes:
@@ -25,7 +25,7 @@ Network performance varies based on the platform and CNI plugin. `iperf` was used
|----------------------------|-------:|-------------:|-------------:|
| AWS (flannel) | ? | 976 MB/s | 900-999 MB/s |
| AWS (calico, MTU 1480) | ? | 976 MB/s | 100-350 MB/s |
| AWS (calico, MTU 8991) | ? | 976 MB/s | 900-999 MB/s |
| AWS (calico, MTU 8981) | ? | 976 MB/s | 900-999 MB/s |
| Bare-Metal (flannel) | 1 GB/s | 934 MB/s | 903 MB/s |
| Bare-Metal (calico) | 1 GB/s | 941 MB/s | 931 MB/s |
| Bare-Metal (flannel, bond) | 3 GB/s | 2.3 GB/s | 1.17 GB/s |
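The 8981 figure matches AWS's 9001-byte jumbo-frame MTU minus 20 bytes of overhead for Calico's IP-in-IP encapsulation. A minimal sketch of raising the MTU, assuming the AWS module exposes `networking` and `network_mtu` variables (verify both names against the module's inputs):

```tf
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.5"

  # ...providers and required variables...

  networking  = "calico"
  # assumption: raise the CNI MTU only when the chosen instance types support jumbo frames
  network_mtu = 8981
}
```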


@@ -11,12 +11,12 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
## Docs
Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/google-cloud/).
Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/cl/google-cloud/).


@@ -14,43 +14,45 @@ resource "google_dns_record_set" "apiserver" {
# Static IPv4 address for the TCP Proxy Load Balancer
resource "google_compute_global_address" "apiserver-ipv4" {
name = "${var.cluster_name}-apiserver-ip"
name = "${var.cluster_name}-apiserver-ip"
ip_version = "IPV4"
}
# Forward IPv4 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "apiserver" {
name = "${var.cluster_name}-apiserver"
ip_address = "${google_compute_global_address.apiserver-ipv4.address}"
name = "${var.cluster_name}-apiserver"
ip_address = "${google_compute_global_address.apiserver-ipv4.address}"
ip_protocol = "TCP"
port_range = "443"
target = "${google_compute_target_tcp_proxy.apiserver.self_link}"
port_range = "443"
target = "${google_compute_target_tcp_proxy.apiserver.self_link}"
}
# Global TCP Proxy Load Balancer for apiservers
resource "google_compute_target_tcp_proxy" "apiserver" {
name = "${var.cluster_name}-apiserver"
description = "Distribute TCP load across ${var.cluster_name} controllers"
name = "${var.cluster_name}-apiserver"
description = "Distribute TCP load across ${var.cluster_name} controllers"
backend_service = "${google_compute_backend_service.apiserver.self_link}"
}
# Global backend service backed by unmanaged instance groups
resource "google_compute_backend_service" "apiserver" {
name = "${var.cluster_name}-apiserver"
name = "${var.cluster_name}-apiserver"
description = "${var.cluster_name} apiserver service"
protocol = "TCP"
port_name = "apiserver"
protocol = "TCP"
port_name = "apiserver"
session_affinity = "NONE"
timeout_sec = "60"
timeout_sec = "60"
# controller(s) spread across zonal instance groups
backend {
group = "${google_compute_instance_group.controllers.0.self_link}"
}
backend {
group = "${google_compute_instance_group.controllers.1.self_link}"
}
backend {
group = "${google_compute_instance_group.controllers.2.self_link}"
}
@@ -74,22 +76,22 @@ resource "google_compute_instance_group" "controllers" {
instances = [
"${matchkeys(google_compute_instance.controllers.*.self_link,
google_compute_instance.controllers.*.zone,
list(element(local.zones, count.index)))}"
list(element(local.zones, count.index)))}",
]
}
# TCP health check for apiserver
resource "google_compute_health_check" "apiserver" {
name = "${var.cluster_name}-apiserver-tcp-health"
name = "${var.cluster_name}-apiserver-tcp-health"
description = "TCP health check for kube-apiserver"
timeout_sec = 5
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 1
healthy_threshold = 1
unhealthy_threshold = 3
tcp_health_check {
port = "443"
port = "443"
}
}
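The chain above runs global forwarding rule → target TCP proxy → backend service → zonal instance groups. The backend service selects a backend port by the symbolic name `apiserver`, so each controller instance group is expected to declare a matching named port. A minimal sketch, assuming the named port is defined on the `google_compute_instance_group.controllers` resource and maps to port 443 as in this diff:

```tf
resource "google_compute_instance_group" "controllers" {
  # ...count, name, zone, and instances as shown above...

  # assumption: map the backend service's port_name to the kube-apiserver port
  named_port {
    name = "apiserver"
    port = "443"
  }
}
```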

Some files were not shown because too many files have changed in this diff.