Mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-08-02 20:11:33 +02:00)

Compare commits (45 commits)
SHA1:
385584b712
731a6ec23a
e889430926
d81a091756
32ddfa94e1
681450aa0d
fafa028052
86e5adf348
a89f25e31a
2e4bf4d7ae
b6a51d0b68
567e18f015
0a7fab56e2
d784b0fca6
cd913986df
af54efec28
7198b9016c
f36c890234
233ec6dcb0
3f2978821b
9b88d4bbfd
3dde4ba8ba
e148552220
d8d1468f03
2b74aba564
24d230505a
cf22e70b46
b3cf9508b6
5212684472
f990473cde
8523a086e2
19bc5aea9e
8d7cfc1a45
9969c357da
4e43b2ff48
ddc75e99ac
b80a2eb8a0
3610da8b71
485586e5d8
a54f76db2a
e0d9e9979c
ad2e4311d1
490b628e2d
23a8156bdf
9789881243
.github/ISSUE_TEMPLATE.md (vendored, 2 changes)

@@ -5,7 +5,7 @@

### Environment

* Platform: aws, bare-metal, google-cloud, digital-ocean
* OS: container-linux, fedora-cloud
* OS: container-linux, fedora-atomic
* Terraform: `terraform version`
* Plugins: Provider plugin versions
* Ref: Git SHA (if applicable)
CHANGES.md (23 changes)

@@ -4,12 +4,33 @@ Notable changes between versions.

## Latest

* [Introduce](https://typhoon.psdn.io/announce/#april-26-2018) Typhoon for Fedora Atomic ([#199](https://github.com/poseidon/typhoon/pull/199))
* Kubernetes [v1.10.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1102)
* Update Calico from v3.0.4 to v3.1.1 ([#197](https://github.com/poseidon/typhoon/pull/197))
  * https://www.projectcalico.org/announcing-calico-v3-1/
  * https://github.com/projectcalico/calico/releases/tag/v3.1.0
* Update etcd from v3.3.3 to v3.3.4
* Update kube-dns from v1.14.9 to v1.14.10

#### Google Cloud

* Add support for multi-controller clusters (i.e. multi-master) ([#54](https://github.com/poseidon/typhoon/issues/54), [#190](https://github.com/poseidon/typhoon/pull/190))
* Switch from Google Cloud network load balancer to a TCP proxy load balancer. Avoid a [bug](https://issuetracker.google.com/issues/67366622) in Google network load balancers that limited clusters to only bootstrapping one controller node.
* Add TCP health check for apiserver pods on controllers. Replace kubelet check approximation.

#### Addons

* Update nginx-ingress from 0.12.0 to 0.14.0
* Update kube-state-metrics from v1.3.0 to v1.3.1

## v1.10.1

* Kubernetes [v1.10.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1101)
* Enable etcd v3.3 metrics endpoint ([#175](https://github.com/poseidon/typhoon/pull/175))
* Use `k8s.gcr.io` instead of `gcr.io/google_containers` ([#180](https://github.com/poseidon/typhoon/pull/180))
  * Kubernetes [recommends](https://groups.google.com/forum/#!msg/kubernetes-dev/ytjk_rNrTa0/3EFUHvovCAAJ) using the alias to pull from the nearest regional mirror and to abstract the backing container registry
* Update kube-dns from v1.14.8 to v1.14.9
* Update etcd from v3.3.2 to v3.3.3
* Update kube-dns from v1.14.8 to v1.14.9
* Use kubernetes-incubator/bootkube v0.12.0

#### Bare-Metal
README.md (26 changes)

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)

@@ -24,27 +24,27 @@ Typhoon provides a Terraform Module for each supported operating system and plat

| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | very alpha |

## Usage
## Documentation

* [Docs](https://typhoon.psdn.io)
* [Concepts](https://typhoon.psdn.io/concepts/)
* Tutorials
  * [AWS](https://typhoon.psdn.io/aws/)
  * [Bare-Metal](https://typhoon.psdn.io/bare-metal/)
  * [Digital Ocean](https://typhoon.psdn.io/digital-ocean/)
  * [Google-Cloud](https://typhoon.psdn.io/google-cloud/)
* Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
* Tutorials for [AWS](https://typhoon.psdn.io/cl/aws/), [Bare-Metal](https://typhoon.psdn.io/cl/bare-metal/), [Digital Ocean](https://typhoon.psdn.io/cl/digital-ocean/), and [Google-Cloud](https://typhoon.psdn.io/cl/google-cloud/)

## Example
## Usage

Define a Kubernetes cluster by using the Terraform module for your chosen platform and operating system. Here's a minimal example:

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"

  providers = {
    google = "google.default"
  }

@@ -86,9 +86,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou

$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME                                       STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  Ready   6m   v1.10.1
yavin-worker-jrbf.c.example-com.internal   Ready   5m   v1.10.1
yavin-worker-mzdm.c.example-com.internal   Ready   5m   v1.10.1
yavin-controller-0.c.example-com.internal  Ready   6m   v1.10.2
yavin-worker-jrbf.c.example-com.internal   Ready   5m   v1.10.2
yavin-worker-mzdm.c.example-com.internal   Ready   5m   v1.10.2
```

List the pods.
@@ -23,7 +23,7 @@
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -23,7 +23,7 @@
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -23,7 +23,7 @@
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-backend
@@ -22,7 +22,7 @@
      serviceAccountName: kube-state-metrics
      containers:
        - name: kube-state-metrics
          image: quay.io/coreos/kube-state-metrics:v1.3.0
          image: quay.io/coreos/kube-state-metrics:v1.3.1
          ports:
            - name: metrics
              containerPort: 8080
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
@@ -1,4 +1,4 @@
# kube-apiserver Network Load Balancer DNS Record
# Network Load Balancer DNS Record
resource "aws_route53_record" "apiserver" {
  zone_id = "${var.dns_zone_id}"

@@ -24,7 +24,7 @@ resource "aws_lb" "apiserver" {
  enable_cross_zone_load_balancing = true
}

# Forward HTTP traffic to controllers
# Forward TCP traffic to controllers
resource "aws_lb_listener" "apiserver-https" {
  load_balancer_arn = "${aws_lb.apiserver.arn}"
  protocol = "TCP"

@@ -45,7 +45,7 @@ resource "aws_lb_target_group" "controllers" {
  protocol = "TCP"
  port = 443

  # Kubelet HTTP health check
  # TCP health check for apiserver
  health_check {
    protocol = "TCP"
    port = 443
@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=db36b92abced3c4b0af279adfd5ed4bf0cf8c39f"
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"

  cluster_name = "${var.cluster_name}"
  api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -7,7 +7,7 @@ systemd:
    - name: 40-etcd-cluster.conf
      contents: |
        [Service]
        Environment="ETCD_IMAGE_TAG=v3.3.3"
        Environment="ETCD_IMAGE_TAG=v3.3.4"
        Environment="ETCD_NAME=${etcd_name}"
        Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
        Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"

@@ -56,6 +56,8 @@ systemd:
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-lib-calico,kind=host,source=/var/lib/calico \
          --mount volume=var-lib-calico,target=/var/lib/calico \
          --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
          --mount volume=opt-cni-bin,target=/opt/cni/bin \
          --volume var-log,kind=host,source=/var/log \

@@ -67,6 +69,7 @@ systemd:
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid

@@ -118,7 +121,7 @@ storage:
    contents:
      inline: |
        KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
        KUBELET_IMAGE_TAG=v1.10.1
        KUBELET_IMAGE_TAG=v1.10.2
  - path: /etc/sysctl.d/max-user-watches.conf
    filesystem: root
    contents:
@@ -31,6 +31,8 @@ systemd:
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-lib-calico,kind=host,source=/var/lib/calico \
          --mount volume=var-lib-calico,target=/var/lib/calico \
          --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
          --mount volume=opt-cni-bin,target=/opt/cni/bin \
          --volume var-log,kind=host,source=/var/log \

@@ -40,6 +42,7 @@ systemd:
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid

@@ -88,7 +91,7 @@ storage:
    contents:
      inline: |
        KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
        KUBELET_IMAGE_TAG=v1.10.1
        KUBELET_IMAGE_TAG=v1.10.2
  - path: /etc/sysctl.d/max-user-watches.conf
    filesystem: root
    contents:

@@ -106,7 +109,7 @@ storage:
          --volume config,kind=host,source=/etc/kubernetes \
          --mount volume=config,target=/etc/kubernetes \
          --insecure-options=image \
          docker://k8s.gcr.io/hyperkube:v1.10.1 \
          docker://k8s.gcr.io/hyperkube:v1.10.2 \
          --net=host \
          --dns=host \
          --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
@@ -43,7 +43,7 @@ resource "aws_lb_target_group" "workers-http" {
  protocol = "TCP"
  port = 80

  # Ingress Controller HTTP health check
  # HTTP health check for ingress
  health_check {
    protocol = "HTTP"
    port = 10254

@@ -66,7 +66,7 @@ resource "aws_lb_target_group" "workers-https" {
  protocol = "TCP"
  port = 443

  # Ingress Controller HTTP health check
  # HTTP health check for ingress
  health_check {
    protocol = "HTTP"
    port = 10254
aws/fedora-atomic/kubernetes/LICENSE (new file, 23 lines)

@@ -0,0 +1,23 @@
The MIT License (MIT)

Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
aws/fedora-atomic/kubernetes/README.md (new file, 23 lines)

@@ -0,0 +1,23 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

Typhoon is a minimal and free Kubernetes distribution.

* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Free (freedom and cost) and privacy-respecting
* Practical for labs, datacenters, and clouds

Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the AWS [tutorial](https://typhoon.psdn.io/aws/).
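As a rough sketch of how this new module might be instantiated (not part of the diff; the cluster name, DNS zone, zone ID, SSH key, asset path, and `ref` below are illustrative placeholders, mirroring the Google Cloud example in the top-level README):

```tf
# Hypothetical example; all values are placeholders, and the providers map
# follows the pattern of the top-level README (provider aliases "*.default").
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.2"

  providers = {
    aws = "aws.default"
  }

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKEC0"

  # instances
  controller_count = 1
  worker_count     = 2

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3NZ..."
  asset_dir          = "/home/user/.secrets/clusters/tempest"
}
```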
aws/fedora-atomic/kubernetes/ami.tf (new file, 19 lines)

@@ -0,0 +1,19 @@
data "aws_ami" "fedora" {
  most_recent = true
  owners = ["125523088429"]

  filter {
    name = "architecture"
    values = ["x86_64"]
  }

  filter {
    name = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name = "name"
    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
  }
}
aws/fedora-atomic/kubernetes/apiserver.tf (new file, 69 lines)

@@ -0,0 +1,69 @@
# Network Load Balancer DNS Record
resource "aws_route53_record" "apiserver" {
  zone_id = "${var.dns_zone_id}"

  name = "${format("%s.%s.", var.cluster_name, var.dns_zone)}"
  type = "A"

  # AWS recommends their special "alias" records for ELBs
  alias {
    name = "${aws_lb.apiserver.dns_name}"
    zone_id = "${aws_lb.apiserver.zone_id}"
    evaluate_target_health = true
  }
}

# Network Load Balancer for apiservers
resource "aws_lb" "apiserver" {
  name = "${var.cluster_name}-apiserver"
  load_balancer_type = "network"
  internal = false

  subnets = ["${aws_subnet.public.*.id}"]

  enable_cross_zone_load_balancing = true
}

# Forward TCP traffic to controllers
resource "aws_lb_listener" "apiserver-https" {
  load_balancer_arn = "${aws_lb.apiserver.arn}"
  protocol = "TCP"
  port = "443"

  default_action {
    type = "forward"
    target_group_arn = "${aws_lb_target_group.controllers.arn}"
  }
}

# Target group of controllers
resource "aws_lb_target_group" "controllers" {
  name = "${var.cluster_name}-controllers"
  vpc_id = "${aws_vpc.network.id}"
  target_type = "instance"

  protocol = "TCP"
  port = 443

  # TCP health check for apiserver
  health_check {
    protocol = "TCP"
    port = 443

    # NLBs required to use same healthy and unhealthy thresholds
    healthy_threshold = 3
    unhealthy_threshold = 3

    # Interval between health checks required to be 10 or 30
    interval = 10
  }
}

# Attach controller instances to apiserver NLB
resource "aws_lb_target_group_attachment" "controllers" {
  count = "${var.controller_count}"

  target_group_arn = "${aws_lb_target_group.controllers.arn}"
  target_id = "${element(aws_instance.controllers.*.id, count.index)}"
  port = 443
}
aws/fedora-atomic/kubernetes/bootkube.tf (new file, 17 lines)

@@ -0,0 +1,17 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"

  cluster_name = "${var.cluster_name}"
  api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
  etcd_servers = ["${aws_route53_record.etcds.*.fqdn}"]
  asset_dir = "${var.asset_dir}"
  networking = "${var.networking}"
  network_mtu = "${var.network_mtu}"
  pod_cidr = "${var.pod_cidr}"
  service_cidr = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"

  # Fedora
  trusted_certs_dir = "/etc/pki/tls/certs"
}
aws/fedora-atomic/kubernetes/cloudinit/controller.yaml.tmpl (new file, 107 lines)

@@ -0,0 +1,107 @@
#cloud-config
write_files:
  - path: /etc/etcd/etcd.conf
    content: |
      ETCD_NAME=${etcd_name}
      ETCD_DATA_DIR=/var/lib/etcd
      ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
      ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
      ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
      ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
      ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
      ETCD_STRICT_RECONFIG_CHECK=true
      ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
      ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
      ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
      ETCD_CLIENT_CERT_AUTH=true
      ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
      ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
      ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
      ETCD_PEER_CLIENT_CERT_AUTH=true
  - path: /etc/systemd/system/cloud-metadata.service
    content: |
      [Unit]
      Description=Cloud metadata agent
      [Service]
      Type=oneshot
      Environment=OUTPUT=/run/metadata/cloud
      ExecStart=/usr/bin/mkdir -p /run/metadata
      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
        --url http://169.254.169.254/latest/meta-data/local-ipv4\
        --retry 10)" > $${OUTPUT}'
      [Install]
      WantedBy=multi-user.target
  - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
    content: |
      [Unit]
      Requires=cloud-metadata.service
      After=cloud-metadata.service
      Wants=rpc-statd.service
      [Service]
      ExecStartPre=/bin/mkdir -p /opt/cni/bin
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
      ExecStartPre=/bin/mkdir -p /var/lib/cni
      ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
      ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
      Restart=always
      RestartSec=10
  - path: /etc/kubernetes/kubelet.conf
    content: |
      ARGS="--allow-privileged \
        --anonymous-auth=false \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${k8s_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
        --cni-conf-dir=/etc/kubernetes/cni/net.d \
        --exit-on-lock-contention \
        --kubeconfig=/etc/kubernetes/kubeconfig \
        --lock-file=/var/run/lock/kubelet.lock \
        --network-plugin=cni \
        --node-labels=node-role.kubernetes.io/master \
        --node-labels=node-role.kubernetes.io/controller="true" \
        --pod-manifest-path=/etc/kubernetes/manifests \
        --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
        --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
  - path: /etc/kubernetes/kubeconfig
    permissions: '0644'
    content: |
      ${kubeconfig}
  - path: /var/lib/bootkube/.keep
  - path: /etc/NetworkManager/conf.d/typhoon.conf
    content: |
      [main]
      plugins=keyfile
      [keyfile]
      unmanaged-devices=interface-name:cali*;interface-name:tunl*
  - path: /etc/selinux/config
    owner: root:root
    permissions: '0644'
    content: |
      SELINUX=permissive
      SELINUXTYPE=targeted
bootcmd:
  - [setenforce, Permissive]
  - [systemctl, disable, firewalld, --now]
  # https://github.com/kubernetes/kubernetes/issues/60869
  - [modprobe, ip_vs]
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, restart, NetworkManager]
  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
  - [systemctl, start, --no-block, etcd.service]
  - [systemctl, enable, cloud-metadata.service]
  - [systemctl, start, --no-block, kubelet.service]
users:
  - default
  - name: fedora
    gecos: Fedora Admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: wheel,adm,systemd-journal,docker
    ssh-authorized-keys:
      - "${ssh_authorized_key}"
aws/fedora-atomic/kubernetes/controllers.tf (new file, 75 lines)

@@ -0,0 +1,75 @@
# Discrete DNS records for each controller's private IPv4 for etcd usage
resource "aws_route53_record" "etcds" {
  count = "${var.controller_count}"

  # DNS Zone where record should be created
  zone_id = "${var.dns_zone_id}"

  name = "${format("%s-etcd%d.%s.", var.cluster_name, count.index, var.dns_zone)}"
  type = "A"
  ttl = 300

  # private IPv4 address for etcd
  records = ["${element(aws_instance.controllers.*.private_ip, count.index)}"]
}

# Controller instances
resource "aws_instance" "controllers" {
  count = "${var.controller_count}"

  tags = {
    Name = "${var.cluster_name}-controller-${count.index}"
  }

  instance_type = "${var.controller_type}"

  ami = "${data.aws_ami.fedora.image_id}"
  user_data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"

  # storage
  root_block_device {
    volume_type = "${var.disk_type}"
    volume_size = "${var.disk_size}"
  }

  # network
  associate_public_ip_address = true
  subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
  vpc_security_group_ids = ["${aws_security_group.controller.id}"]

  lifecycle {
    ignore_changes = ["ami"]
  }
}

# Controller Cloud-Init
data "template_file" "controller-cloudinit" {
  count = "${var.controller_count}"

  template = "${file("${path.module}/cloudinit/controller.yaml.tmpl")}"

  vars = {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"

    kubeconfig = "${indent(6, module.bootkube.kubeconfig)}"
    ssh_authorized_key = "${var.ssh_authorized_key}"
    k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
    cluster_domain_suffix = "${var.cluster_domain_suffix}"
  }
}

# Horrible hack to generate a Terraform list of a desired length without dependencies.
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
resource null_resource "repeat" {
  count = "${var.controller_count}"

  triggers {
    name = "etcd${count.index}"
    domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
  }
}
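For illustration only (not part of the diff), this sketch traces what the `repeat` trick and the `formatlist`/`join` expression above would evaluate to for a hypothetical two-controller cluster named `tempest` on `aws.example.com`:

```tf
# Illustrative values, assuming controller_count = 2, cluster_name = "tempest",
# dns_zone = "aws.example.com".
#
#   null_resource.repeat.*.triggers.name
#     => ["etcd0", "etcd1"]
#   null_resource.repeat.*.triggers.domain
#     => ["tempest-etcd0.aws.example.com", "tempest-etcd1.aws.example.com"]
#
#   formatlist("%s=https://%s:2380", names, domains)
#     => ["etcd0=https://tempest-etcd0.aws.example.com:2380",
#         "etcd1=https://tempest-etcd1.aws.example.com:2380"]
#
#   join(",", ...)
#     => "etcd0=https://tempest-etcd0.aws.example.com:2380,etcd1=https://tempest-etcd1.aws.example.com:2380"
```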
aws/fedora-atomic/kubernetes/network.tf (new file, 57 lines)

@@ -0,0 +1,57 @@
data "aws_availability_zones" "all" {}

# Network VPC, gateway, and routes

resource "aws_vpc" "network" {
  cidr_block = "${var.host_cidr}"
  assign_generated_ipv6_cidr_block = true
  enable_dns_support = true
  enable_dns_hostnames = true

  tags = "${map("Name", "${var.cluster_name}")}"
}

resource "aws_internet_gateway" "gateway" {
  vpc_id = "${aws_vpc.network.id}"

  tags = "${map("Name", "${var.cluster_name}")}"
}

resource "aws_route_table" "default" {
  vpc_id = "${aws_vpc.network.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gateway.id}"
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id = "${aws_internet_gateway.gateway.id}"
  }

  tags = "${map("Name", "${var.cluster_name}")}"
}

# Subnets (one per availability zone)

resource "aws_subnet" "public" {
  count = "${length(data.aws_availability_zones.all.names)}"

  vpc_id = "${aws_vpc.network.id}"
  availability_zone = "${data.aws_availability_zones.all.names[count.index]}"

  cidr_block = "${cidrsubnet(var.host_cidr, 4, count.index)}"
  ipv6_cidr_block = "${cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)}"
  map_public_ip_on_launch = true
  assign_ipv6_address_on_creation = true

  tags = "${map("Name", "${var.cluster_name}-public-${count.index}")}"
}

resource "aws_route_table_association" "public" {
  count = "${length(data.aws_availability_zones.all.names)}"

  route_table_id = "${aws_route_table.default.id}"
  subnet_id = "${element(aws_subnet.public.*.id, count.index)}"
}
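As a quick aside on the subnet math (illustrative, not part of the diff): with the module's default `host_cidr` of 10.0.0.0/16, the `cidrsubnet(var.host_cidr, 4, count.index)` expression above carves one /20 per availability zone.

```tf
# Illustrative only, assuming host_cidr = "10.0.0.0/16":
#   cidrsubnet("10.0.0.0/16", 4, 0) => "10.0.0.0/20"    (first AZ)
#   cidrsubnet("10.0.0.0/16", 4, 1) => "10.0.16.0/20"   (second AZ)
#   cidrsubnet("10.0.0.0/16", 4, 2) => "10.0.32.0/20"   (third AZ)
```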
aws/fedora-atomic/kubernetes/outputs.tf (new file, 25 lines)

@@ -0,0 +1,25 @@
output "ingress_dns_name" {
  value = "${module.workers.ingress_dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}

# Outputs for worker pools

output "vpc_id" {
  value = "${aws_vpc.network.id}"
  description = "ID of the VPC for creating worker instances"
}

output "subnet_ids" {
  value = ["${aws_subnet.public.*.id}"]
  description = "List of subnet IDs for creating worker instances"
}

output "worker_security_groups" {
  value = ["${aws_security_group.worker.id}"]
  description = "List of worker security group IDs"
}

output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}
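To show why the worker-pool outputs exist (a sketch under assumed names, not part of the diff), an additional worker pool module could consume them roughly like this; the module name, pool name, ref, and instance type are placeholders:

```tf
# Hypothetical extra worker pool reusing the cluster outputs above.
module "tempest-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes/workers?ref=v1.10.2"

  # AWS, wired to the cluster module's outputs
  vpc_id          = "${module.aws-tempest.vpc_id}"
  subnet_ids      = "${module.aws-tempest.subnet_ids}"
  security_groups = "${module.aws-tempest.worker_security_groups}"

  # configuration
  name               = "tempest-pool"
  kubeconfig         = "${module.aws-tempest.kubeconfig}"
  ssh_authorized_key = "${var.ssh_authorized_key}"

  # instances
  count         = 2
  instance_type = "t2.medium"
}
```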
aws/fedora-atomic/kubernetes/require.tf (new file, 25 lines)

@@ -0,0 +1,25 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.10.4"
}

provider "aws" {
  version = "~> 1.11"
}

provider "local" {
  version = "~> 1.0"
}

provider "null" {
  version = "~> 1.0"
}

provider "template" {
  version = "~> 1.0"
}

provider "tls" {
  version = "~> 1.0"
}

provider "tls" {
  version = "~> 1.0"
}
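For context (a sketch, not from this diff), the caller typically declares its own provider configuration matching these version constraints and passes it to the module via `providers`, as in the README usage example above; the region and credentials path here are placeholders:

```tf
# Illustrative provider configuration in the caller's Terraform; values are placeholders.
provider "aws" {
  version = "~> 1.11"
  alias   = "default"

  region                  = "eu-central-1"
  shared_credentials_file = "/home/user/.config/aws/credentials"
}
```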
aws/fedora-atomic/kubernetes/security.tf (new file, 395 lines)

@@ -0,0 +1,395 @@
# Security Groups (instance firewalls)

# Controller security group

resource "aws_security_group" "controller" {
  name = "${var.cluster_name}-controller"
  description = "${var.cluster_name} controller security group"

  vpc_id = "${aws_vpc.network.id}"

  tags = "${map("Name", "${var.cluster_name}-controller")}"
}

resource "aws_security_group_rule" "controller-icmp" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "icmp"
  from_port = 0
  to_port = 0
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-ssh" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 22
  to_port = 22
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-apiserver" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 443
  to_port = 443
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-etcd" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 2379
  to_port = 2380
  self = true
}

resource "aws_security_group_rule" "controller-etcd-metrics" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 2381
  to_port = 2381
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-flannel" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "udp"
  from_port = 8472
  to_port = 8472
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-flannel-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "udp"
  from_port = 8472
  to_port = 8472
  self = true
}

resource "aws_security_group_rule" "controller-node-exporter" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 9100
  to_port = 9100
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-kubelet-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10250
  to_port = 10250
  self = true
}

resource "aws_security_group_rule" "controller-kubelet-read" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10255
  to_port = 10255
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-kubelet-read-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10255
  to_port = 10255
  self = true
}

resource "aws_security_group_rule" "controller-bgp" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 179
  to_port = 179
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-bgp-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 179
  to_port = 179
  self = true
}

resource "aws_security_group_rule" "controller-ipip" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = 4
  from_port = 0
  to_port = 0
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-ipip-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = 4
  from_port = 0
  to_port = 0
  self = true
}

resource "aws_security_group_rule" "controller-ipip-legacy" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = 94
  from_port = 0
  to_port = 0
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-ipip-legacy-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "ingress"
  protocol = 94
  from_port = 0
  to_port = 0
  self = true
}

resource "aws_security_group_rule" "controller-egress" {
  security_group_id = "${aws_security_group.controller.id}"

  type = "egress"
  protocol = "-1"
  from_port = 0
  to_port = 0
  cidr_blocks = ["0.0.0.0/0"]
  ipv6_cidr_blocks = ["::/0"]
}

# Worker security group

resource "aws_security_group" "worker" {
  name = "${var.cluster_name}-worker"
  description = "${var.cluster_name} worker security group"

  vpc_id = "${aws_vpc.network.id}"

  tags = "${map("Name", "${var.cluster_name}-worker")}"
}

resource "aws_security_group_rule" "worker-icmp" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "icmp"
  from_port = 0
  to_port = 0
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-ssh" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 22
  to_port = 22
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-http" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 80
  to_port = 80
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-https" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 443
  to_port = 443
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-flannel" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "udp"
  from_port = 8472
  to_port = 8472
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-flannel-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "udp"
  from_port = 8472
  to_port = 8472
  self = true
}

resource "aws_security_group_rule" "worker-node-exporter" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 9100
  to_port = 9100
  self = true
}

resource "aws_security_group_rule" "ingress-health" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10254
  to_port = 10254
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-kubelet" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10250
  to_port = 10250
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-kubelet-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10250
  to_port = 10250
  self = true
}

resource "aws_security_group_rule" "worker-kubelet-read" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10255
  to_port = 10255
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-kubelet-read-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 10255
  to_port = 10255
  self = true
}

resource "aws_security_group_rule" "worker-bgp" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 179
  to_port = 179
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-bgp-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = "tcp"
  from_port = 179
  to_port = 179
  self = true
}

resource "aws_security_group_rule" "worker-ipip" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = 4
  from_port = 0
  to_port = 0
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-ipip-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = 4
  from_port = 0
  to_port = 0
  self = true
}

resource "aws_security_group_rule" "worker-ipip-legacy" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = 94
  from_port = 0
  to_port = 0
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-ipip-legacy-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "ingress"
  protocol = 94
  from_port = 0
  to_port = 0
  self = true
}

resource "aws_security_group_rule" "worker-egress" {
  security_group_id = "${aws_security_group.worker.id}"

  type = "egress"
  protocol = "-1"
  from_port = 0
  to_port = 0
  cidr_blocks = ["0.0.0.0/0"]
  ipv6_cidr_blocks = ["::/0"]
}
aws/fedora-atomic/kubernetes/ssh.tf (new file, 89 lines)

@@ -0,0 +1,89 @@
# Secure copy etcd TLS assets to controllers.
resource "null_resource" "copy-controller-secrets" {
  count = "${var.controller_count}"

  connection {
    type = "ssh"
    host = "${element(aws_instance.controllers.*.public_ip, count.index)}"
    user = "fedora"
    timeout = "15m"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_ca_cert}"
    destination = "$HOME/etcd-client-ca.crt"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_client_cert}"
    destination = "$HOME/etcd-client.crt"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_client_key}"
    destination = "$HOME/etcd-client.key"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_server_cert}"
    destination = "$HOME/etcd-server.crt"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_server_key}"
    destination = "$HOME/etcd-server.key"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_peer_cert}"
    destination = "$HOME/etcd-peer.crt"
  }

  provisioner "file" {
    content = "${module.bootkube.etcd_peer_key}"
    destination = "$HOME/etcd-peer.key"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /etc/ssl/etcd/etcd",
      "sudo mv etcd-client* /etc/ssl/etcd/",
      "sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/server-ca.crt",
      "sudo mv etcd-server.crt /etc/ssl/etcd/etcd/server.crt",
      "sudo mv etcd-server.key /etc/ssl/etcd/etcd/server.key",
      "sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
      "sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
      "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
    ]
  }
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
  depends_on = [
    "null_resource.copy-controller-secrets",
    "module.workers",
    "aws_route53_record.apiserver",
  ]

  connection {
    type = "ssh"
    host = "${aws_instance.controllers.0.public_ip}"
    user = "fedora"
    timeout = "15m"
  }

  provisioner "file" {
    source = "${var.asset_dir}"
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 4; done",
      "sudo mv $HOME/assets /var/lib/bootkube",
      "sudo systemctl start bootkube",
    ]
  }
}
aws/fedora-atomic/kubernetes/variables.tf (new file, 106 lines)

@@ -0,0 +1,106 @@
variable "cluster_name" {
  type = "string"
  description = "Unique cluster name (prepended to dns_zone)"
}

# AWS

variable "dns_zone" {
  type = "string"
  description = "AWS DNS Zone (e.g. aws.example.com)"
}

variable "dns_zone_id" {
  type = "string"
  description = "AWS DNS Zone ID (e.g. Z3PAABBCFAKEC0)"
}

# instances

variable "controller_count" {
  type = "string"
  default = "1"
  description = "Number of controllers (i.e. masters)"
}

variable "worker_count" {
  type = "string"
  default = "1"
  description = "Number of workers"
}

variable "controller_type" {
  type = "string"
  default = "t2.small"
  description = "EC2 instance type for controllers"
}

variable "worker_type" {
  type = "string"
  default = "t2.small"
  description = "EC2 instance type for workers"
}

variable "disk_size" {
  type = "string"
  default = "40"
  description = "Size of the EBS volume in GB"
}

variable "disk_type" {
  type = "string"
  default = "gp2"
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

# configuration

variable "ssh_authorized_key" {
  type = "string"
  description = "SSH public key for user 'fedora'"
}

variable "asset_dir" {
  description = "Path to a directory where generated assets should be placed (contains secrets)"
  type = "string"
}

variable "networking" {
  description = "Choice of networking provider (calico or flannel)"
  type = "string"
  default = "calico"
}

variable "network_mtu" {
  description = "CNI interface MTU (applies to calico only). Use 8981 if using instances types with Jumbo frames."
  type = "string"
  default = "1480"
}

variable "host_cidr" {
  description = "CIDR IPv4 range to assign to EC2 nodes"
  type = "string"
  default = "10.0.0.0/16"
}

variable "pod_cidr" {
  description = "CIDR IPv4 range to assign Kubernetes pods"
  type = "string"
  default = "10.2.0.0/16"
}

variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
EOD

  type = "string"
  default = "10.3.0.0/16"
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
  type = "string"
  default = "cluster.local"
}
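A small aside on the `service_cidr` note above (illustrative, not part of the diff): the reserved addresses come from Terraform's `cidrhost`, which is also how `controllers.tf` computes `k8s_dns_service_ip` via `cidrhost(var.service_cidr, 10)`.

```tf
# Illustrative only, assuming the default service_cidr = "10.3.0.0/16":
#   cidrhost("10.3.0.0/16", 1)  => "10.3.0.1"   (reserved for kube-apiserver)
#   cidrhost("10.3.0.0/16", 10) => "10.3.0.10"  (reserved for kube-dns)
```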
aws/fedora-atomic/kubernetes/workers.tf (new file, 18 lines)

@@ -0,0 +1,18 @@
module "workers" {
  source = "workers"
  name = "${var.cluster_name}"

  # AWS
  vpc_id = "${aws_vpc.network.id}"
  subnet_ids = ["${aws_subnet.public.*.id}"]
  security_groups = ["${aws_security_group.worker.id}"]
  count = "${var.worker_count}"
  instance_type = "${var.worker_type}"
  disk_size = "${var.disk_size}"

  # configuration
  kubeconfig = "${module.bootkube.kubeconfig}"
  ssh_authorized_key = "${var.ssh_authorized_key}"
  service_cidr = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"
}
aws/fedora-atomic/kubernetes/workers/ami.tf (new file, 19 lines)

@@ -0,0 +1,19 @@
data "aws_ami" "fedora" {
  most_recent = true
  owners = ["125523088429"]

  filter {
    name = "architecture"
    values = ["x86_64"]
  }

  filter {
    name = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name = "name"
    values = ["Fedora-Atomic-27-20180419.0.x86_64-*-gp2-*"]
  }
}
@@ -0,0 +1,80 @@
#cloud-config
write_files:
  - path: /etc/systemd/system/cloud-metadata.service
    content: |
      [Unit]
      Description=Cloud metadata agent
      [Service]
      Type=oneshot
      Environment=OUTPUT=/run/metadata/cloud
      ExecStart=/usr/bin/mkdir -p /run/metadata
      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
        --url http://169.254.169.254/latest/meta-data/local-ipv4\
        --retry 10)" > $${OUTPUT}'
      [Install]
      WantedBy=multi-user.target
  - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
    content: |
      [Unit]
      Requires=cloud-metadata.service
      After=cloud-metadata.service
      Wants=rpc-statd.service
      [Service]
      ExecStartPre=/bin/mkdir -p /opt/cni/bin
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
      ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
      ExecStartPre=/bin/mkdir -p /var/lib/cni
      ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
      ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
      Restart=always
      RestartSec=10
  - path: /etc/kubernetes/kubelet.conf
    content: |
      ARGS="--allow-privileged \
        --anonymous-auth=false \
        --client-ca-file=/etc/kubernetes/ca.crt \
        --cluster_dns=${k8s_dns_service_ip} \
        --cluster_domain=${cluster_domain_suffix} \
        --cni-conf-dir=/etc/kubernetes/cni/net.d \
        --exit-on-lock-contention \
        --kubeconfig=/etc/kubernetes/kubeconfig \
        --lock-file=/var/run/lock/kubelet.lock \
        --network-plugin=cni \
        --node-labels=node-role.kubernetes.io/node \
        --pod-manifest-path=/etc/kubernetes/manifests \
        --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
  - path: /etc/kubernetes/kubeconfig
    permissions: '0644'
    content: |
      ${kubeconfig}
  - path: /etc/NetworkManager/conf.d/typhoon.conf
    content: |
      [main]
      plugins=keyfile
      [keyfile]
      unmanaged-devices=interface-name:cali*;interface-name:tunl*
  - path: /etc/selinux/config
    owner: root:root
    permissions: '0644'
    content: |
      SELINUX=permissive
      SELINUXTYPE=targeted
bootcmd:
  - [setenforce, Permissive]
  - [systemctl, disable, firewalld, --now]
  # https://github.com/kubernetes/kubernetes/issues/60869
  - [modprobe, ip_vs]
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, restart, NetworkManager]
  - [systemctl, enable, cloud-metadata.service]
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
  - [systemctl, start, --no-block, kubelet.service]
users:
  - default
  - name: fedora
    gecos: Fedora Admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: wheel,adm,systemd-journal,docker
    ssh-authorized-keys:
      - "${ssh_authorized_key}"
aws/fedora-atomic/kubernetes/workers/ingress.tf (new file, 82 lines)

@@ -0,0 +1,82 @@
# Network Load Balancer for Ingress
resource "aws_lb" "ingress" {
  name = "${var.name}-ingress"
  load_balancer_type = "network"
  internal = false

  subnets = ["${var.subnet_ids}"]

  enable_cross_zone_load_balancing = true
}

# Forward HTTP traffic to workers
resource "aws_lb_listener" "ingress-http" {
  load_balancer_arn = "${aws_lb.ingress.arn}"
  protocol = "TCP"
  port = 80

  default_action {
    type = "forward"
    target_group_arn = "${aws_lb_target_group.workers-http.arn}"
  }
}

# Forward HTTPS traffic to workers
resource "aws_lb_listener" "ingress-https" {
  load_balancer_arn = "${aws_lb.ingress.arn}"
  protocol = "TCP"
  port = 443

  default_action {
    type = "forward"
    target_group_arn = "${aws_lb_target_group.workers-https.arn}"
  }
}

# Network Load Balancer target groups of instances

resource "aws_lb_target_group" "workers-http" {
  name = "${var.name}-workers-http"
  vpc_id = "${var.vpc_id}"
  target_type = "instance"

  protocol = "TCP"
  port = 80

  # HTTP health check for ingress
  health_check {
    protocol = "HTTP"
    port = 10254
    path = "/healthz"

    # NLBs required to use same healthy and unhealthy thresholds
    healthy_threshold = 3
    unhealthy_threshold = 3

    # Interval between health checks required to be 10 or 30
    interval = 10
  }
}

resource "aws_lb_target_group" "workers-https" {
  name = "${var.name}-workers-https"
  vpc_id = "${var.vpc_id}"
  target_type = "instance"

  protocol = "TCP"
  port = 443

  # HTTP health check for ingress
  health_check {
    protocol = "HTTP"
    port = 10254
    path = "/healthz"

    # NLBs required to use same healthy and unhealthy thresholds
    healthy_threshold = 3
    unhealthy_threshold = 3

    # Interval between health checks required to be 10 or 30
    interval = 10
  }
}
4
aws/fedora-atomic/kubernetes/workers/outputs.tf
Normal file
@ -0,0 +1,4 @@
output "ingress_dns_name" {
  value       = "${aws_lb.ingress.dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}
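Note: the `ingress_dns_name` output above is meant to be wired into the admin's own DNS. A minimal sketch of pointing a record at the NLB, assuming a Route 53 hosted zone, an `ingress_zone_id` variable, and a worker pool module named `workers` (none of which are defined in this module):

resource "aws_route53_record" "ingress" {
  zone_id = "${var.ingress_zone_id}"   # hypothetical hosted zone ID
  name    = "*.apps.example.com"       # example wildcard for Ingress hosts
  type    = "CNAME"
  ttl     = 300
  records = ["${module.workers.ingress_dns_name}"]
}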
75
aws/fedora-atomic/kubernetes/workers/variables.tf
Normal file
@ -0,0 +1,75 @@
|
||||
variable "name" {
|
||||
type = "string"
|
||||
description = "Unique name for the worker pool"
|
||||
}
|
||||
|
||||
# AWS
|
||||
|
||||
variable "vpc_id" {
|
||||
type = "string"
|
||||
description = "Must be set to `vpc_id` output by cluster"
|
||||
}
|
||||
|
||||
variable "subnet_ids" {
|
||||
type = "list"
|
||||
description = "Must be set to `subnet_ids` output by cluster"
|
||||
}
|
||||
|
||||
variable "security_groups" {
|
||||
type = "list"
|
||||
description = "Must be set to `worker_security_groups` output by cluster"
|
||||
}
|
||||
|
||||
# instances
|
||||
|
||||
variable "count" {
|
||||
type = "string"
|
||||
default = "1"
|
||||
description = "Number of instances"
|
||||
}
|
||||
|
||||
variable "instance_type" {
|
||||
type = "string"
|
||||
default = "t2.small"
|
||||
description = "EC2 instance type"
|
||||
}
|
||||
|
||||
variable "disk_size" {
|
||||
type = "string"
|
||||
default = "40"
|
||||
description = "Size of the EBS volume in GB"
|
||||
}
|
||||
|
||||
variable "disk_type" {
|
||||
type = "string"
|
||||
default = "gp2"
|
||||
description = "Type of the EBS volume (e.g. standard, gp2, io1)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "kubeconfig" {
|
||||
type = "string"
|
||||
description = "Must be set to `kubeconfig` output by cluster"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = "string"
|
||||
description = "SSH public key for user 'fedora'"
|
||||
}
|
||||
|
||||
variable "service_cidr" {
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
|
||||
EOD
|
||||
|
||||
type = "string"
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
type = "string"
|
||||
default = "cluster.local"
|
||||
}
|
69
aws/fedora-atomic/kubernetes/workers/workers.tf
Normal file
@ -0,0 +1,69 @@
|
||||
# Workers AutoScaling Group
|
||||
resource "aws_autoscaling_group" "workers" {
|
||||
name = "${var.name}-worker ${aws_launch_configuration.worker.name}"
|
||||
|
||||
# count
|
||||
desired_capacity = "${var.count}"
|
||||
min_size = "${var.count}"
|
||||
max_size = "${var.count + 2}"
|
||||
default_cooldown = 30
|
||||
health_check_grace_period = 30
|
||||
|
||||
# network
|
||||
vpc_zone_identifier = ["${var.subnet_ids}"]
|
||||
|
||||
# template
|
||||
launch_configuration = "${aws_launch_configuration.worker.name}"
|
||||
|
||||
# target groups to which instances should be added
|
||||
target_group_arns = [
|
||||
"${aws_lb_target_group.workers-http.id}",
|
||||
"${aws_lb_target_group.workers-https.id}",
|
||||
]
|
||||
|
||||
lifecycle {
|
||||
# override the default destroy and replace update behavior
|
||||
create_before_destroy = true
|
||||
}
|
||||
|
||||
tags = [{
|
||||
key = "Name"
|
||||
value = "${var.name}-worker"
|
||||
propagate_at_launch = true
|
||||
}]
|
||||
}
|
||||
|
||||
# Worker template
|
||||
resource "aws_launch_configuration" "worker" {
|
||||
image_id = "${data.aws_ami.fedora.image_id}"
|
||||
instance_type = "${var.instance_type}"
|
||||
|
||||
user_data = "${data.template_file.worker-cloudinit.rendered}"
|
||||
|
||||
# storage
|
||||
root_block_device {
|
||||
volume_type = "${var.disk_type}"
|
||||
volume_size = "${var.disk_size}"
|
||||
}
|
||||
|
||||
# network
|
||||
security_groups = ["${var.security_groups}"]
|
||||
|
||||
lifecycle {
|
||||
// Override the default destroy and replace update behavior
|
||||
create_before_destroy = true
|
||||
ignore_changes = ["image_id"]
|
||||
}
|
||||
}
|
||||
|
||||
# Worker Cloud-Init
|
||||
data "template_file" "worker-cloudinit" {
|
||||
template = "${file("${path.module}/cloudinit/worker.yaml.tmpl")}"
|
||||
|
||||
vars = {
|
||||
kubeconfig = "${indent(6, var.kubeconfig)}"
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
}
|
||||
}
|
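The `cidrhost(var.service_cidr, 10)` interpolation above is what derives the kube-dns address handed to the kubelet. A quick sketch of the same computation in isolation, assuming the default `10.3.0.0/16` service CIDR:

output "k8s_dns_service_ip_example" {
  # cidrhost("10.3.0.0/16", 10) evaluates to "10.3.0.10"
  value = "${cidrhost("10.3.0.0/16", 10)}"
}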
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=db36b92abced3c4b0af279adfd5ed4bf0cf8c39f"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${var.k8s_domain_name}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.3"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.4"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
|
||||
@ -64,6 +64,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -75,6 +77,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -119,7 +122,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/hostname
|
||||
filesystem: root
|
||||
mode: 0644
|
||||
|
@ -39,6 +39,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -48,6 +50,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -80,7 +83,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/hostname
|
||||
filesystem: root
|
||||
mode: 0644
|
||||
|
23
bare-metal/fedora-atomic/kubernetes/LICENSE
Normal file
@ -0,0 +1,23 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2017 Typhoon Authors
|
||||
Copyright (c) 2017 Dalton Hubble
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
22
bare-metal/fedora-atomic/kubernetes/README.md
Normal file
@ -0,0 +1,22 @@
|
||||
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
|
||||
|
||||
Typhoon is a minimal and free Kubernetes distribution.
|
||||
|
||||
* Minimal, stable base Kubernetes distribution
|
||||
* Declarative infrastructure and configuration
|
||||
* Free (freedom and cost) and privacy-respecting
|
||||
* Practical for labs, datacenters, and clouds
|
||||
|
||||
Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
||||
## Docs
|
||||
|
||||
Please see the [official docs](https://typhoon.psdn.io) and the bare-metal [tutorial](https://typhoon.psdn.io/bare-metal/).
|
||||
|
17
bare-metal/fedora-atomic/kubernetes/bootkube.tf
Normal file
@ -0,0 +1,17 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"

  cluster_name          = "${var.cluster_name}"
  api_servers           = ["${var.k8s_domain_name}"]
  etcd_servers          = ["${var.controller_domains}"]
  asset_dir             = "${var.asset_dir}"
  networking            = "${var.networking}"
  network_mtu           = "${var.network_mtu}"
  pod_cidr              = "${var.pod_cidr}"
  service_cidr          = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"

  # Fedora
  trusted_certs_dir = "/etc/pki/tls/certs"
}
@ -0,0 +1,98 @@
|
||||
#cloud-config
|
||||
write_files:
|
||||
- path: /etc/etcd/etcd.conf
|
||||
content: |
|
||||
ETCD_NAME=${etcd_name}
|
||||
ETCD_DATA_DIR=/var/lib/etcd
|
||||
ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379
|
||||
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380
|
||||
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
|
||||
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
|
||||
ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
|
||||
ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
|
||||
ETCD_STRICT_RECONFIG_CHECK=true
|
||||
ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
|
||||
ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
|
||||
ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
|
||||
ETCD_CLIENT_CERT_AUTH=true
|
||||
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
|
||||
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
|
||||
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
|
||||
ETCD_PEER_CLIENT_CERT_AUTH=true
|
||||
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
|
||||
content: |
|
||||
[Unit]
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- path: /etc/kubernetes/kubelet.conf
|
||||
content: |
|
||||
ARGS="--allow-privileged \
|
||||
--anonymous-auth=false \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${k8s_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--exit-on-lock-contention \
|
||||
--hostname-override=${domain_name} \
|
||||
--kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--lock-file=/var/run/lock/kubelet.lock \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node-role.kubernetes.io/master \
|
||||
--node-labels=node-role.kubernetes.io/controller="true" \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
|
||||
- path: /etc/systemd/system/kubelet.path
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Watch for kubeconfig
|
||||
[Path]
|
||||
PathExists=/etc/kubernetes/kubeconfig
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /var/lib/bootkube/.keep
|
||||
- path: /etc/NetworkManager/conf.d/typhoon.conf
|
||||
content: |
|
||||
[main]
|
||||
plugins=keyfile
|
||||
[keyfile]
|
||||
unmanaged-devices=interface-name:cali*;interface-name:tunl*
|
||||
- path: /etc/selinux/config
|
||||
owner: root:root
|
||||
permissions: '0644'
|
||||
content: |
|
||||
SELINUX=permissive
|
||||
SELINUXTYPE=targeted
|
||||
bootcmd:
|
||||
- [setenforce, Permissive]
|
||||
- [systemctl, disable, firewalld, --now]
|
||||
# https://github.com/kubernetes/kubernetes/issues/60869
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- [hostnamectl, set-hostname, ${domain_name}]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
- default
|
||||
- name: fedora
|
||||
gecos: Fedora Admin
|
||||
sudo: ALL=(ALL) NOPASSWD:ALL
|
||||
groups: wheel,adm,systemd-journal,docker
|
||||
ssh-authorized-keys:
|
||||
- "${ssh_authorized_key}"
|
@ -0,0 +1,71 @@
|
||||
#cloud-config
|
||||
write_files:
|
||||
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
|
||||
content: |
|
||||
[Unit]
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- path: /etc/kubernetes/kubelet.conf
|
||||
content: |
|
||||
ARGS="--allow-privileged \
|
||||
--anonymous-auth=false \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${k8s_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--exit-on-lock-contention \
|
||||
--hostname-override=${domain_name} \
|
||||
--kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--lock-file=/var/run/lock/kubelet.lock \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node-role.kubernetes.io/node \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
|
||||
- path: /etc/systemd/system/kubelet.path
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Watch for kubeconfig
|
||||
[Path]
|
||||
PathExists=/etc/kubernetes/kubeconfig
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /etc/NetworkManager/conf.d/typhoon.conf
|
||||
content: |
|
||||
[main]
|
||||
plugins=keyfile
|
||||
[keyfile]
|
||||
unmanaged-devices=interface-name:cali*;interface-name:tunl*
|
||||
- path: /etc/selinux/config
|
||||
owner: root:root
|
||||
permissions: '0644'
|
||||
content: |
|
||||
SELINUX=permissive
|
||||
SELINUXTYPE=targeted
|
||||
bootcmd:
|
||||
- [setenforce, Permissive]
|
||||
- [systemctl, disable, firewalld, --now]
|
||||
# https://github.com/kubernetes/kubernetes/issues/60869
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- [hostnamectl, set-hostname, ${domain_name}]
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
- default
|
||||
- name: fedora
|
||||
gecos: Fedora Admin
|
||||
sudo: ALL=(ALL) NOPASSWD:ALL
|
||||
groups: wheel,adm,systemd-journal,docker
|
||||
ssh-authorized-keys:
|
||||
- "${ssh_authorized_key}"
|
37
bare-metal/fedora-atomic/kubernetes/groups.tf
Normal file
@ -0,0 +1,37 @@
|
||||
// Install Fedora to disk
|
||||
resource "matchbox_group" "fedora-install" {
|
||||
count = "${length(var.controller_names) + length(var.worker_names)}"
|
||||
|
||||
name = "${format("fedora-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
|
||||
profile = "${element(matchbox_profile.cached-fedora-install.*.name, count.index)}"
|
||||
|
||||
selector {
|
||||
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
|
||||
}
|
||||
|
||||
metadata {
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
}
|
||||
}
|
||||
|
||||
resource "matchbox_group" "controller" {
|
||||
count = "${length(var.controller_names)}"
|
||||
name = "${format("%s-%s", var.cluster_name, element(var.controller_names, count.index))}"
|
||||
profile = "${element(matchbox_profile.controllers.*.name, count.index)}"
|
||||
|
||||
selector {
|
||||
mac = "${element(var.controller_macs, count.index)}"
|
||||
os = "installed"
|
||||
}
|
||||
}
|
||||
|
||||
resource "matchbox_group" "worker" {
|
||||
count = "${length(var.worker_names)}"
|
||||
name = "${format("%s-%s", var.cluster_name, element(var.worker_names, count.index))}"
|
||||
profile = "${element(matchbox_profile.workers.*.name, count.index)}"
|
||||
|
||||
selector {
|
||||
mac = "${element(var.worker_macs, count.index)}"
|
||||
os = "installed"
|
||||
}
|
||||
}
|
@ -0,0 +1,36 @@
# required
lang en_US.UTF-8
keyboard us
timezone --utc Etc/UTC

# wipe disks
zerombr
clearpart --all --initlabel

# locked root and temporary user
rootpw --lock --iscrypted locked
user --name=none

# config
autopart --type=lvm --noswap
network --bootproto=dhcp --device=link --activate --onboot=on
bootloader --timeout=1 --append="ds=nocloud\;seedfrom=/var/cloud-init/"
services --enabled=cloud-init,cloud-init-local,cloud-config,cloud-final

ostreesetup --osname="fedora-atomic" --remote="fedora-atomic" --url="${atomic_assets_endpoint}/repo" --ref=fedora/27/x86_64/atomic-host --nogpg

reboot

%post --erroronfail
mkdir /var/cloud-init
curl --retry 10 "${matchbox_http_endpoint}/generic?mac=${mac}&os=installed" -o /var/cloud-init/user-data
echo "instance-id: iid-local01" > /var/cloud-init/meta-data

rm -f /etc/ostree/remotes.d/fedora-atomic.conf
ostree remote add fedora-atomic https://kojipkgs.fedoraproject.org/atomic/27 --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-primary

# lock root user
passwd -l root
# remove temporary user
userdel -r none
%end
3
bare-metal/fedora-atomic/kubernetes/outputs.tf
Normal file
@ -0,0 +1,3 @@
output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}
85
bare-metal/fedora-atomic/kubernetes/profiles.tf
Normal file
@ -0,0 +1,85 @@
|
||||
locals {
|
||||
default_assets_endpoint = "${var.matchbox_http_endpoint}/assets/fedora/27"
|
||||
atomic_assets_endpoint = "${var.atomic_assets_endpoint != "" ? var.atomic_assets_endpoint : local.default_assets_endpoint}"
|
||||
}
|
||||
|
||||
// Cached Fedora Install profile (from matchbox /assets cache)
|
||||
// Note: Admin must have downloaded Fedora kernel, initrd, and repo into
|
||||
// matchbox assets.
|
||||
resource "matchbox_profile" "cached-fedora-install" {
|
||||
count = "${length(var.controller_names) + length(var.worker_names)}"
|
||||
name = "${format("%s-cached-fedora-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
|
||||
|
||||
kernel = "${local.atomic_assets_endpoint}/images/pxeboot/vmlinuz"
|
||||
|
||||
initrd = [
|
||||
"${local.atomic_assets_endpoint}/images/pxeboot/initrd.img",
|
||||
]
|
||||
|
||||
args = [
|
||||
"initrd=initrd.img",
|
||||
"inst.repo=${local.atomic_assets_endpoint}",
|
||||
"inst.ks=${var.matchbox_http_endpoint}/generic?mac=${element(concat(var.controller_macs, var.worker_macs), count.index)}",
|
||||
"inst.text",
|
||||
"${var.kernel_args}",
|
||||
]
|
||||
|
||||
# kickstart
|
||||
generic_config = "${element(data.template_file.install-kickstarts.*.rendered, count.index)}"
|
||||
}
|
||||
|
||||
data "template_file" "install-kickstarts" {
|
||||
count = "${length(var.controller_names) + length(var.worker_names)}"
|
||||
|
||||
template = "${file("${path.module}/kickstart/fedora-atomic.ks.tmpl")}"
|
||||
|
||||
vars {
|
||||
matchbox_http_endpoint = "${var.matchbox_http_endpoint}"
|
||||
atomic_assets_endpoint = "${local.atomic_assets_endpoint}"
|
||||
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
|
||||
}
|
||||
}
|
||||
|
||||
// Kubernetes Controller profiles
|
||||
resource "matchbox_profile" "controllers" {
|
||||
count = "${length(var.controller_names)}"
|
||||
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
|
||||
# cloud-init
|
||||
generic_config = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
|
||||
}
|
||||
|
||||
data "template_file" "controller-configs" {
|
||||
count = "${length(var.controller_names)}"
|
||||
|
||||
template = "${file("${path.module}/cloudinit/controller.yaml.tmpl")}"
|
||||
|
||||
vars {
|
||||
domain_name = "${element(var.controller_domains, count.index)}"
|
||||
etcd_name = "${element(var.controller_names, count.index)}"
|
||||
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))}"
|
||||
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
}
|
||||
}
|
||||
|
||||
// Kubernetes Worker profiles
|
||||
resource "matchbox_profile" "workers" {
|
||||
count = "${length(var.worker_names)}"
|
||||
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
|
||||
# cloud-init
|
||||
generic_config = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
|
||||
}
|
||||
|
||||
data "template_file" "worker-configs" {
|
||||
count = "${length(var.worker_names)}"
|
||||
|
||||
template = "${file("${path.module}/cloudinit/worker.yaml.tmpl")}"
|
||||
|
||||
vars {
|
||||
domain_name = "${element(var.worker_domains, count.index)}"
|
||||
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
}
|
||||
}
|
21
bare-metal/fedora-atomic/kubernetes/require.tf
Normal file
@ -0,0 +1,21 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.10.4"
}

provider "local" {
  version = "~> 1.0"
}

provider "null" {
  version = "~> 1.0"
}

provider "template" {
  version = "~> 1.0"
}

provider "tls" {
  version = "~> 1.0"
}
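require.tf pins the builtin providers, but the bare-metal module also depends on the third-party matchbox provider, which the admin installs and configures outside this module. A sketch of that provider block, with the endpoint and TLS material paths as assumptions:

provider "matchbox" {
  endpoint    = "matchbox.example.com:8081"                  # assumed matchbox gRPC API endpoint
  client_cert = "${file("~/.config/matchbox/client.crt")}"
  client_key  = "${file("~/.config/matchbox/client.key")}"
  ca          = "${file("~/.config/matchbox/ca.crt")}"
}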
120
bare-metal/fedora-atomic/kubernetes/ssh.tf
Normal file
@ -0,0 +1,120 @@
|
||||
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
|
||||
resource "null_resource" "copy-controller-secrets" {
|
||||
count = "${length(var.controller_names)}"
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${element(var.controller_domains, count.index)}"
|
||||
user = "fedora"
|
||||
timeout = "60m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.kubeconfig}"
|
||||
destination = "$HOME/kubeconfig"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_ca_cert}"
|
||||
destination = "$HOME/etcd-client-ca.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_client_cert}"
|
||||
destination = "$HOME/etcd-client.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_client_key}"
|
||||
destination = "$HOME/etcd-client.key"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_server_cert}"
|
||||
destination = "$HOME/etcd-server.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_server_key}"
|
||||
destination = "$HOME/etcd-server.key"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_peer_cert}"
|
||||
destination = "$HOME/etcd-peer.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_peer_key}"
|
||||
destination = "$HOME/etcd-peer.key"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
"sudo mkdir -p /etc/ssl/etcd/etcd",
|
||||
"sudo mv etcd-client* /etc/ssl/etcd/",
|
||||
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/server-ca.crt",
|
||||
"sudo mv etcd-server.crt /etc/ssl/etcd/etcd/server.crt",
|
||||
"sudo mv etcd-server.key /etc/ssl/etcd/etcd/server.key",
|
||||
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
|
||||
"sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
|
||||
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
|
||||
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
# Secure copy kubeconfig to all workers. Activates kubelet.service
|
||||
resource "null_resource" "copy-worker-secrets" {
|
||||
count = "${length(var.worker_names)}"
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${element(var.worker_domains, count.index)}"
|
||||
user = "fedora"
|
||||
timeout = "60m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.kubeconfig}"
|
||||
destination = "$HOME/kubeconfig"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
# Secure copy bootkube assets to ONE controller and start bootkube to perform
|
||||
# one-time self-hosted cluster bootstrapping.
|
||||
resource "null_resource" "bootkube-start" {
|
||||
# Without depends_on, this remote-exec may start before the kubeconfig copy.
|
||||
# Terraform only does one task at a time, so it would try to bootstrap
|
||||
# while no Kubelets are running.
|
||||
depends_on = [
|
||||
"null_resource.copy-controller-secrets",
|
||||
"null_resource.copy-worker-secrets",
|
||||
]
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${element(var.controller_domains, 0)}"
|
||||
user = "fedora"
|
||||
timeout = "15m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
source = "${var.asset_dir}"
|
||||
destination = "$HOME/assets"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
"while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 4; done",
|
||||
"sudo mv $HOME/assets /var/lib/bootkube",
|
||||
"sudo systemctl start bootkube",
|
||||
]
|
||||
}
|
||||
}
|
105
bare-metal/fedora-atomic/kubernetes/variables.tf
Normal file
@ -0,0 +1,105 @@
|
||||
variable "cluster_name" {
|
||||
type = "string"
|
||||
description = "Unique cluster name"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
|
||||
variable "matchbox_http_endpoint" {
|
||||
type = "string"
|
||||
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
|
||||
}
|
||||
|
||||
variable "atomic_assets_endpoint" {
|
||||
type = "string"
|
||||
default = ""
|
||||
description = <<EOD
|
||||
HTTP endpoint serving the Fedora Atomic Host vmlinuz, initrd, os repo, and ostree repo (e.g. `http://example.com/some/path`).
|
||||
|
||||
Ensure the HTTP server directory contains `vmlinuz` and `initrd` files and `os` and `repo` directories. Leave unset to assume ${matchbox_http_endpoint}/assets/fedora/27
|
||||
EOD
|
||||
}
|
||||
|
||||
# machines
|
||||
# Terraform's crude "type system" does not properly support lists of maps so we do this.
|
||||
|
||||
variable "controller_names" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
variable "controller_macs" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
variable "controller_domains" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
variable "worker_names" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
variable "worker_macs" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
variable "worker_domains" {
|
||||
type = "list"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "k8s_domain_name" {
|
||||
description = "Controller DNS name which resolves to a controller instance. Workers and kubeconfig's will communicate with this endpoint (e.g. cluster.example.com)"
|
||||
type = "string"
|
||||
}
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = "string"
|
||||
description = "SSH public key for user 'fedora'"
|
||||
}
|
||||
|
||||
variable "asset_dir" {
|
||||
description = "Path to a directory where generated assets should be placed (contains secrets)"
|
||||
type = "string"
|
||||
}
|
||||
|
||||
variable "networking" {
|
||||
description = "Choice of networking provider (flannel or calico)"
|
||||
type = "string"
|
||||
default = "calico"
|
||||
}
|
||||
|
||||
variable "network_mtu" {
|
||||
description = "CNI interface MTU (applies to calico only)"
|
||||
type = "string"
|
||||
default = "1480"
|
||||
}
|
||||
|
||||
variable "pod_cidr" {
|
||||
description = "CIDR IPv4 range to assign Kubernetes pods"
|
||||
type = "string"
|
||||
default = "10.2.0.0/16"
|
||||
}
|
||||
|
||||
variable "service_cidr" {
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
|
||||
EOD
|
||||
|
||||
type = "string"
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
type = "string"
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "kernel_args" {
|
||||
description = "Additional kernel arguments to provide at PXE boot."
|
||||
type = "list"
|
||||
default = []
|
||||
}
|
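For reference, a hedged sketch of how the variables above come together when the bare-metal Fedora Atomic module is instantiated (the cluster name, MAC addresses, and domains are placeholders; the source path assumes this repository layout):

module "bare-metal-mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes"

  cluster_name           = "mercury"
  matchbox_http_endpoint = "http://matchbox.example.com:8080"
  atomic_assets_endpoint = "http://example.com/assets/fedora/27"
  k8s_domain_name        = "node1.example.com"
  ssh_authorized_key     = "ssh-rsa AAAAB3Nz..."              # placeholder public key
  asset_dir              = "/home/user/.secrets/clusters/mercury"

  controller_names   = ["node1"]
  controller_macs    = ["52:54:00:a1:9c:ae"]
  controller_domains = ["node1.example.com"]
  worker_names       = ["node2", "node3"]
  worker_macs        = ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"]
  worker_domains     = ["node2.example.com", "node3.example.com"]
}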
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=db36b92abced3c4b0af279adfd5ed4bf0cf8c39f"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.3"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.4"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
||||
@ -67,6 +67,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -78,6 +80,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -124,7 +127,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
|
@ -42,6 +42,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -51,6 +53,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -94,7 +97,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
@ -112,7 +115,7 @@ storage:
|
||||
--volume config,kind=host,source=/etc/kubernetes \
|
||||
--mount volume=config,target=/etc/kubernetes \
|
||||
--insecure-options=image \
|
||||
docker://k8s.gcr.io/hyperkube:v1.10.1 \
|
||||
docker://k8s.gcr.io/hyperkube:v1.10.2 \
|
||||
--net=host \
|
||||
--dns=host \
|
||||
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
|
||||
|
23
digital-ocean/fedora-atomic/kubernetes/LICENSE
Normal file
@ -0,0 +1,23 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2017 Typhoon Authors
|
||||
Copyright (c) 2017 Dalton Hubble
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
22
digital-ocean/fedora-atomic/kubernetes/README.md
Normal file
@ -0,0 +1,22 @@
|
||||
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
|
||||
|
||||
Typhoon is a minimal and free Kubernetes distribution.
|
||||
|
||||
* Minimal, stable base Kubernetes distribution
|
||||
* Declarative infrastructure and configuration
|
||||
* Free (freedom and cost) and privacy-respecting
|
||||
* Practical for labs, datacenters, and clouds
|
||||
|
||||
Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
||||
## Docs
|
||||
|
||||
Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/digital-ocean/).
|
||||
|
17
digital-ocean/fedora-atomic/kubernetes/bootkube.tf
Normal file
@ -0,0 +1,17 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"

  cluster_name          = "${var.cluster_name}"
  api_servers           = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
  etcd_servers          = "${digitalocean_record.etcds.*.fqdn}"
  asset_dir             = "${var.asset_dir}"
  networking            = "flannel"
  network_mtu           = 1440
  pod_cidr              = "${var.pod_cidr}"
  service_cidr          = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"

  # Fedora
  trusted_certs_dir = "/etc/pki/tls/certs"
}
@ -0,0 +1,105 @@
|
||||
#cloud-config
|
||||
write_files:
|
||||
- path: /etc/etcd/etcd.conf
|
||||
content: |
|
||||
ETCD_NAME=${etcd_name}
|
||||
ETCD_DATA_DIR=/var/lib/etcd
|
||||
ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
|
||||
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
|
||||
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
|
||||
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
|
||||
ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
|
||||
ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
|
||||
ETCD_STRICT_RECONFIG_CHECK=true
|
||||
ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
|
||||
ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
|
||||
ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
|
||||
ETCD_CLIENT_CERT_AUTH=true
|
||||
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
|
||||
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
|
||||
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
|
||||
ETCD_PEER_CLIENT_CERT_AUTH=true
|
||||
- path: /etc/systemd/system/cloud-metadata.service
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Cloud metadata agent
|
||||
[Service]
|
||||
Type=oneshot
|
||||
Environment=OUTPUT=/run/metadata/cloud
|
||||
ExecStart=/usr/bin/mkdir -p /run/metadata
|
||||
ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
|
||||
--url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
|
||||
--retry 10)" > $${OUTPUT}'
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
|
||||
content: |
|
||||
[Unit]
|
||||
Requires=cloud-metadata.service
|
||||
After=cloud-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- path: /etc/kubernetes/kubelet.conf
|
||||
content: |
|
||||
ARGS="--allow-privileged \
|
||||
--anonymous-auth=false \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${k8s_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--exit-on-lock-contention \
|
||||
--kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--lock-file=/var/run/lock/kubelet.lock \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node-role.kubernetes.io/master \
|
||||
--node-labels=node-role.kubernetes.io/controller="true" \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
|
||||
- path: /etc/systemd/system/kubelet.path
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Watch for kubeconfig
|
||||
[Path]
|
||||
PathExists=/etc/kubernetes/kubeconfig
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /var/lib/bootkube/.keep
|
||||
- path: /etc/selinux/config
|
||||
owner: root:root
|
||||
permissions: '0644'
|
||||
content: |
|
||||
SELINUX=permissive
|
||||
SELINUXTYPE=targeted
|
||||
bootcmd:
|
||||
- [setenforce, Permissive]
|
||||
- [systemctl, disable, firewalld, --now]
|
||||
# https://github.com/kubernetes/kubernetes/issues/60869
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, enable, cloud-metadata.service]
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
- default
|
||||
- name: fedora
|
||||
gecos: Fedora Admin
|
||||
sudo: ALL=(ALL) NOPASSWD:ALL
|
||||
groups: wheel,adm,systemd-journal,docker
|
||||
ssh-authorized-keys:
|
||||
- "${ssh_authorized_key}"
|
@ -0,0 +1,78 @@
|
||||
#cloud-config
|
||||
write_files:
|
||||
- path: /etc/systemd/system/cloud-metadata.service
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Cloud metadata agent
|
||||
[Service]
|
||||
Type=oneshot
|
||||
Environment=OUTPUT=/run/metadata/cloud
|
||||
ExecStart=/usr/bin/mkdir -p /run/metadata
|
||||
ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
|
||||
--url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
|
||||
--retry 10)" > $${OUTPUT}'
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
|
||||
content: |
|
||||
[Unit]
|
||||
Requires=cloud-metadata.service
|
||||
After=cloud-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- path: /etc/kubernetes/kubelet.conf
|
||||
content: |
|
||||
ARGS="--allow-privileged \
|
||||
--anonymous-auth=false \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${k8s_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--exit-on-lock-contention \
|
||||
--kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--lock-file=/var/run/lock/kubelet.lock \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node-role.kubernetes.io/node \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
|
||||
- path: /etc/systemd/system/kubelet.path
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Watch for kubeconfig
|
||||
[Path]
|
||||
PathExists=/etc/kubernetes/kubeconfig
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /etc/selinux/config
|
||||
owner: root:root
|
||||
permissions: '0644'
|
||||
content: |
|
||||
SELINUX=permissive
|
||||
SELINUXTYPE=targeted
|
||||
bootcmd:
|
||||
- [setenforce, Permissive]
|
||||
- [systemctl, disable, firewalld, --now]
|
||||
# https://github.com/kubernetes/kubernetes/issues/60869
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, enable, cloud-metadata.service]
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
|
||||
- [systemctl, enable, kubelet.path]
|
||||
- [systemctl, start, --no-block, kubelet.path]
|
||||
users:
|
||||
- default
|
||||
- name: fedora
|
||||
gecos: Fedora Admin
|
||||
sudo: ALL=(ALL) NOPASSWD:ALL
|
||||
groups: wheel,adm,systemd-journal,docker
|
||||
ssh-authorized-keys:
|
||||
- "${ssh_authorized_key}"
|
89
digital-ocean/fedora-atomic/kubernetes/controllers.tf
Normal file
@ -0,0 +1,89 @@
|
||||
# Controller Instance DNS records
|
||||
resource "digitalocean_record" "controllers" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
# DNS zone where record should be created
|
||||
domain = "${var.dns_zone}"
|
||||
|
||||
# DNS record (will be prepended to domain)
|
||||
name = "${var.cluster_name}"
|
||||
type = "A"
|
||||
ttl = 300
|
||||
|
||||
# IPv4 addresses of controllers
|
||||
value = "${element(digitalocean_droplet.controllers.*.ipv4_address, count.index)}"
|
||||
}
|
||||
|
||||
# Discrete DNS records for each controller's private IPv4 for etcd usage
|
||||
resource "digitalocean_record" "etcds" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
# DNS zone where record should be created
|
||||
domain = "${var.dns_zone}"
|
||||
|
||||
# DNS record (will be prepended to domain)
|
||||
name = "${var.cluster_name}-etcd${count.index}"
|
||||
type = "A"
|
||||
ttl = 300
|
||||
|
||||
# private IPv4 address for etcd
|
||||
value = "${element(digitalocean_droplet.controllers.*.ipv4_address_private, count.index)}"
|
||||
}
|
||||
|
||||
# Controller droplet instances
|
||||
resource "digitalocean_droplet" "controllers" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
name = "${var.cluster_name}-controller-${count.index}"
|
||||
region = "${var.region}"
|
||||
|
||||
image = "${var.image}"
|
||||
size = "${var.controller_type}"
|
||||
|
||||
# network
|
||||
ipv6 = true
|
||||
private_networking = true
|
||||
|
||||
user_data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"
|
||||
ssh_keys = ["${var.ssh_fingerprints}"]
|
||||
|
||||
tags = [
|
||||
"${digitalocean_tag.controllers.id}",
|
||||
]
|
||||
}
|
||||
|
||||
# Tag to label controllers
|
||||
resource "digitalocean_tag" "controllers" {
|
||||
name = "${var.cluster_name}-controller"
|
||||
}
|
||||
|
||||
# Controller Cloud-Init
|
||||
data "template_file" "controller-cloudinit" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
template = "${file("${path.module}/cloudinit/controller.yaml.tmpl")}"
|
||||
|
||||
vars = {
|
||||
# Cannot use cyclic dependencies on controllers or their DNS records
|
||||
etcd_name = "etcd${count.index}"
|
||||
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
|
||||
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
|
||||
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
|
||||
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
}
|
||||
}
|
||||
|
||||
# Horrible hack to generate a Terraform list of a desired length without dependencies.
|
||||
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
|
||||
resource null_resource "repeat" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
triggers {
|
||||
name = "etcd${count.index}"
|
||||
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
}
|
||||
}
|
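To make the hack above concrete, a sketch of the string `etcd_initial_cluster` expands to for a hypothetical 2-controller cluster named `nemo` in the zone `example.com`, following the comment's own pattern:

locals {
  # equivalent hand-expanded value for controller_count = 2 (illustration only)
  etcd_initial_cluster_example = "etcd0=https://nemo-etcd0.example.com:2380,etcd1=https://nemo-etcd1.example.com:2380"
}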
53
digital-ocean/fedora-atomic/kubernetes/network.tf
Normal file
@ -0,0 +1,53 @@
|
||||
resource "digitalocean_firewall" "rules" {
|
||||
name = "${var.cluster_name}"
|
||||
|
||||
tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
|
||||
# allow ssh, http/https ingress, and peer-to-peer traffic
|
||||
inbound_rule = [
|
||||
{
|
||||
protocol = "tcp"
|
||||
port_range = "22"
|
||||
source_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
{
|
||||
protocol = "tcp"
|
||||
port_range = "80"
|
||||
source_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
{
|
||||
protocol = "tcp"
|
||||
port_range = "443"
|
||||
source_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
{
|
||||
protocol = "udp"
|
||||
port_range = "1-65535"
|
||||
source_tags = ["${digitalocean_tag.controllers.name}", "${digitalocean_tag.workers.name}"]
|
||||
},
|
||||
{
|
||||
protocol = "tcp"
|
||||
port_range = "1-65535"
|
||||
source_tags = ["${digitalocean_tag.controllers.name}", "${digitalocean_tag.workers.name}"]
|
||||
},
|
||||
]
|
||||
|
||||
# allow all outbound traffic
|
||||
outbound_rule = [
|
||||
{
|
||||
protocol = "tcp"
|
||||
port_range = "1-65535"
|
||||
destination_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
{
|
||||
protocol = "udp"
|
||||
port_range = "1-65535"
|
||||
destination_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
{
|
||||
protocol = "icmp"
|
||||
port_range = "1-65535"
|
||||
destination_addresses = ["0.0.0.0/0", "::/0"]
|
||||
},
|
||||
]
|
||||
}
|
23
digital-ocean/fedora-atomic/kubernetes/outputs.tf
Normal file
@ -0,0 +1,23 @@
|
||||
output "controllers_dns" {
|
||||
value = "${digitalocean_record.controllers.0.fqdn}"
|
||||
}
|
||||
|
||||
output "workers_dns" {
|
||||
value = "${digitalocean_record.workers.0.fqdn}"
|
||||
}
|
||||
|
||||
output "controllers_ipv4" {
|
||||
value = ["${digitalocean_droplet.controllers.*.ipv4_address}"]
|
||||
}
|
||||
|
||||
output "controllers_ipv6" {
|
||||
value = ["${digitalocean_droplet.controllers.*.ipv6_address}"]
|
||||
}
|
||||
|
||||
output "workers_ipv4" {
|
||||
value = ["${digitalocean_droplet.workers.*.ipv4_address}"]
|
||||
}
|
||||
|
||||
output "workers_ipv6" {
|
||||
value = ["${digitalocean_droplet.workers.*.ipv6_address}"]
|
||||
}
|
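These outputs can be consumed from the root module that instantiates the cluster. A minimal sketch, assuming a module named `digital-ocean-nemo` as used in the docs:

```tf
# Re-export the controllers DNS name from the root Terraform configs
output "nemo_controllers_dns" {
  value = "${module.digital-ocean-nemo.controllers_dns}"
}
```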
25
digital-ocean/fedora-atomic/kubernetes/require.tf
Normal file
@ -0,0 +1,25 @@
|
||||
# Terraform version and plugin versions
|
||||
|
||||
terraform {
|
||||
required_version = ">= 0.10.4"
|
||||
}
|
||||
|
||||
provider "digitalocean" {
|
||||
version = "~> 0.1.2"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
version = "~> 1.0"
|
||||
}
|
||||
|
||||
provider "null" {
|
||||
version = "~> 1.0"
|
||||
}
|
||||
|
||||
provider "template" {
|
||||
version = "~> 1.0"
|
||||
}
|
||||
|
||||
provider "tls" {
|
||||
version = "~> 1.0"
|
||||
}
|
117
digital-ocean/fedora-atomic/kubernetes/ssh.tf
Normal file
@ -0,0 +1,117 @@
|
||||
# Secure copy etcd TLS assets and kubeconfig to controllers. Activates kubelet.service
|
||||
resource "null_resource" "copy-controller-secrets" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${element(concat(digitalocean_droplet.controllers.*.ipv4_address), count.index)}"
|
||||
user = "fedora"
|
||||
timeout = "15m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.kubeconfig}"
|
||||
destination = "$HOME/kubeconfig"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_ca_cert}"
|
||||
destination = "$HOME/etcd-client-ca.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_client_cert}"
|
||||
destination = "$HOME/etcd-client.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_client_key}"
|
||||
destination = "$HOME/etcd-client.key"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_server_cert}"
|
||||
destination = "$HOME/etcd-server.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_server_key}"
|
||||
destination = "$HOME/etcd-server.key"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_peer_cert}"
|
||||
destination = "$HOME/etcd-peer.crt"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.etcd_peer_key}"
|
||||
destination = "$HOME/etcd-peer.key"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
"sudo mkdir -p /etc/ssl/etcd/etcd",
|
||||
"sudo mv etcd-client* /etc/ssl/etcd/",
|
||||
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/server-ca.crt",
|
||||
"sudo mv etcd-server.crt /etc/ssl/etcd/etcd/server.crt",
|
||||
"sudo mv etcd-server.key /etc/ssl/etcd/etcd/server.key",
|
||||
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
|
||||
"sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
|
||||
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
|
||||
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
# Secure copy kubeconfig to all workers. Activates kubelet.service.
|
||||
resource "null_resource" "copy-worker-secrets" {
|
||||
count = "${var.worker_count}"
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${element(concat(digitalocean_droplet.workers.*.ipv4_address), count.index)}"
|
||||
user = "fedora"
|
||||
timeout = "15m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
content = "${module.bootkube.kubeconfig}"
|
||||
destination = "$HOME/kubeconfig"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
"sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
# Secure copy bootkube assets to ONE controller and start bootkube to perform
|
||||
# one-time self-hosted cluster bootstrapping.
|
||||
resource "null_resource" "bootkube-start" {
|
||||
depends_on = [
|
||||
"null_resource.copy-controller-secrets",
|
||||
"null_resource.copy-worker-secrets",
|
||||
]
|
||||
|
||||
connection {
|
||||
type = "ssh"
|
||||
host = "${digitalocean_droplet.controllers.0.ipv4_address}"
|
||||
user = "fedora"
|
||||
timeout = "15m"
|
||||
}
|
||||
|
||||
provisioner "file" {
|
||||
source = "${var.asset_dir}"
|
||||
destination = "$HOME/assets"
|
||||
}
|
||||
|
||||
provisioner "remote-exec" {
|
||||
inline = [
|
||||
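# cloud-init writes /var/lib/cloud/instance/boot-finished when host provisioning completes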
"while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 4; done",
|
||||
"sudo mv $HOME/assets /var/lib/bootkube",
|
||||
"sudo systemctl start bootkube",
|
||||
]
|
||||
}
|
||||
}
|
87
digital-ocean/fedora-atomic/kubernetes/variables.tf
Normal file
@ -0,0 +1,87 @@
|
||||
variable "cluster_name" {
|
||||
type = "string"
|
||||
description = "Unique cluster name (prepended to dns_zone)"
|
||||
}
|
||||
|
||||
# Digital Ocean
|
||||
|
||||
variable "region" {
|
||||
type = "string"
|
||||
description = "Digital Ocean region (e.g. nyc1, sfo2, fra1, tor1)"
|
||||
}
|
||||
|
||||
variable "dns_zone" {
|
||||
type = "string"
|
||||
description = "Digital Ocean domain (i.e. DNS zone) (e.g. do.example.com)"
|
||||
}
|
||||
|
||||
# instances
|
||||
|
||||
variable "controller_count" {
|
||||
type = "string"
|
||||
default = "1"
|
||||
description = "Number of controllers (i.e. masters)"
|
||||
}
|
||||
|
||||
variable "worker_count" {
|
||||
type = "string"
|
||||
default = "1"
|
||||
description = "Number of workers"
|
||||
}
|
||||
|
||||
variable "controller_type" {
|
||||
type = "string"
|
||||
default = "s-2vcpu-2gb"
|
||||
description = "Droplet type for controllers (e.g. s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb)"
|
||||
}
|
||||
|
||||
variable "worker_type" {
|
||||
type = "string"
|
||||
default = "s-1vcpu-1gb"
|
||||
description = "Droplet type for workers (e.g. s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb)"
|
||||
}
|
||||
|
||||
variable "image" {
|
||||
type = "string"
|
||||
default = "fedora-27-x64-atomic"
|
||||
description = "OS image from which to initialize the disk (e.g. fedora-27-x64-atomic)"
|
||||
}
|
||||
|
||||
# configuration
|
||||
|
||||
variable "ssh_authorized_key" {
|
||||
type = "string"
|
||||
description = "SSH public key for user 'fedora'"
|
||||
}
|
||||
|
||||
variable "ssh_fingerprints" {
|
||||
type = "list"
|
||||
description = "SSH public key fingerprints. (e.g. see `ssh-add -l -E md5`)"
|
||||
}
|
||||
|
||||
variable "asset_dir" {
|
||||
description = "Path to a directory where generated assets should be placed (contains secrets)"
|
||||
type = "string"
|
||||
}
|
||||
|
||||
variable "pod_cidr" {
|
||||
description = "CIDR IPv4 range to assign Kubernetes pods"
|
||||
type = "string"
|
||||
default = "10.2.0.0/16"
|
||||
}
|
||||
|
||||
variable "service_cidr" {
|
||||
description = <<EOD
|
||||
CIDR IPv4 range to assign Kubernetes services.
|
||||
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns.
|
||||
EOD
|
||||
|
||||
type = "string"
|
||||
default = "10.3.0.0/16"
|
||||
}
|
||||
|
||||
variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by kube-dns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
type = "string"
|
||||
default = "cluster.local"
|
||||
}
|
50
digital-ocean/fedora-atomic/kubernetes/workers.tf
Normal file
@ -0,0 +1,50 @@
|
||||
# Worker DNS records
|
||||
resource "digitalocean_record" "workers" {
|
||||
count = "${var.worker_count}"
|
||||
|
||||
# DNS zone where record should be created
|
||||
domain = "${var.dns_zone}"
|
||||
|
||||
name = "${var.cluster_name}-workers"
|
||||
type = "A"
|
||||
ttl = 300
|
||||
value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
|
||||
}
|
||||
|
||||
# Worker droplet instances
|
||||
resource "digitalocean_droplet" "workers" {
|
||||
count = "${var.worker_count}"
|
||||
|
||||
name = "${var.cluster_name}-worker-${count.index}"
|
||||
region = "${var.region}"
|
||||
|
||||
image = "${var.image}"
|
||||
size = "${var.worker_type}"
|
||||
|
||||
# network
|
||||
ipv6 = true
|
||||
private_networking = true
|
||||
|
||||
user_data = "${data.template_file.worker-cloudinit.rendered}"
|
||||
ssh_keys = ["${var.ssh_fingerprints}"]
|
||||
|
||||
tags = [
|
||||
"${digitalocean_tag.workers.id}",
|
||||
]
|
||||
}
|
||||
|
||||
# Tag to label workers
|
||||
resource "digitalocean_tag" "workers" {
|
||||
name = "${var.cluster_name}-worker"
|
||||
}
|
||||
|
||||
# Worker Cloud-Init
|
||||
data "template_file" "worker-cloudinit" {
|
||||
template = "${file("${path.module}/cloudinit/worker.yaml.tmpl")}"
|
||||
|
||||
vars = {
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
}
|
||||
}
|
@ -14,7 +14,7 @@ kubectl port-forward grafana-POD-ID 8080 -n monitoring
|
||||
|
||||
Visit [127.0.0.1:8080](http://127.0.0.1:8080) to view the bundled dashboards.
|
||||
|
||||

|
||||

|
||||

|
||||

|
||||

|
||||

|
||||
|
||||
|
@ -110,7 +110,7 @@ Add a DNS record resolving to the WAN for each application.
|
||||
```tf
|
||||
resource "google_dns_record_set" "some-application" {
|
||||
# Managed DNS Zone name
|
||||
managed_zone = "dghubble-io"
|
||||
managed_zone = "zone-name"
|
||||
|
||||
# Name of the DNS record
|
||||
name = "app.example.com."
|
||||
|
@ -41,10 +41,10 @@ kubectl port-forward prometheus-POD-ID 9090 -n monitoring
|
||||
|
||||
Visit [127.0.0.1:9090](http://127.0.0.1:9090) to query [expressions](http://127.0.0.1:9090/graph), view [targets](http://127.0.0.1:9090/targets), or check [alerts](http://127.0.0.1:9090/alerts).
|
||||
|
||||

|
||||

|
||||
<br/>
|
||||

|
||||

|
||||
<br/>
|
||||

|
||||

|
||||
|
||||
Use [Grafana](/addons/grafana.md) to view or build dashboards that use Prometheus as the datasource.
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Customization
|
||||
|
||||
Typhoon provides minimal Kubernetes clusters with defaults we recommend for production. Terraform variables provide easy to use and supported customizations for clusters. Advanced options are available for customizing the architecture or hosts.
|
||||
Typhoon provides Kubernetes clusters with defaults recommended for production. Terraform variables expose supported customization options. Advanced options are available for customizing the architecture or hosts as well.
|
||||
|
||||
## Variables
|
||||
|
||||
@ -115,9 +115,13 @@ Container Linux Configs (and the CoreOS Ignition system) create immutable infras
|
||||
!!! danger
|
||||
Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](https://typhoon.psdn.io/topics/maintenance/#upgrades) clusters.
|
||||
|
||||
### Fedora Atomic
|
||||
|
||||
Cloud-Init and kickstart (bare-metal only) declare how a Fedora Atomic instance should be provisioned. Customizing these declarations in ways beyond the provided Terraform variables is unsupported.
|
||||
|
||||
## Architecture
|
||||
|
||||
To customize clusters in ways that aren't supported by input variables, fork Typhoon and maintain a repository with customizations. Reference the repository by changing the username.
|
||||
Typhoon chooses variables to expose with purpose. If you must customize clusters in ways that aren't supported by input variables, fork Typhoon and maintain a repository with customizations. Reference the repository by changing the username.
|
||||
|
||||
```
|
||||
module "digital-ocean-nemo" {
|
||||
|
@ -5,15 +5,17 @@ Typhoon AWS and Google Cloud allow additional groups of workers to be defined an
|
||||
Internal Terraform Modules:
|
||||
|
||||
* `aws/container-linux/kubernetes/workers`
|
||||
* `aws/fedora-atomic/kubernetes/workers`
|
||||
* `google-cloud/container-linux/kubernetes/workers`
|
||||
* `google-cloud/fedora-atomic/kubernetes/workers`
|
||||
|
||||
## AWS
|
||||
|
||||
Create a cluster following the AWS [tutorial](../aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
|
||||
Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
|
||||
|
||||
```tf
|
||||
module "tempest-worker-pool" {
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes/workers?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
aws = "aws.default"
|
||||
@ -73,11 +75,11 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
|
||||
|
||||
## Google Cloud
|
||||
|
||||
Create a cluster following the Google Cloud [tutorial](../google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
|
||||
Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
|
||||
|
||||
```tf
|
||||
module "yavin-worker-pool" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
@ -111,11 +113,11 @@ Verify a managed instance group of workers joins the cluster within a few minute
|
||||
```
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.1
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.10.1
|
||||
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.10.1
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
|
||||
yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.10.2
|
||||
yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.10.2
|
||||
```
|
||||
|
||||
### Variables
|
||||
|
34
docs/announce.md
Normal file
@ -0,0 +1,34 @@
|
||||
# Announce <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo-small.png">
|
||||
|
||||
## April 26, 2018
|
||||
|
||||
Introducing Typhoon Kubernetes clusters for Fedora Atomic!
|
||||
|
||||
Fedora Atomic is a container-optimized operating system designed for large-scale clustered operation, immutable infrastructure, and atomic operating system upgrades. It's part of [Fedora](https://getfedora.org/en/atomic/download/) and [Project Atomic](http://www.projectatomic.io/docs/introduction/), a Red Hat-sponsored project working on rpm-ostree, buildah, skopeo, CRI-O, and the related CentOS/RHEL Atomic.
|
||||
|
||||
For newcomers, Typhoon is a free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via [Terraform](https://www.terraform.io/intro/index.html), and support for AWS, Google Cloud, DigitalOcean, and bare-metal. Typhoon clusters use a [self-hosted](https://github.com/kubernetes-incubator/bootkube) control plane, support [Calico](https://www.projectcalico.org/blog/) and [flannel](https://coreos.com/flannel/docs/latest/) CNI networking, and enable etcd TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/), and network policy.
|
||||
|
||||
Typhoon for Fedora Atomic reflects many of the same principles that created Typhoon for Container Linux. Clusters are declared using plain Terraform configs that can be versioned. In lieu of Ignition, instances are declaratively provisioned with Cloud-Init and kickstart (bare-metal only). TLS assets are generated. Hosts run only a kubelet service; other components are scheduled (i.e. self-hosted). The upstream hyperkube is used directly[^1]. Clusters are kept minimal by offering optional addons for [Ingress](https://typhoon.psdn.io/addons/ingress/), [Prometheus](https://typhoon.psdn.io/addons/prometheus/), and [Grafana](https://typhoon.psdn.io/addons/grafana/). Typhoon complements and enhances Fedora Atomic as a choice of operating system for Kubernetes.
|
||||
|
||||
Meanwhile, Fedora Atomic adds some promising new low-level technologies:
|
||||
|
||||
* [ostree](https://github.com/ostreedev/ostree) & [rpm-ostree](https://github.com/projectatomic/rpm-ostree) - a hybrid, layered, image and package system that lets you perform atomic updates and rollbacks, layer on packages, "rebase" your system, or manage a remote tree repo. See Dusty Mabe's great [intro](https://dustymabe.com/2017/09/01/atomic-host-101-lab-part-3-rebase-upgrade-rollback/).
|
||||
|
||||
* [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/) - OCI container images that embed systemd and runc metadata for starting low-level host services before container runtimes are ready. Typhoon uses system containers under runc for `etcd`, `kubelet`, and `bootkube` on Fedora Atomic (instead of rkt-fly).
|
||||
|
||||
* [CRI-O](https://github.com/kubernetes-incubator/cri-o) - CRI-O is a kubernetes-incubator implementation of the Kubernetes Container Runtime Interface. Typhoon uses Docker as the container runtime today, but it's a goal to gradually introduce CRI-O as an alternative runtime as it matures.
|
||||
|
||||
Typhoon has long [aspired](https://github.com/poseidon/typhoon/blob/2faacc6a50993038c98789dfa96430a757bdf545/docs/faq.md#operating-systems) to add a dissimilar operating system to complement Container Linux. Operating Typhoon clusters across colocations and multiple clouds was driven by our own real need and has provided healthy perspective and clear direction. Adding Fedora Atomic is exciting for the same reasons. Fedora Atomic diversifies Typhoon's technology underpinnings, uniting the Container Linux and Fedora Atomic ecosystems to provide a consistent Kubernetes experience across operating systems, clouds, and on-premise.
|
||||
|
||||
Get started with the [basics](https://typhoon.psdn.io/architecture/concepts/) or read the OS [comparison](https://typhoon.psdn.io/architecture/operating-systems/). If you're familiar with Terraform, follow the new tutorials for Fedora Atomic on [AWS](https://typhoon.psdn.io/atomic/aws/), [Google Cloud](https://typhoon.psdn.io/atomic/google-cloud/), [DigitalOcean](https://typhoon.psdn.io/atomic/digital-ocean/), and [bare-metal](https://typhoon.psdn.io/atomic/bare-metal/).
|
||||
|
||||
*Typhoon is not affiliated with Red Hat or Project Atomic.*
|
||||
|
||||
!!! warning
|
||||
Heed the warnings. Typhoon for Fedora Atomic is still alpha. Container Linux continues to be the recommended flavor for production clusters. Atomic is not meant to detract from efforts on Container Linux or its derivatives.
|
||||
|
||||
!!! tip
|
||||
For bare-metal, you may continue to use your v0.7+ [Matchbox](https://github.com/coreos/matchbox) service and `terraform-provider-matchbox` plugin to provision both Container Linux and Fedora Atomic clusters. No changes needed.
|
||||
|
||||
[^1]: Using `etcd`, `kubelet`, and `bootkube` as system containers required metadata files be added in [system-containers](https://github.com/poseidon/system-containers)
|
||||
|
89
docs/architecture/operating-systems.md
Normal file
@ -0,0 +1,89 @@
|
||||
# Operating Systems
|
||||
|
||||
Typhoon supports [Container Linux](https://coreos.com/why/) and Fedora [Atomic](https://www.projectatomic.io/) 27. These two operating systems were chosen because they offer:
|
||||
|
||||
* Minimalism and focus on clustered operation
|
||||
* Automated and atomic operating system upgrades
|
||||
* Declarative and immutable configuration
|
||||
* Optimization for containerized applications
|
||||
|
||||
Together, they diversify Typhoon to support a range of container technologies.
|
||||
|
||||
* Container Linux: Gentoo core, rkt-fly, docker
|
||||
* Fedora Atomic: RHEL core, rpm-ostree, system containers (i.e. runc), CRI-O (future)
|
||||
|
||||
## Host Properties
|
||||
|
||||
| Property | Container Linux | Fedora Atomic |
|
||||
|-------------------|-----------------|---------------|
|
||||
| host spec (bare-metal) | Container Linux Config | kickstart, cloud-init |
|
||||
| host spec (cloud) | Container Linux Config | cloud-init |
|
||||
| container runtime | docker | docker (CRI-O planned) |
|
||||
| cgroup driver | cgroupfs | systemd |
|
||||
| logging driver | json-file | journald |
|
||||
| storage driver | overlay2 | overlay2 |
|
||||
|
||||
## Kubernetes Properties
|
||||
|
||||
| Property | Container Linux | Fedora Atomic |
|
||||
|-------------------|-----------------|---------------|
|
||||
| single-master | all platforms | all platforms |
|
||||
| multi-master | all platforms | all platforms |
|
||||
| control plane | self-hosted | self-hosted |
|
||||
| kubelet image | upstream hyperkube | upstream hyperkube via [system container](https://github.com/poseidon/system-containers) |
|
||||
| control plane images | upstream hyperkube | upstream hyperkube |
|
||||
| on-host etcd | rkt-fly | system container (runc) |
|
||||
| on-host kubelet | rkt-fly | system container (runc) |
|
||||
| CNI plugins | calico or flannel | calico or flannel |
|
||||
| coordinated drain & OS update | [CLUO](https://github.com/coreos/container-linux-update-operator) addon | manual (planned) |
|
||||
|
||||
## Directory Locations
|
||||
|
||||
Typhoon conventional directories.
|
||||
|
||||
| Kubelet setting | Host location |
|
||||
|-------------------|--------------------------------|
|
||||
| cni-conf-dir | /etc/kubernetes/cni/net.d |
|
||||
| pod-manifest-path | /etc/kubernetes/manifests |
|
||||
| volume-plugin-dir | /var/lib/kubelet/volumeplugins |
|
||||
|
||||
## Kubelet Mounts
|
||||
|
||||
### Container Linux
|
||||
|
||||
| Mount location | Host location | Options |
|
||||
|-------------------|-------------------|---------|
|
||||
| /etc/kubernetes | /etc/kubernetes | ro |
|
||||
| /etc/ssl/certs | /etc/ssl/certs | ro |
|
||||
| /usr/share/ca-certificates | /usr/share/ca-certificates | ro |
|
||||
| /var/lib/kubelet | /var/lib/kubelet | recursive |
|
||||
| /var/lib/docker | /var/lib/docker | |
|
||||
| /var/lib/cni | /var/lib/cni | |
|
||||
| /var/lib/calico | /var/lib/calico | |
|
||||
| /var/log | /var/log | |
|
||||
| /etc/os-release | /usr/lib/os-release | ro |
|
||||
| /run | /run | |
|
||||
| /lib/modules | /lib/modules | ro |
|
||||
| /etc/resolv.conf | /etc/resolv.conf | |
|
||||
| /opt/cni/bin | /opt/cni/bin | |
|
||||
|
||||
|
||||
### Fedora Atomic
|
||||
|
||||
| Mount location | Host location | Options |
|
||||
|--------------------|------------------|---------|
|
||||
| /rootfs | / | ro |
|
||||
| /etc/kubernetes | /etc/kubernetes | ro |
|
||||
| /etc/ssl/certs | /etc/ssl/certs | ro |
|
||||
| /etc/pki/tls/certs | /usr/share/ca-certificates | ro |
|
||||
| /var/lib | /var/lib | |
|
||||
| /var/lib/kubelet | /var/lib/kubelet | recursive |
|
||||
| /var/log | /var/log | ro |
|
||||
| /etc/os-release | /etc/os-release | ro |
|
||||
| /var/run/secrets | /var/run/secrets | |
|
||||
| /run | /run | |
|
||||
| /lib/modules | /lib/modules | ro |
|
||||
| /etc/hosts | /etc/hosts | ro |
|
||||
| /etc/resolv.conf | /etc/resolv.conf | ro |
|
||||
| /opt/cni/bin | /opt/cni/bin (changing in future) | |
|
||||
|
241
docs/atomic/aws.md
Normal file
@ -0,0 +1,241 @@
|
||||
# AWS
|
||||
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Fedora Atomic.
|
||||
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* AWS Account and IAM credentials
|
||||
* AWS Route53 DNS Zone (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x installed locally
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
```
|
||||
|
||||
## Provider
|
||||
|
||||
Login to your AWS IAM dashboard and find your IAM user. Select "Security Credentials" and create an access key. Save the id and secret to a file that can be referenced in configs.
|
||||
|
||||
```
|
||||
[default]
|
||||
aws_access_key_id = xxx
|
||||
aws_secret_access_key = yyy
|
||||
```
|
||||
|
||||
Configure the AWS provider to use your access key credentials in a `providers.tf` file.
|
||||
|
||||
```tf
|
||||
provider "aws" {
|
||||
version = "~> 1.11.0"
|
||||
alias = "default"
|
||||
|
||||
region = "eu-central-1"
|
||||
shared_credentials_file = "/home/user/.config/aws/credentials"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "null" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "template" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "tls" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
```
|
||||
|
||||
Additional configuration options are described in the `aws` provider [docs](https://www.terraform.io/docs/providers/aws/).
|
||||
|
||||
!!! tip
|
||||
Regions are listed in [docs](http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region) or with `aws ec2 describe-regions`.
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "aws-tempest" {
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
aws = "aws.default"
|
||||
local = "local.default"
|
||||
null = "null.default"
|
||||
template = "template.default"
|
||||
tls = "tls.default"
|
||||
}
|
||||
|
||||
# AWS
|
||||
cluster_name = "tempest"
|
||||
dns_zone = "aws.example.com"
|
||||
dns_zone_id = "Z3PAABBCFAKEC0"
|
||||
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
asset_dir = "/home/user/.secrets/clusters/tempest"
|
||||
|
||||
# optional
|
||||
worker_count = 2
|
||||
worker_type = "t2.medium"
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
|
||||
|
||||
```sh
|
||||
ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
|
||||
```sh
|
||||
terraform init
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 106 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes to create the cluster.
|
||||
|
||||
```sh
|
||||
$ terraform apply
|
||||
...
|
||||
module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapsed)
|
||||
module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed)
|
||||
module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143)
|
||||
|
||||
Apply complete! Resources: 106 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
|
||||
## Verify
|
||||
|
||||
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
|
||||
|
||||
```
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
ip-10-0-12-221 Ready 34m v1.10.2
|
||||
ip-10-0-19-112 Ready 34m v1.10.2
|
||||
ip-10-0-4-22 Ready 34m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
||||
```
|
||||
$ kubectl get pods --all-namespaces
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system calico-node-1m5bf 2/2 Running 0 34m
|
||||
kube-system calico-node-7jmr1 2/2 Running 0 34m
|
||||
kube-system calico-node-bknc8 2/2 Running 0 34m
|
||||
kube-system kube-apiserver-4mjbk 1/1 Running 0 34m
|
||||
kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m
|
||||
kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m
|
||||
kube-system kube-dns-1187388186-wx1lg 3/3 Running 0 34m
|
||||
kube-system kube-proxy-14wxv 1/1 Running 0 34m
|
||||
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
|
||||
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
|
||||
kube-system kube-scheduler-3359497473-5plhf 1/1 Running 0 34m
|
||||
kube-system kube-scheduler-3359497473-r7zg7 1/1 Running 1 34m
|
||||
kube-system pod-checkpointer-4kxtl 1/1 Running 0 34m
|
||||
kube-system pod-checkpointer-4kxtl-ip-10-0-12-221 1/1 Running 0 33m
|
||||
```
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" |
|
||||
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
|
||||
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
|
||||
You'll need a registered domain name or delegated subdomain on AWS Route53. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "aws_route53_zone" "zone-for-clusters" {
|
||||
name = "aws.example.com."
|
||||
}
|
||||
```
|
||||
|
||||
Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`.
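For example, the `aws-tempest` module defined above can reference the zone's id attribute directly rather than hard-coding it. A minimal sketch, assuming the `zone-for-clusters` resource shown here:

```tf
module "aws-tempest" {
  # ... other arguments as shown in the Cluster section ...

  dns_zone    = "aws.example.com"
  dns_zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}"
}
```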
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
|
||||
### Optional
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | EC2 instance type for controllers | "t2.small" | See below |
|
||||
| worker_type | EC2 instance type for workers | "t2.small" | See below |
|
||||
| disk_size | Size of the EBS volume in GB | "40" | "100" |
|
||||
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
|
||||
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
|
||||
|
||||
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
|
||||
|
||||
!!! tip "MTU"
|
||||
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
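A minimal sketch of the override, assuming a jumbo-frame capable instance type and the `aws-tempest` module from this tutorial:

```tf
module "aws-tempest" {
  # ... other arguments as shown in the Cluster section ...

  # jumbo frames (MTU 9001) minus Calico IPIP encapsulation overhead (20 bytes)
  network_mtu = 8981
}
```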
|
||||
|
424
docs/atomic/bare-metal.md
Normal file
@ -0,0 +1,424 @@
|
||||
# Bare-Metal
|
||||
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Fedora Atomic.
|
||||
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
|
||||
|
||||
Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Machines with 2GB RAM, 30GB disk, PXE-enabled NIC, IPMI
|
||||
* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment
|
||||
* Matchbox v0.7+ deployment with API enabled
|
||||
* HTTP server for Fedora install assets and ostree repo
|
||||
* Matchbox credentials `client.crt`, `client.key`, `ca.crt`
|
||||
* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally
|
||||
|
||||
## Machines
|
||||
|
||||
Collect a MAC address from each machine. For machines with multiple PXE-enabled NICs, pick one of the MAC addresses. MAC addresses will be used to match machines to profiles during network boot.
|
||||
|
||||
* 52:54:00:a1:9c:ae (node1)
|
||||
* 52:54:00:b2:2f:86 (node2)
|
||||
* 52:54:00:c3:61:77 (node3)
|
||||
|
||||
Configure each machine to boot from the disk through IPMI or the BIOS menu.
|
||||
|
||||
```
|
||||
ipmitool -H node1 -U USER -P PASS chassis bootdev disk options=persistent
|
||||
```
|
||||
|
||||
During provisioning, you'll explicitly set the boot device to `pxe` for the next boot only. Machines will install (overwrite) the operating system to disk on PXE boot and reboot into the disk install.
|
||||
|
||||
!!! tip ""
|
||||
Ask your hardware vendor to provide MACs and preconfigure IPMI, if possible. With that information, you can rack new servers, `terraform apply` with the new details, and power on machines that network boot and provision themselves into clusters.
|
||||
|
||||
## DNS
|
||||
|
||||
Create a DNS A (or AAAA) record for each node's default interface. Create a record that resolves to each controller node (or re-use the node record if there's one controller).
|
||||
|
||||
* node1.example.com (node1)
|
||||
* node2.example.com (node2)
|
||||
* node3.example.com (node3)
|
||||
* myk8s.example.com (node1)
|
||||
|
||||
Cluster nodes will be configured to refer to the control plane and themselves by these fully qualified names and they'll be used in generated TLS certificates.
|
||||
|
||||
## Matchbox
|
||||
|
||||
Matchbox is an open-source app that matches network-booted bare-metal machines (based on labels like MAC, UUID, etc.) to profiles to automate cluster provisioning.
|
||||
|
||||
Install Matchbox on a Kubernetes cluster or dedicated server.
|
||||
|
||||
* Installing on [Kubernetes](https://coreos.com/matchbox/docs/latest/deployment.html#kubernetes) (recommended)
|
||||
* Installing on a [server](https://coreos.com/matchbox/docs/latest/deployment.html#download)
|
||||
|
||||
!!! tip
|
||||
Deploy Matchbox as a service that can be accessed by all of your bare-metal machines globally. This provides a single endpoint for using Terraform to manage bare-metal clusters at different sites. Typhoon will never include secrets in provisioning user-data, so you may even deploy Matchbox publicly.
|
||||
|
||||
Matchbox provides a TLS client-authenticated API that clients, like Terraform, can use to manage machine matching and profiles. Think of it like a cloud provider API, but for creating bare-metal instances.
|
||||
|
||||
[Generate TLS](https://coreos.com/matchbox/docs/latest/deployment.html#generate-tls-certificates) client credentials. Save the `ca.crt`, `client.crt`, and `client.key` where they can be referenced in Terraform configs.
|
||||
|
||||
```sh
|
||||
mv ca.crt client.crt client.key ~/.config/matchbox/
|
||||
```
|
||||
|
||||
Verify the matchbox read-only HTTP endpoints are accessible (port is configurable).
|
||||
|
||||
```sh
|
||||
$ curl http://matchbox.example.com:8080
|
||||
matchbox
|
||||
```
|
||||
|
||||
Verify your TLS client certificate and key can be used to access the Matchbox API (port is configurable).
|
||||
|
||||
```sh
|
||||
$ openssl s_client -connect matchbox.example.com:8081 \
|
||||
-CAfile ~/.config/matchbox/ca.crt \
|
||||
-cert ~/.config/matchbox/client.crt \
|
||||
-key ~/.config/matchbox/client.key
|
||||
```
|
||||
|
||||
## PXE Environment
|
||||
|
||||
Create an iPXE-enabled network boot environment. Configure PXE clients to chainload [iPXE](http://ipxe.org/cmd) and instruct iPXE clients to chainload from your Matchbox service's `/boot.ipxe` endpoint.
|
||||
|
||||
For networks already supporting iPXE clients, you can add a `default.ipxe` config.
|
||||
|
||||
```ini
|
||||
# /var/www/html/ipxe/default.ipxe
|
||||
chain http://matchbox.foo:8080/boot.ipxe
|
||||
```
|
||||
|
||||
For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware.md#ubiquiti) itself to chainload machines to iPXE and Matchbox.
|
||||
|
||||
For a small lab, you may wish to check out the [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image and [copy-paste examples](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq).
|
||||
|
||||
Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup.html) to set up a compliant iPXE-enabled network. There is quite a bit of flexibility:
|
||||
|
||||
* Continue using existing DHCP, TFTP, or DNS services
|
||||
* Configure specific machines, subnets, or architectures to chainload from Matchbox
|
||||
* Place Matchbox behind a menu entry (timeout and default to Matchbox)
|
||||
|
||||
!!! note ""
|
||||
TFTP chainloading to modern boot firmware, like iPXE, avoids issues with old NICs and allows faster transfer protocols like HTTP to be used.
|
||||
|
||||
## Atomic Assets
|
||||
|
||||
Fedora Atomic network installations require a local mirror of assets. Configure an HTTP server to serve the Atomic install tree and ostree repo.
|
||||
|
||||
```
|
||||
sudo dnf install -y httpd
|
||||
sudo firewall-cmd --permanent --add-port=80/tcp
|
||||
sudo systemctl enable httpd --now
|
||||
```
|
||||
|
||||
Download the [Fedora Atomic](https://getfedora.org/en/atomic/download/) ISO which contains install files and add them to the serve directory.
|
||||
|
||||
```
|
||||
sudo mount -o loop,ro Fedora-Atomic-ostree-*.iso /mnt
|
||||
sudo mkdir -p /var/www/html/fedora/27
|
||||
sudo cp -av /mnt/* /var/www/html/fedora/27/
|
||||
```
|
||||
|
||||
Check out the [fedora-atomic](https://pagure.io/fedora-atomic) ostree manifest repo.
|
||||
|
||||
```
|
||||
git clone https://pagure.io/fedora-atomic.git && cd fedora-atomic
|
||||
git checkout f27
|
||||
```
|
||||
|
||||
Compose an ostree repo from RPM sources.
|
||||
|
||||
```
|
||||
mkdir repo
|
||||
ostree init --repo=repo --mode=archive
|
||||
sudo dnf install rpm-ostree
|
||||
sudo rpm-ostree compose tree --repo=repo fedora-atomic-host.json
|
||||
```
|
||||
|
||||
Serve the ostree `repo` as well.
|
||||
|
||||
```
|
||||
sudo cp -r repo /var/www/html/fedora/27/
|
||||
tree /var/www/html/fedora/27/
|
||||
├── images
│   └── pxeboot
│       ├── initrd.img
│       └── vmlinuz
├── isolinux/
├── repo/
|
||||
```
|
||||
|
||||
Verify `vmlinuz`, `initrd.img`, and `repo` are accessible from the HTTP server (i.e. `atomic_assets_endpoint`).
|
||||
|
||||
```
|
||||
curl http://example.com/fedora/27/
|
||||
```
|
||||
|
||||
!!! note
|
||||
It is possible to use the Matchbox `/assets` [cache](https://github.com/coreos/matchbox/blob/master/Documentation/matchbox.md#assets) as an HTTP server.
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system.
|
||||
|
||||
```sh
|
||||
wget https://github.com/coreos/terraform-provider-matchbox/releases/download/v0.2.2/terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
|
||||
tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
|
||||
sudo mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox /usr/local/bin/
|
||||
```
|
||||
|
||||
Add the plugin to your `~/.terraformrc`.
|
||||
|
||||
```
|
||||
providers {
|
||||
matchbox = "/usr/local/bin/terraform-provider-matchbox"
|
||||
}
|
||||
```
|
||||
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
```
|
||||
|
||||
## Provider
|
||||
|
||||
Configure the Matchbox provider to use your Matchbox API endpoint and client certificate in a `providers.tf` file.
|
||||
|
||||
```tf
|
||||
provider "matchbox" {
|
||||
endpoint = "matchbox.example.com:8081"
|
||||
client_cert = "${file("~/.config/matchbox/client.crt")}"
|
||||
client_key = "${file("~/.config/matchbox/client.key")}"
|
||||
ca = "${file("~/.config/matchbox/ca.crt")}"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "null" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "template" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "tls" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
```
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
local = "local.default"
|
||||
null = "null.default"
|
||||
template = "template.default"
|
||||
tls = "tls.default"
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
cluster_name = "mercury"
|
||||
matchbox_http_endpoint = "http://matchbox.example.com"
|
||||
atomic_assets_endpoint = "http://example.com/fedora/27"
|
||||
|
||||
# configuration
|
||||
k8s_domain_name = "node1.example.com"
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
asset_dir = "/home/user/.secrets/clusters/mercury"
|
||||
|
||||
# machines
|
||||
controller_names = ["node1"]
|
||||
controller_macs = ["52:54:00:a1:9c:ae"]
|
||||
controller_domains = ["node1.example.com"]
|
||||
worker_names = [
|
||||
"node2",
|
||||
"node3",
|
||||
]
|
||||
worker_macs = [
|
||||
"52:54:00:b2:2f:86",
|
||||
"52:54:00:c3:61:77",
|
||||
]
|
||||
worker_domains = [
|
||||
"node2.example.com",
|
||||
"node3.example.com",
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
|
||||
|
||||
```sh
|
||||
ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
|
||||
```sh
|
||||
terraform init
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 58 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
|
||||
|
||||
```sh
|
||||
$ terraform apply
|
||||
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'...
|
||||
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'...
|
||||
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed)
|
||||
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed)
|
||||
...
|
||||
```
|
||||
|
||||
Apply will then loop until it can successfully copy credentials to each machine and start the one-time Kubernetes bootstrap service. Proceed to the next step while this loops.
|
||||
|
||||
### Power
|
||||
|
||||
Power on each machine with the boot device set to `pxe` for the next boot only.
|
||||
|
||||
```sh
|
||||
ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
|
||||
ipmitool -H node1.example.com -U USER -P PASS power on
|
||||
```
|
||||
|
||||
Machines will network boot, install Fedora Atomic to disk via kickstart, reboot into the disk install, and provision themselves as controllers or workers via cloud-init.
|
||||
|
||||
!!! tip ""
|
||||
If this is the first test of your PXE-enabled network boot environment, watch the SOL console of a machine to spot any misconfigurations.
|
||||
|
||||
### Bootstrap
|
||||
|
||||
Wait for the `bootkube-start` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
|
||||
|
||||
```
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m10s elapsed)
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m20s elapsed)
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s elapsed)
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed)
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024)
|
||||
|
||||
Apply complete! Resources: 58 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
To watch the bootstrap process in detail, SSH to the first controller and journal the logs.
|
||||
|
||||
```
|
||||
$ ssh fedora@node1.example.com
|
||||
$ journalctl -f -u bootkube
|
||||
bootkube[5]: Pod Status: pod-checkpointer Running
|
||||
bootkube[5]: Pod Status: kube-apiserver Running
|
||||
bootkube[5]: Pod Status: kube-scheduler Running
|
||||
bootkube[5]: Pod Status: kube-controller-manager Running
|
||||
bootkube[5]: All self-hosted control plane components successfully started
|
||||
bootkube[5]: Tearing down temporary bootstrap control plane...
|
||||
```
|
||||
|
||||
## Verify
|
||||
|
||||
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
|
||||
|
||||
```
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
node1.example.com Ready 11m v1.10.2
|
||||
node2.example.com Ready 11m v1.10.2
|
||||
node3.example.com Ready 11m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
||||
```
|
||||
$ kubectl get pods --all-namespaces
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system calico-node-6qp7f 2/2 Running 1 11m
|
||||
kube-system calico-node-gnjrm 2/2 Running 0 11m
|
||||
kube-system calico-node-llbgt 2/2 Running 0 11m
|
||||
kube-system kube-apiserver-7336w 1/1 Running 0 11m
|
||||
kube-system kube-controller-manager-3271970485-b9chx 1/1 Running 0 11m
|
||||
kube-system kube-controller-manager-3271970485-v30js 1/1 Running 1 11m
|
||||
kube-system kube-dns-1187388186-mx9rt 3/3 Running 0 11m
|
||||
kube-system kube-proxy-50sd4 1/1 Running 0 11m
|
||||
kube-system kube-proxy-bczhp 1/1 Running 0 11m
|
||||
kube-system kube-proxy-mp2fw 1/1 Running 0 11m
|
||||
kube-system kube-scheduler-3895335239-fd3l7 1/1 Running 1 11m
|
||||
kube-system kube-scheduler-3895335239-hfjv0 1/1 Running 0 11m
|
||||
kube-system pod-checkpointer-wf65d 1/1 Running 0 11m
|
||||
kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0 11m
|
||||
```
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name | mercury |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
|
||||
| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/27" |
|
||||
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3Nz..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
|
||||
| controller_names | Ordered list of controller short names | ["node1"] |
|
||||
| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] |
|
||||
| controller_domains | Ordered list of controller FQDNs | ["node1.example.com"] |
|
||||
| worker_names | Ordered list of worker short names | ["node2", "node3"] |
|
||||
| worker_macs | Ordered list of worker identifying MAC addresses | ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] |
|
||||
| worker_domains | Ordered list of worker FQDNs | ["node2.example.com", "node3.example.com"] |
|
||||
|
||||
### Optional
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
|
||||
| kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
|
||||
|
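For example, extra kernel arguments can be appended to the cluster definition. A minimal sketch, assuming the `bare-metal-mercury` module from this tutorial and the example value from the table above:

```tf
module "bare-metal-mercury" {
  # ... other arguments as shown in the Cluster section ...

  # additional kernel arguments provided at PXE boot
  kernel_args = [
    "kvm-intel.nested=1",
  ]
}
```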
249
docs/atomic/digital-ocean.md
Normal file
@ -0,0 +1,249 @@
|
||||
# Digital Ocean
|
||||
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on DigitalOcean with Fedora Atomic.
|
||||
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Digital Ocean Account and Token
|
||||
* Digital Ocean Domain (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x installed locally
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
```
|
||||
|
||||
## Provider
|
||||
|
||||
Login to [DigitalOcean](https://cloud.digitalocean.com) or create an [account](https://cloud.digitalocean.com/registrations/new), if you don't have one.
|
||||
|
||||
Generate a Personal Access Token with read/write scope from the [API tab](https://cloud.digitalocean.com/settings/api/tokens). Write the token to a file that can be referenced in configs.
|
||||
|
||||
```sh
|
||||
mkdir -p ~/.config/digital-ocean
|
||||
echo "TOKEN" > ~/.config/digital-ocean/token
|
||||
```
|
||||
|
||||
Configure the DigitalOcean provider to use your token in a `providers.tf` file.
|
||||
|
||||
```tf
|
||||
provider "digitalocean" {
|
||||
version = "0.1.3"
|
||||
token = "${chomp(file("~/.config/digital-ocean/token"))}"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "null" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "template" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "tls" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
```
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "digital-ocean-nemo" {
|
||||
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
digitalocean = "digitalocean.default"
|
||||
local = "local.default"
|
||||
null = "null.default"
|
||||
template = "template.default"
|
||||
tls = "tls.default"
|
||||
}
|
||||
|
||||
# Digital Ocean
|
||||
cluster_name = "nemo"
|
||||
region = "nyc3"
|
||||
dns_zone = "digital-ocean.example.com"
|
||||
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
|
||||
asset_dir = "/home/user/.secrets/clusters/nemo"
|
||||
|
||||
# optional
|
||||
worker_count = 2
|
||||
worker_type = "s-1vcpu-1gb"
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
|
||||
|
||||
```sh
|
||||
ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
|
||||
```sh
|
||||
terraform init
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 54 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes to create the cluster.
|
||||
|
||||
```sh
|
||||
$ terraform apply
|
||||
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (30s elapsed)
|
||||
module.digital-ocean-nemo.null_resource.bootkube-start: Provisioning with 'remote-exec'...
|
||||
...
|
||||
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (6m20s elapsed)
|
||||
module.digital-ocean-nemo.null_resource.bootkube-start: Creation complete (ID: 7599298447329218468)
|
||||
|
||||
Apply complete! Resources: 54 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
In 3-6 minutes, the Kubernetes cluster will be ready.
|
||||
|
||||
## Verify
|
||||
|
||||
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
|
||||
|
||||
```
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
10.132.110.130 Ready 10m v1.10.2
|
||||
10.132.115.81 Ready 10m v1.10.2
|
||||
10.132.124.107 Ready 10m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
||||
```
|
||||
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system kube-apiserver-n10qr 1/1 Running 0 11m
|
||||
kube-system kube-controller-manager-3271970485-37gtw 1/1 Running 1 11m
|
||||
kube-system kube-controller-manager-3271970485-p52t5 1/1 Running 0 11m
|
||||
kube-system kube-dns-1187388186-ld1j7 3/3 Running 0 11m
|
||||
kube-system kube-flannel-1cq1v 2/2 Running 0 11m
|
||||
kube-system kube-flannel-hq9t0 2/2 Running 1 11m
|
||||
kube-system kube-flannel-v0g9w 2/2 Running 0 11m
|
||||
kube-system kube-proxy-6kxjf 1/1 Running 0 11m
|
||||
kube-system kube-proxy-fh3td 1/1 Running 0 11m
|
||||
kube-system kube-proxy-k35rc 1/1 Running 0 11m
|
||||
kube-system kube-scheduler-3895335239-2bc4c 1/1 Running 0 11m
|
||||
kube-system kube-scheduler-3895335239-b7q47 1/1 Running 1 11m
|
||||
kube-system pod-checkpointer-pr1lq 1/1 Running 0 11m
|
||||
kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0 10m
|
||||
```
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | nemo |
|
||||
| region | Digital Ocean region | nyc1, sfo2, fra1, tor1 |
|
||||
| dns_zone | Digital Ocean domain (i.e. DNS zone) | do.example.com |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | /home/user/.secrets/nemo |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
|
||||
|
||||
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "digitalocean_domain" "zone-for-clusters" {
|
||||
name = "do.example.com"
|
||||
# Digital Ocean oddly requires an IP here. You may have to delete the A record it makes. :(
|
||||
ip_address = "8.8.8.8"
|
||||
}
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
|
||||
|
||||
#### SSH Fingerprints
|
||||
|
||||
DigitalOcean droplets are created with your SSH public key "fingerprint" (i.e. MD5 hash) to allow access. If your SSH public key is at `~/.ssh/id_rsa`, find the fingerprint with,
|
||||
|
||||
```bash
|
||||
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'
|
||||
MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7
|
||||
```
|
||||
|
||||
If you use `ssh-agent` (e.g. Yubikey for SSH), find the fingerprint with,
|
||||
|
||||
```
|
||||
ssh-add -l -E md5
|
||||
2048 MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 cardno:000603633110 (RSA)
|
||||
```
|
||||
|
||||
Digital Ocean requires the SSH public key be uploaded to your account, so you may also find the fingerprint under Settings -> Security. Finally, if you don't have an SSH key, [create one now](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/).
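If you use [doctl](https://github.com/digitalocean/doctl) and it is authenticated against your account, the uploaded keys and their fingerprints can also be listed from the CLI:

```sh
# lists SSH keys uploaded to the DigitalOcean account, including fingerprints
doctl compute ssh-key list
```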
|
||||
|
||||
### Optional
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | Droplet type for controllers | s-2vcpu-2gb | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
|
||||
| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
|
||||
|
||||
Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
|
||||
|
||||
!!! warning
|
||||
Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.
|
282
docs/atomic/google-cloud.md
Normal file
@ -0,0 +1,282 @@
|
||||
# Google Cloud
|
||||
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is very alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Some addons don't work yet. Expect rough edges and changes.
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on Google Compute Engine with Fedora Atomic.
|
||||
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Google Cloud Account and Service Account
|
||||
* Google Cloud DNS Zone (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x installed locally
|
||||
* `gcloud` and `gsutil` for uploading a disk image to Google Cloud (temporary)
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
```
|
||||
|
||||
## Provider
|
||||
|
||||
Login to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [signup](https://cloud.google.com/free/) if you don't have an account.
|
||||
|
||||
Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" role and save the JSON private key to a file that can be referenced in configs.
|
||||
|
||||
```sh
|
||||
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
|
||||
```
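Alternately, a rough sketch of creating the service account and key with the `gcloud` CLI (the account name, project id, and role are placeholders; adjust them for your project):

```sh
# create a service account, grant it Compute Admin, and download a JSON key
gcloud iam service-accounts create terraform --display-name "Terraform"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:terraform@PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/compute.admin"
gcloud iam service-accounts keys create ~/.config/google-cloud/terraform.json \
  --iam-account "terraform@PROJECT_ID.iam.gserviceaccount.com"
```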
|
||||
|
||||
Configure the Google Cloud provider to use your service account key, project-id, and region in a `providers.tf` file.
|
||||
|
||||
```tf
|
||||
provider "google" {
|
||||
version = "1.6"
|
||||
alias = "default"
|
||||
|
||||
credentials = "${file("~/.config/google-cloud/terraform.json")}"
|
||||
project = "project-id"
|
||||
region = "us-central1"
|
||||
}
|
||||
|
||||
provider "local" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "null" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "template" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
|
||||
provider "tls" {
|
||||
version = "~> 1.0"
|
||||
alias = "default"
|
||||
}
|
||||
```
|
||||
|
||||
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
|
||||
|
||||
!!! tip
|
||||
Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may contain multiple clusters across different regions.
|
||||
|
||||
## Atomic Image
|
||||
|
||||
Project Atomic does not publish official Fedora Atomic images to Google Cloud. However, Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.
|
||||
|
||||
Download the Fedora Atomic 27 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.
|
||||
|
||||
```
|
||||
xz -d Fedora-Atomic-27-20180326.1.x86_64.raw.xz
|
||||
```
|
||||
|
||||
Rename the image `disk.raw`. Gzip compress and tar the image.
|
||||
|
||||
```
|
||||
mv Fedora-Atomic-27-20180326.1.x86_64.raw disk.raw
|
||||
tar cvzf fedora-atomic-27.tar.gz disk.raw
|
||||
```
|
||||
|
||||
List available storage buckets and upload the tar.gz.
|
||||
|
||||
```
|
||||
gsutil list
|
||||
gsutil cp fedora-atomic-27.tar.gz gs://BUCKET_NAME
|
||||
```
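If `gsutil list` shows no suitable bucket, a sketch for creating one first (the bucket name and location are placeholders):

```sh
gsutil mb -l us-central1 gs://BUCKET_NAME
```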
|
||||
|
||||
Create a Google Compute Engine image from the bucket file.
|
||||
|
||||
```
|
||||
gcloud compute images list
|
||||
gcloud compute images create fedora-atomic-27 --source-uri gs://BUCKET/fedora-atomic-27.tar.gz
|
||||
```
|
||||
|
||||
Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-27).
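As a quick sanity check that the import succeeded, describe the image (the name matches the create command above):

```sh
gcloud compute images describe fedora-atomic-27 --format="value(name,status)"
```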
|
||||
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "google-cloud-yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
local = "local.default"
|
||||
null = "null.default"
|
||||
template = "template.default"
|
||||
tls = "tls.default"
|
||||
}
|
||||
|
||||
# Google Cloud
|
||||
cluster_name = "yavin"
|
||||
region = "us-central1"
|
||||
dns_zone = "example.com"
|
||||
dns_zone_name = "example-zone"
|
||||
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
asset_dir = "/home/user/.secrets/clusters/yavin"
|
||||
os_image = "MY-PROJECT_ID/fedora-atomic-27"
|
||||
|
||||
# optional
|
||||
worker_count = 2
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
|
||||
|
||||
```sh
|
||||
ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
|
||||
```sh
|
||||
terraform init
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 73 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes to create the cluster.
|
||||
|
||||
```sh
|
||||
$ terraform apply
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (10s elapsed)
|
||||
...
|
||||
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m30s elapsed)
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m40s elapsed)
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Creation complete (ID: 5768638456220583358)
|
||||
|
||||
Apply complete! Resources: 73 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
|
||||
## Verify
|
||||
|
||||
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
|
||||
|
||||
```
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
||||
```
|
||||
$ kubectl get pods --all-namespaces
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system calico-node-1cs8z 2/2 Running 0 6m
|
||||
kube-system calico-node-d1l5b 2/2 Running 0 6m
|
||||
kube-system calico-node-sp9ps 2/2 Running 0 6m
|
||||
kube-system kube-apiserver-zppls 1/1 Running 0 6m
|
||||
kube-system kube-controller-manager-3271970485-gh9kt 1/1 Running 0 6m
|
||||
kube-system kube-controller-manager-3271970485-h90v8 1/1 Running 1 6m
|
||||
kube-system kube-dns-1187388186-zj5dl 3/3 Running 0 6m
|
||||
kube-system kube-proxy-117v6 1/1 Running 0 6m
|
||||
kube-system kube-proxy-9886n 1/1 Running 0 6m
|
||||
kube-system kube-proxy-njn47 1/1 Running 0 6m
|
||||
kube-system kube-scheduler-3895335239-5x87r 1/1 Running 0 6m
|
||||
kube-system kube-scheduler-3895335239-bzrrt 1/1 Running 1 6m
|
||||
kube-system pod-checkpointer-l6lrt 1/1 Running 0 6m
|
||||
```
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | "yavin" |
|
||||
| region | Google Cloud region | "us-central1" |
|
||||
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
|
||||
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
|
||||
| os_image | Custom uploaded Fedora Atomic 27 image | "PROJECT-ID/fedora-atomic-27" |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
|
||||
|
||||
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
|
||||
|
||||
You'll need a registered domain name or delegated subdomain on Google Cloud DNS. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
dns_name = "google-cloud.example.com."
|
||||
name = "example-zone"
|
||||
description = "Production DNS zone"
|
||||
}
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
|
||||
|
||||
### Optional
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
|
||||
| worker_type | Machine type for workers | "n1-standard-1" | See below |
|
||||
| disk_size | Size of the disk in GB | 40 | 100 |
|
||||
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
|
||||
|
||||
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
|
||||
|
||||
#### Preemption
|
||||
|
||||
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (pods are rescheduled, but nodes cannot be drained) and preemption provides a nice reward for running fault-tolerant cluster systems.
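For example, a minimal sketch of the option added to the module definition shown earlier (only the new argument is shown):

```tf
module "google-cloud-yavin" {
  # ... arguments from the Cluster section above

  # optional: run workers on cheaper preemptible instances
  worker_preemptible = "true"
}
```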
|
||||
|
@ -1,10 +1,10 @@
|
||||
# AWS
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on AWS with Container Linux.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a VPC, gateway, subnets, auto-scaling groups of controllers and workers, network load balancers for controllers and workers, and security groups will be created.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -37,7 +37,7 @@ providers {
|
||||
}
|
||||
```
|
||||
|
||||
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
@ -96,7 +96,7 @@ Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "aws-tempest" {
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
aws = "aws.default"
|
||||
@ -132,9 +132,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -143,15 +140,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -181,9 +169,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
ip-10-0-12-221 Ready 34m v1.10.1
|
||||
ip-10-0-19-112 Ready 34m v1.10.1
|
||||
ip-10-0-4-22 Ready 34m v1.10.1
|
||||
ip-10-0-12-221 Ready 34m v1.10.2
|
||||
ip-10-0-19-112 Ready 34m v1.10.2
|
||||
ip-10-0-4-22 Ready 34m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
@ -209,13 +197,15 @@ kube-system pod-checkpointer-4kxtl-ip-10-0-12-221 1/1 Running 0
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -223,14 +213,14 @@ Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" |
|
||||
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
|
||||
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
|
||||
| ssh_authorized_key | SSH public key for ~/.ssh_authorized_keys | "ssh-rsa AAAAB3NZ..." |
|
||||
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in a AWS Route53 DNS zone. You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain on AWS Route53. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "aws_route53_zone" "zone-for-clusters" {
|
||||
@ -241,7 +231,7 @@ resource "aws_route53_zone" "zone-for-clusters" {
|
||||
Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`.
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
|
||||
### Optional
|
||||
|
@ -1,10 +1,10 @@
|
||||
# Bare-Metal
|
||||
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal.
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.2 cluster on bare-metal with Container Linux.
|
||||
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers.
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
|
||||
|
||||
Controllers are provisioned as etcd peers and run `etcd-member` (etcd3) and `kubelet`. Workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -129,7 +129,7 @@ providers {
|
||||
}
|
||||
```
|
||||
|
||||
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
@ -174,7 +174,7 @@ Define a Kubernetes cluster using the module `bare-metal/container-linux/kuberne
|
||||
|
||||
```tf
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
local = "local.default"
|
||||
@ -224,9 +224,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -235,15 +232,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -295,9 +283,9 @@ Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
|
||||
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
|
||||
|
||||
```
|
||||
# before v1.10.1
|
||||
# before v1.10.2
|
||||
$ ssh debug@node1.example.com
|
||||
# after v1.10.1
|
||||
# after v1.10.2
|
||||
$ ssh -p 2222 core@node1.example.com
|
||||
```
|
||||
|
||||
@ -322,9 +310,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
node1.example.com Ready 11m v1.10.1
|
||||
node2.example.com Ready 11m v1.10.1
|
||||
node3.example.com Ready 11m v1.10.1
|
||||
node1.example.com Ready 11m v1.10.2
|
||||
node2.example.com Ready 11m v1.10.2
|
||||
node3.example.com Ready 11m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
@ -350,19 +338,21 @@ kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name | mercury |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:8080 |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:port |
|
||||
| container_linux_channel | Container Linux channel | stable, beta, alpha |
|
||||
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
|
||||
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
|
@ -1,10 +1,10 @@
|
||||
# Digital Ocean
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Digital Ocean.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on DigitalOcean with Container Linux.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, firewall rules, DNS records, tags, and droplets for Kubernetes controllers and workers will be created.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -37,7 +37,7 @@ providers {
|
||||
}
|
||||
```
|
||||
|
||||
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
@ -90,7 +90,7 @@ Define a Kubernetes cluster using the module `digital-ocean/container-linux/kube
|
||||
|
||||
```tf
|
||||
module "digital-ocean-nemo" {
|
||||
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
digitalocean = "digitalocean.default"
|
||||
@ -126,9 +126,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -137,15 +134,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -176,9 +164,9 @@ In 3-6 minutes, the Kubernetes cluster will be ready.
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
10.132.110.130 Ready 10m v1.10.1
|
||||
10.132.115.81 Ready 10m v1.10.1
|
||||
10.132.124.107 Ready 10m v1.10.1
|
||||
10.132.110.130 Ready 10m v1.10.2
|
||||
10.132.115.81 Ready 10m v1.10.2
|
||||
10.132.124.107 Ready 10m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
@ -203,13 +191,15 @@ kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -222,9 +212,9 @@ Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
|
||||
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "digitalocean_domain" "zone-for-clusters" {
|
||||
@ -235,7 +225,7 @@ resource "digitalocean_domain" "zone-for-clusters" {
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
|
||||
|
||||
#### SSH Fingerprints
|
||||
|
@ -1,10 +1,10 @@
|
||||
# Google Cloud
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine (not GKE).
|
||||
In this tutorial, we'll create a Kubernetes v1.10.2 cluster on Google Compute Engine with Container Linux.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a network, firewall rules, managed instance groups of Kubernetes controllers and workers, network load balancers for controllers and workers, and health checks will be created.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -37,7 +37,7 @@ providers {
|
||||
}
|
||||
```
|
||||
|
||||
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
Read [concepts](../architecture/concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
|
||||
```
|
||||
cd infra/clusters
|
||||
@ -47,7 +47,7 @@ cd infra/clusters
|
||||
|
||||
Login to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [signup](https://cloud.google.com/free/) if you don't have an account.
|
||||
|
||||
Select "Credentials", and create service account key credentials. Choose the "Compute Engine default service account" and save the JSON private key to a file that can be referenced in configs.
|
||||
Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" role and save the JSON private key to a file that can be referenced in configs.
|
||||
|
||||
```sh
|
||||
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
|
||||
@ -89,7 +89,7 @@ provider "tls" {
|
||||
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
|
||||
|
||||
!!! tip
|
||||
A project may contain multiple clusters if you wish. Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
|
||||
Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may contain multiple clusters across different regions.
|
||||
|
||||
## Cluster
|
||||
|
||||
@ -97,7 +97,7 @@ Define a Kubernetes cluster using the module `google-cloud/container-linux/kuber
|
||||
|
||||
```tf
|
||||
module "google-cloud-yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
@ -133,9 +133,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -144,15 +141,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -184,9 +172,9 @@ In 4-8 minutes, the Kubernetes cluster will be ready.
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.1
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
@ -211,13 +199,15 @@ kube-system pod-checkpointer-l6lrt 1/1 Running 0
|
||||
|
||||
## Going Further
|
||||
|
||||
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -233,9 +223,9 @@ Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a TCP proxy load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in a Google Cloud DNS zone. You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain on Google Cloud DNS. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
@ -246,13 +236,13 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
|
||||
|
||||
### Optional
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
|
||||
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
|
||||
| worker_type | Machine type for workers | "n1-standard-1" | See below |
|
||||
@ -268,9 +258,6 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
|
||||
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
|
||||
|
||||
!!! warning
|
||||
Set controller_count to 1. A bug in Google Cloud network load balancer health checking prevents multiple controllers from bootstrapping. There are workarounds, but they all involve tradeoffs we're uncomfortable recommending. See [#54](https://github.com/poseidon/typhoon/issues/54).
|
||||
|
||||
#### Preemption
|
||||
|
||||
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (pods are rescheduled, but nodes cannot be drained) and preemption provides a nice reward for running fault-tolerant cluster systems.
|
@ -11,12 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
* Provided via Terraform Modules
|
||||
|
||||
## Modules
|
||||
|
||||
@ -24,19 +23,19 @@ Typhoon provides a Terraform Module for each supported operating system and plat
|
||||
|
||||
| Platform | Operating System | Terraform Module | Status |
|
||||
|---------------|------------------|------------------|--------|
|
||||
| AWS | Container Linux | [aws/container-linux/kubernetes](aws.md) | stable |
|
||||
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal.md) | stable |
|
||||
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean.md) | beta |
|
||||
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud.md) | beta |
|
||||
| AWS | Container Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
|
||||
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](atomic/aws.md) | alpha |
|
||||
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
|
||||
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | alpha |
|
||||
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
|
||||
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
|
||||
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | beta |
|
||||
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | very alpha |
|
||||
|
||||
## Usage
|
||||
## Documentation
|
||||
|
||||
* [Concepts](concepts.md)
|
||||
* Tutorials
|
||||
* [AWS](aws.md)
|
||||
* [Bare-Metal](bare-metal.md)
|
||||
* [Digital Ocean](digital-ocean.md)
|
||||
* [Google-Cloud](google-cloud.md)
|
||||
* Architecture [concepts](architecture/concepts.md) and [operating-systems](architecture/operating-systems.md)
|
||||
* Tutorials for [AWS](cl/aws.md), [Bare-Metal](cl/bare-metal.md), [Digital Ocean](cl/digital-ocean.md), and [Google-Cloud](cl/google-cloud.md)
|
||||
|
||||
## Example
|
||||
|
||||
@ -44,7 +43,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo
|
||||
|
||||
```tf
|
||||
module "google-cloud-yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.2"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
@ -85,9 +84,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
|
||||
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
|
||||
$ kubectl get nodes
|
||||
NAME STATUS AGE VERSION
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.1
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.1
|
||||
yavin-controller-0.c.example-com.internal Ready 6m v1.10.2
|
||||
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.2
|
||||
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.2
|
||||
```
|
||||
|
||||
List the pods.
|
||||
|
@ -8,9 +8,17 @@ Formats rise and evolve. Typhoon may choose to adapt the format over time (with
|
||||
|
||||
## Operating Systems
|
||||
|
||||
Only Container Linux is supported currently. This is just due to operational familiarity, rather than intentional exclusion. It's important that another operating system be added, to reduce the risk of making narrowly-scoped design decisions.
|
||||
Typhoon supports Container Linux and Fedora Atomic 27. These two operating systems were chosen because they offer:
|
||||
|
||||
Fedora Cloud will likely be next.
|
||||
* Minimalism and focus on clustered operation
|
||||
* Automated and atomic operating system upgrades
|
||||
* Declarative and immutable configuration
|
||||
* Optimization for containerized applications
|
||||
|
||||
Together, they diversify Typhoon to support a range of container technologies.
|
||||
|
||||
* Container Linux: Gentoo core, rkt-fly, docker
|
||||
* Fedora Atomic: RHEL core, rpm-ostree, system containers (i.e. runc), CRI-O
|
||||
|
||||
## Get Help
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Hardware
|
||||
|
||||
While bare-metal Kubernetes clusters have no special hardware requirements (beyond the [min reqs](/bare-metal.md#requirements)), Typhoon does ensure certain router and server hardware integrates well with Kubernetes.
|
||||
Typhoon ensures certain networking hardware integrates well with bare-metal Kubernetes.
|
||||
|
||||
## Ubiquiti
|
||||
|
||||
|
@ -18,7 +18,7 @@ module "google-cloud-yavin" {
|
||||
}
|
||||
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.2"
|
||||
...
|
||||
}
|
||||
```
|
||||
@ -78,7 +78,7 @@ $ terraform apply
|
||||
Apply complete! Resources: 0 added, 0 changed, 55 destroyed.
|
||||
```
|
||||
|
||||
Re-provision a new cluster by following the bare-metal [tutorial](../bare-metal.md#cluster).
|
||||
Re-provision a new cluster by following the bare-metal [tutorial](../cl/bare-metal.md#cluster).
|
||||
|
||||
### Cloud
|
||||
|
||||
|
@ -2,14 +2,14 @@
|
||||
|
||||
## Provision Time
|
||||
|
||||
Provisioning times vary based on the platform. Sampling the time to create (apply) and destroy clusters with 1 controller and 2 workers shows (roughly) what to expect.
|
||||
Provisioning times vary based on the operating system and platform. Sampling the time to create (apply) and destroy clusters with 1 controller and 2 workers shows (roughly) what to expect.
|
||||
|
||||
| Platform | Apply | Destroy |
|
||||
|---------------|-------|---------|
|
||||
| AWS | 6 min | 5 min |
|
||||
| Bare-Metal | 10-14 min | NA |
|
||||
| Bare-Metal | 10-15 min | NA |
|
||||
| Digital Ocean | 3 min 30 sec | 20 sec |
|
||||
| Google Cloud | 4 min | 4 min 30 sec |
|
||||
| Google Cloud | 9 min | 4 min 30 sec |
|
||||
|
||||
Notes:
|
||||
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.10.1 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.10.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@ -1,10 +1,5 @@
|
||||
# Static IPv4 address for the Network Load Balancer
|
||||
resource "google_compute_address" "controllers-ip" {
|
||||
name = "${var.cluster_name}-controllers-ip"
|
||||
}
|
||||
|
||||
# DNS record for the Network Load Balancer
|
||||
resource "google_dns_record_set" "controllers" {
|
||||
# TCP Proxy load balancer DNS record
|
||||
resource "google_dns_record_set" "apiserver" {
|
||||
# DNS Zone name where record should be created
|
||||
managed_zone = "${var.dns_zone_name}"
|
||||
|
||||
@ -13,44 +8,88 @@ resource "google_dns_record_set" "controllers" {
|
||||
type = "A"
|
||||
ttl = 300
|
||||
|
||||
# IPv4 address of controllers' network load balancer
|
||||
rrdatas = ["${google_compute_address.controllers-ip.address}"]
|
||||
# IPv4 address of apiserver TCP Proxy load balancer
|
||||
rrdatas = ["${google_compute_global_address.apiserver-ipv4.address}"]
|
||||
}
|
||||
|
||||
# Network Load Balancer for controllers
|
||||
resource "google_compute_forwarding_rule" "controller-https-rule" {
|
||||
name = "${var.cluster_name}-controller-https-rule"
|
||||
ip_address = "${google_compute_address.controllers-ip.address}"
|
||||
# Static IPv4 address for the TCP Proxy Load Balancer
|
||||
resource "google_compute_global_address" "apiserver-ipv4" {
|
||||
name = "${var.cluster_name}-apiserver-ip"
|
||||
ip_version = "IPV4"
|
||||
}
|
||||
|
||||
# Forward IPv4 TCP traffic to the TCP proxy load balancer
|
||||
resource "google_compute_global_forwarding_rule" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
ip_address = "${google_compute_global_address.apiserver-ipv4.address}"
|
||||
ip_protocol = "TCP"
|
||||
port_range = "443"
|
||||
target = "${google_compute_target_pool.controllers.self_link}"
|
||||
target = "${google_compute_target_tcp_proxy.apiserver.self_link}"
|
||||
}
|
||||
|
||||
# Target pool of instances for the controller(s) Network Load Balancer
|
||||
resource "google_compute_target_pool" "controllers" {
|
||||
name = "${var.cluster_name}-controller-pool"
|
||||
# Global TCP Proxy Load Balancer for apiservers
|
||||
resource "google_compute_target_tcp_proxy" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
description = "Distribute TCP load across ${var.cluster_name} controllers"
|
||||
backend_service = "${google_compute_backend_service.apiserver.self_link}"
|
||||
}
|
||||
|
||||
instances = [
|
||||
"${google_compute_instance.controllers.*.self_link}",
|
||||
]
|
||||
|
||||
health_checks = [
|
||||
"${google_compute_http_health_check.kubelet.name}",
|
||||
]
|
||||
# Global backend service backed by unmanaged instance groups
|
||||
resource "google_compute_backend_service" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
description = "${var.cluster_name} apiserver service"
|
||||
|
||||
protocol = "TCP"
|
||||
port_name = "apiserver"
|
||||
session_affinity = "NONE"
|
||||
timeout_sec = "60"
|
||||
|
||||
# controller(s) spread across zonal instance groups
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.0.self_link}"
|
||||
}
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.1.self_link}"
|
||||
}
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.2.self_link}"
|
||||
}
|
||||
|
||||
health_checks = ["${google_compute_health_check.apiserver.self_link}"]
|
||||
}
|
||||
|
||||
# Kubelet HTTP Health Check
|
||||
resource "google_compute_http_health_check" "kubelet" {
|
||||
name = "${var.cluster_name}-kubelet-health"
|
||||
description = "Health check Kubelet health host port"
|
||||
# Instance group of heterogeneous (unmanaged) controller instances
|
||||
resource "google_compute_instance_group" "controllers" {
|
||||
count = "${length(local.zones)}"
|
||||
|
||||
timeout_sec = 5
|
||||
name = "${format("%s-controllers-%s", var.cluster_name, element(local.zones, count.index))}"
|
||||
zone = "${element(local.zones, count.index)}"
|
||||
|
||||
named_port {
|
||||
name = "apiserver"
|
||||
port = "443"
|
||||
}
|
||||
|
||||
# add instances in the zone into the instance group
|
||||
instances = [
|
||||
"${matchkeys(google_compute_instance.controllers.*.self_link,
|
||||
google_compute_instance.controllers.*.zone,
|
||||
list(element(local.zones, count.index)))}"
|
||||
]
|
||||
}
|
||||
|
||||
# TCP health check for apiserver
|
||||
resource "google_compute_health_check" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver-tcp-health"
|
||||
description = "TCP health check for kube-apiserver"
|
||||
|
||||
timeout_sec = 5
|
||||
check_interval_sec = 5
|
||||
|
||||
healthy_threshold = 2
|
||||
unhealthy_threshold = 4
|
||||
healthy_threshold = 1
|
||||
unhealthy_threshold = 3
|
||||
|
||||
port = 10255
|
||||
request_path = "/healthz"
|
||||
tcp_health_check {
|
||||
port = "443"
|
||||
}
|
||||
}
|
||||
|
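The backend service above always references three zonal instance groups, while each controller is placed with `element(local.zones, count.index)`. A sketch of the resulting mapping, assuming the illustrative region `us-central1` (zones a, b, c, f), `controller_count = 3`, and alphabetical zone ordering from `data.google_compute_zones`:

```
# local.zones = slice(["us-central1-a", "us-central1-b", "us-central1-c", "us-central1-f"], 0, 3)
#             = ["us-central1-a", "us-central1-b", "us-central1-c"]
#
# controller-0 -> us-central1-a -> instance group ${cluster_name}-controllers-us-central1-a
# controller-1 -> us-central1-b -> instance group ${cluster_name}-controllers-us-central1-b
# controller-2 -> us-central1-c -> instance group ${cluster_name}-controllers-us-central1-c
#
# With fewer controllers, the remaining zonal instance groups are created empty,
# which still satisfies the TCP proxy's requirement for a fixed set of backends.
```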
@ -1,6 +1,6 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=db36b92abced3c4b0af279adfd5ed4bf0cf8c39f"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
|
@ -7,7 +7,7 @@ systemd:
|
||||
- name: 40-etcd-cluster.conf
|
||||
contents: |
|
||||
[Service]
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.3"
|
||||
Environment="ETCD_IMAGE_TAG=v3.3.4"
|
||||
Environment="ETCD_NAME=${etcd_name}"
|
||||
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
|
||||
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
|
||||
@ -56,6 +56,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -68,6 +70,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -119,7 +122,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
|
@ -19,12 +19,19 @@ data "google_compute_zones" "all" {
|
||||
region = "${var.region}"
|
||||
}
|
||||
|
||||
locals {
|
||||
# TCP proxy load balancers require a fixed number of zonal backends. Spread
|
||||
# controllers over up to 3 zones, since all GCP regions have at least 3.
|
||||
zones = "${slice(data.google_compute_zones.all.names, 0, 3)}"
|
||||
controllers_ipv4_public = ["${google_compute_instance.controllers.*.network_interface.0.access_config.0.assigned_nat_ip}"]
|
||||
}
|
||||
|
||||
# Controller instances
|
||||
resource "google_compute_instance" "controllers" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
name = "${var.cluster_name}-controller-${count.index}"
|
||||
zone = "${element(data.google_compute_zones.all.names, count.index)}"
|
||||
zone = "${element(local.zones, count.index)}"
|
||||
machine_type = "${var.controller_type}"
|
||||
|
||||
metadata {
|
||||
@ -51,10 +58,6 @@ resource "google_compute_instance" "controllers" {
|
||||
tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
locals {
|
||||
controllers_ipv4_public = ["${google_compute_instance.controllers.*.network_interface.0.access_config.0.assigned_nat_ip}"]
|
||||
}
|
||||
|
||||
# Controller Container Linux Config
|
||||
data "template_file" "controller_config" {
|
||||
count = "${var.controller_count}"
|
||||
|
@ -66,7 +66,7 @@ resource "null_resource" "bootkube-start" {
|
||||
depends_on = [
|
||||
"module.bootkube",
|
||||
"module.workers",
|
||||
"google_dns_record_set.controllers",
|
||||
"google_dns_record_set.apiserver",
|
||||
"null_resource.copy-controller-secrets",
|
||||
]
|
||||
|
||||
|
@ -31,6 +31,8 @@ systemd:
|
||||
--mount volume=resolv,target=/etc/resolv.conf \
|
||||
--volume var-lib-cni,kind=host,source=/var/lib/cni \
|
||||
--mount volume=var-lib-cni,target=/var/lib/cni \
|
||||
--volume var-lib-calico,kind=host,source=/var/lib/calico \
|
||||
--mount volume=var-lib-calico,target=/var/lib/calico \
|
||||
--volume opt-cni-bin,kind=host,source=/opt/cni/bin \
|
||||
--mount volume=opt-cni-bin,target=/opt/cni/bin \
|
||||
--volume var-log,kind=host,source=/var/log \
|
||||
@ -41,6 +43,7 @@ systemd:
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/calico
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
|
||||
@ -89,7 +92,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
|
||||
KUBELET_IMAGE_TAG=v1.10.1
|
||||
KUBELET_IMAGE_TAG=v1.10.2
|
||||
- path: /etc/sysctl.d/max-user-watches.conf
|
||||
filesystem: root
|
||||
contents:
|
||||
@ -107,7 +110,7 @@ storage:
|
||||
--volume config,kind=host,source=/etc/kubernetes \
|
||||
--mount volume=config,target=/etc/kubernetes \
|
||||
--insecure-options=image \
|
||||
docker://k8s.gcr.io/hyperkube:v1.10.1 \
|
||||
docker://k8s.gcr.io/hyperkube:v1.10.2 \
|
||||
--net=host \
|
||||
--dns=host \
|
||||
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)
|
||||
|
23
google-cloud/fedora-atomic/kubernetes/LICENSE
Normal file
@ -0,0 +1,23 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2017 Typhoon Authors
|
||||
Copyright (c) 2017 Dalton Hubble
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
22
google-cloud/fedora-atomic/kubernetes/README.md
Normal file
@ -0,0 +1,22 @@
|
||||
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">
|
||||
|
||||
Typhoon is a minimal and free Kubernetes distribution.
|
||||
|
||||
* Minimal, stable base Kubernetes distribution
|
||||
* Declarative infrastructure and configuration
|
||||
* Free (freedom and cost) and privacy-respecting
|
||||
* Practical for labs, datacenters, and clouds
|
||||
|
||||
Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.10.0 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
|
||||
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
|
||||
|
||||
## Docs
|
||||
|
||||
Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/google-cloud/).
|
||||
|
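A rough usage sketch for the new module follows. The variable names mirror those referenced in the Terraform files below (`region`, `dns_zone`, `dns_zone_name`, `os_image`, `ssh_authorized_key`, `asset_dir`, ...); the concrete values, the `<release>` ref, and the omission of other variables the module may require are placeholder assumptions:

```
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=<release>"

  # Google Cloud
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"
  os_image      = "fedora-atomic-27"

  cluster_name       = "yavin"
  controller_count   = 1
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/yavin"
}
```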
95
google-cloud/fedora-atomic/kubernetes/apiserver.tf
Normal file
@ -0,0 +1,95 @@
|
||||
# TCP Proxy load balancer DNS record
|
||||
resource "google_dns_record_set" "apiserver" {
|
||||
# DNS Zone name where record should be created
|
||||
managed_zone = "${var.dns_zone_name}"
|
||||
|
||||
# DNS record
|
||||
name = "${format("%s.%s.", var.cluster_name, var.dns_zone)}"
|
||||
type = "A"
|
||||
ttl = 300
|
||||
|
||||
# IPv4 address of apiserver TCP Proxy load balancer
|
||||
rrdatas = ["${google_compute_global_address.apiserver-ipv4.address}"]
|
||||
}
|
||||
|
||||
# Static IPv4 address for the TCP Proxy Load Balancer
|
||||
resource "google_compute_global_address" "apiserver-ipv4" {
|
||||
name = "${var.cluster_name}-apiserver-ip"
|
||||
ip_version = "IPV4"
|
||||
}
|
||||
|
||||
# Forward IPv4 TCP traffic to the TCP proxy load balancer
|
||||
resource "google_compute_global_forwarding_rule" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
ip_address = "${google_compute_global_address.apiserver-ipv4.address}"
|
||||
ip_protocol = "TCP"
|
||||
port_range = "443"
|
||||
target = "${google_compute_target_tcp_proxy.apiserver.self_link}"
|
||||
}
|
||||
|
||||
# TCP Proxy Load Balancer for apiservers
|
||||
resource "google_compute_target_tcp_proxy" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
description = "Distribute TCP load across ${var.cluster_name} controllers"
|
||||
backend_service = "${google_compute_backend_service.apiserver.self_link}"
|
||||
}
|
||||
|
||||
# Backend service backed by unmanaged instance groups
|
||||
resource "google_compute_backend_service" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver"
|
||||
description = "${var.cluster_name} apiserver service"
|
||||
|
||||
protocol = "TCP"
|
||||
port_name = "apiserver"
|
||||
session_affinity = "NONE"
|
||||
timeout_sec = "60"
|
||||
|
||||
# controller(s) spread across zonal instance groups
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.0.self_link}"
|
||||
}
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.1.self_link}"
|
||||
}
|
||||
backend {
|
||||
group = "${google_compute_instance_group.controllers.2.self_link}"
|
||||
}
|
||||
|
||||
health_checks = ["${google_compute_health_check.apiserver.self_link}"]
|
||||
}
|
||||
|
||||
# Instance group of heterogeneous (unmanaged) controller instances
|
||||
resource "google_compute_instance_group" "controllers" {
|
||||
count = "${length(local.zones)}"
|
||||
|
||||
name = "${format("%s-controllers-%s", var.cluster_name, element(local.zones, count.index))}"
|
||||
zone = "${element(local.zones, count.index)}"
|
||||
|
||||
named_port {
|
||||
name = "apiserver"
|
||||
port = "443"
|
||||
}
|
||||
|
||||
# add instances in the zone into the instance group
|
||||
instances = [
|
||||
"${matchkeys(google_compute_instance.controllers.*.self_link,
|
||||
google_compute_instance.controllers.*.zone,
|
||||
list(element(local.zones, count.index)))}"
|
||||
]
|
||||
}
|
||||
|
||||
# TCP health check for apiserver
|
||||
resource "google_compute_health_check" "apiserver" {
|
||||
name = "${var.cluster_name}-apiserver-tcp-health"
|
||||
description = "TCP health check for kube-apiserver"
|
||||
|
||||
timeout_sec = 5
|
||||
check_interval_sec = 5
|
||||
|
||||
healthy_threshold = 1
|
||||
unhealthy_threshold = 3
|
||||
|
||||
tcp_health_check {
|
||||
port = "443"
|
||||
}
|
||||
}
|
17
google-cloud/fedora-atomic/kubernetes/bootkube.tf
Normal file
@ -0,0 +1,17 @@
|
||||
# Self-hosted Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootkube" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=911f4115088b7511f29221f64bf8e93bfa9ee567"
|
||||
|
||||
cluster_name = "${var.cluster_name}"
|
||||
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
|
||||
etcd_servers = ["${null_resource.repeat.*.triggers.domain}"]
|
||||
asset_dir = "${var.asset_dir}"
|
||||
networking = "${var.networking}"
|
||||
network_mtu = 1440
|
||||
pod_cidr = "${var.pod_cidr}"
|
||||
service_cidr = "${var.service_cidr}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
|
||||
# Fedora
|
||||
trusted_certs_dir = "/etc/pki/tls/certs"
|
||||
}
|
@ -0,0 +1,108 @@
|
||||
#cloud-config
|
||||
write_files:
|
||||
- path: /etc/etcd/etcd.conf
|
||||
content: |
|
||||
ETCD_NAME=${etcd_name}
|
||||
ETCD_DATA_DIR=/var/lib/etcd
|
||||
ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
|
||||
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
|
||||
ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
|
||||
ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
|
||||
ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
|
||||
ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
|
||||
ETCD_STRICT_RECONFIG_CHECK=true
|
||||
ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
|
||||
ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
|
||||
ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
|
||||
ETCD_CLIENT_CERT_AUTH=true
|
||||
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
|
||||
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
|
||||
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
|
||||
ETCD_PEER_CLIENT_CERT_AUTH=true
|
||||
- path: /etc/systemd/system/cloud-metadata.service
|
||||
content: |
|
||||
[Unit]
|
||||
Description=Cloud metadata agent
|
||||
[Service]
|
||||
Type=oneshot
|
||||
Environment=OUTPUT=/run/metadata/cloud
|
||||
ExecStart=/usr/bin/mkdir -p /run/metadata
|
||||
ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
|
||||
-H "Metadata-Flavor: Google"\
|
||||
--url http://metadata.google.internal/computeMetadata/v1/instance/hostname\
|
||||
--retry 10)" > $${OUTPUT}'
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
|
||||
content: |
|
||||
[Unit]
|
||||
Requires=cloud-metadata.service
|
||||
After=cloud-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/cni
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
|
||||
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
- path: /etc/kubernetes/kubelet.conf
|
||||
content: |
|
||||
ARGS="--allow-privileged \
|
||||
--anonymous-auth=false \
|
||||
--client-ca-file=/etc/kubernetes/ca.crt \
|
||||
--cluster_dns=${k8s_dns_service_ip} \
|
||||
--cluster_domain=${cluster_domain_suffix} \
|
||||
--cni-conf-dir=/etc/kubernetes/cni/net.d \
|
||||
--exit-on-lock-contention \
|
||||
--kubeconfig=/etc/kubernetes/kubeconfig \
|
||||
--lock-file=/var/run/lock/kubelet.lock \
|
||||
--network-plugin=cni \
|
||||
--node-labels=node-role.kubernetes.io/master \
|
||||
--node-labels=node-role.kubernetes.io/controller="true" \
|
||||
--pod-manifest-path=/etc/kubernetes/manifests \
|
||||
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
|
||||
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
|
||||
- path: /etc/kubernetes/kubeconfig
|
||||
permissions: '0644'
|
||||
content: |
|
||||
${kubeconfig}
|
||||
- path: /var/lib/bootkube/.keep
|
||||
- path: /etc/NetworkManager/conf.d/typhoon.conf
|
||||
content: |
|
||||
[main]
|
||||
plugins=keyfile
|
||||
[keyfile]
|
||||
unmanaged-devices=interface-name:cali*;interface-name:tunl*
|
||||
- path: /etc/selinux/config
|
||||
owner: root:root
|
||||
permissions: '0644'
|
||||
content: |
|
||||
SELINUX=permissive
|
||||
SELINUXTYPE=targeted
|
||||
bootcmd:
|
||||
- [setenforce, Permissive]
|
||||
- [systemctl, disable, firewalld, --now]
|
||||
# https://github.com/kubernetes/kubernetes/issues/60869
|
||||
- [modprobe, ip_vs]
|
||||
runcmd:
|
||||
- [systemctl, daemon-reload]
|
||||
- [systemctl, restart, NetworkManager]
|
||||
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.4"
|
||||
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.10.2"
|
||||
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.12.0"
|
||||
- [systemctl, start, --no-block, etcd.service]
|
||||
- [systemctl, enable, cloud-metadata.service]
|
||||
- [systemctl, start, --no-block, kubelet.service]
|
||||
users:
|
||||
- default
|
||||
- name: fedora
|
||||
gecos: Fedora Admin
|
||||
sudo: ALL=(ALL) NOPASSWD:ALL
|
||||
groups: wheel,adm,systemd-journal,docker
|
||||
ssh-authorized-keys:
|
||||
- "${ssh_authorized_key}"
|
91
google-cloud/fedora-atomic/kubernetes/controllers.tf
Normal file
@ -0,0 +1,91 @@
|
||||
# Discrete DNS records for each controller's private IPv4 for etcd usage
|
||||
resource "google_dns_record_set" "etcds" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
# DNS Zone name where record should be created
|
||||
managed_zone = "${var.dns_zone_name}"
|
||||
|
||||
# DNS record
|
||||
name = "${format("%s-etcd%d.%s.", var.cluster_name, count.index, var.dns_zone)}"
|
||||
type = "A"
|
||||
ttl = 300
|
||||
|
||||
# private IPv4 address for etcd
|
||||
rrdatas = ["${element(google_compute_instance.controllers.*.network_interface.0.address, count.index)}"]
|
||||
}
|
||||
|
||||
# Zones in the region
|
||||
data "google_compute_zones" "all" {
|
||||
region = "${var.region}"
|
||||
}
|
||||
|
||||
locals {
|
||||
# TCP proxy load balancers require a fixed number of zonal backends. Spread
|
||||
# controllers over up to 3 zones, since all GCP regions have at least 3.
|
||||
zones = "${slice(data.google_compute_zones.all.names, 0, 3)}"
|
||||
controllers_ipv4_public = ["${google_compute_instance.controllers.*.network_interface.0.access_config.0.assigned_nat_ip}"]
|
||||
}
|
||||
|
||||
# Controller instances
|
||||
resource "google_compute_instance" "controllers" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
name = "${var.cluster_name}-controller-${count.index}"
|
||||
zone = "${element(local.zones, count.index)}"
|
||||
machine_type = "${var.controller_type}"
|
||||
|
||||
metadata {
|
||||
user-data = "${element(data.template_file.controller-cloudinit.*.rendered, count.index)}"
|
||||
}
|
||||
|
||||
boot_disk {
|
||||
auto_delete = true
|
||||
|
||||
initialize_params {
|
||||
image = "${var.os_image}"
|
||||
size = "${var.disk_size}"
|
||||
}
|
||||
}
|
||||
|
||||
network_interface {
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
# Ephemeral external IP
|
||||
access_config = {}
|
||||
}
|
||||
|
||||
can_ip_forward = true
|
||||
tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
# Controller Cloud-Init
|
||||
data "template_file" "controller-cloudinit" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
template = "${file("${path.module}/cloudinit/controller.yaml.tmpl")}"
|
||||
|
||||
vars = {
|
||||
# Cannot use cyclic dependencies on controllers or their DNS records
|
||||
etcd_name = "etcd${count.index}"
|
||||
etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
|
||||
# etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
|
||||
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", null_resource.repeat.*.triggers.name, null_resource.repeat.*.triggers.domain))}"
|
||||
|
||||
kubeconfig = "${indent(6, module.bootkube.kubeconfig)}"
|
||||
ssh_authorized_key = "${var.ssh_authorized_key}"
|
||||
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
|
||||
cluster_domain_suffix = "${var.cluster_domain_suffix}"
|
||||
}
|
||||
}
|
||||
|
||||
# Horrible hack to generate a Terraform list of a desired length without dependencies.
|
||||
# Ideal ${repeat("etcd", 3) -> ["etcd", "etcd", "etcd"]}
|
||||
resource null_resource "repeat" {
|
||||
count = "${var.controller_count}"
|
||||
|
||||
triggers {
|
||||
name = "etcd${count.index}"
|
||||
domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
|
||||
}
|
||||
}
|
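For reference, with `controller_count = 3`, a cluster named `yavin`, and DNS zone `example.com` (both illustrative), the triggers above expand the `etcd_initial_cluster` template variable to roughly:

```
# join(",", formatlist("%s=https://%s:2380", names, domains)) yields:
# etcd0=https://yavin-etcd0.example.com:2380,etcd1=https://yavin-etcd1.example.com:2380,etcd2=https://yavin-etcd2.example.com:2380
```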
149
google-cloud/fedora-atomic/kubernetes/network.tf
Normal file
@ -0,0 +1,149 @@
|
||||
resource "google_compute_network" "network" {
|
||||
name = "${var.cluster_name}"
|
||||
description = "Network for the ${var.cluster_name} cluster"
|
||||
auto_create_subnetworks = true
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "allow-ssh" {
|
||||
name = "${var.cluster_name}-allow-ssh"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [22]
|
||||
}
|
||||
|
||||
source_ranges = ["0.0.0.0/0"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "allow-apiserver" {
|
||||
name = "${var.cluster_name}-allow-apiserver"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [443]
|
||||
}
|
||||
|
||||
source_ranges = ["0.0.0.0/0"]
|
||||
target_tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "allow-ingress" {
|
||||
name = "${var.cluster_name}-allow-ingress"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [80, 443]
|
||||
}
|
||||
|
||||
source_ranges = ["0.0.0.0/0"]
|
||||
target_tags = ["${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "internal-etcd" {
|
||||
name = "${var.cluster_name}-internal-etcd"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [2380]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-controller"]
|
||||
target_tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
# Allow Prometheus to scrape etcd metrics
|
||||
resource "google_compute_firewall" "internal-etcd-metrics" {
|
||||
name = "${var.cluster_name}-internal-etcd-metrics"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [2381]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller"]
|
||||
}
|
||||
|
||||
# Calico BGP and IPIP
|
||||
# https://docs.projectcalico.org/v2.5/reference/public-cloud/gce
|
||||
resource "google_compute_firewall" "internal-calico" {
|
||||
count = "${var.networking == "calico" ? 1 : 0}"
|
||||
|
||||
name = "${var.cluster_name}-internal-calico"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = ["179"]
|
||||
}
|
||||
|
||||
allow {
|
||||
protocol = "ipip"
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
# flannel
|
||||
resource "google_compute_firewall" "internal-flannel" {
|
||||
count = "${var.networking == "flannel" ? 1 : 0}"
|
||||
|
||||
name = "${var.cluster_name}-internal-flannel"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "udp"
|
||||
ports = [8472]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
# Allow Prometheus to scrape node-exporter daemonset
|
||||
resource "google_compute_firewall" "internal-node-exporter" {
|
||||
name = "${var.cluster_name}-internal-node-exporter"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [9100]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
# kubelet API to allow kubectl exec and log
|
||||
resource "google_compute_firewall" "internal-kubelet" {
|
||||
name = "${var.cluster_name}-internal-kubelet"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [10250]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-controller"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|
||||
|
||||
resource "google_compute_firewall" "internal-kubelet-readonly" {
|
||||
name = "${var.cluster_name}-internal-kubelet-readonly"
|
||||
network = "${google_compute_network.network.name}"
|
||||
|
||||
allow {
|
||||
protocol = "tcp"
|
||||
ports = [10255]
|
||||
}
|
||||
|
||||
source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
|
||||
}
|