Mirror of https://github.com/puppetmaster/typhoon.git
synced 2025-08-02 23:41:34 +02:00
Compare commits
80 Commits
| SHA1 |
|------------|
| eec314b52f |
| bcce02a9ce |
| 42c523e6a2 |
| 64b4c10418 |
| 872b11b948 |
| 5b27d8d889 |
| 840b73f9ba |
| 915af3c6cc |
| c6586b69fd |
| ea3fc6d2a7 |
| c8c43f3991 |
| 58472438ce |
| 7f8e781ae4 |
| 56e9a82984 |
| e95b856a22 |
| 31f48a81a8 |
| 2b3f61d1bb |
| 8fd2978c31 |
| 7de03a1279 |
| be9f7b87d6 |
| 721c847943 |
| 78c9fdc18f |
| 884c8b39dc |
| 0e71f7e565 |
| 8c4200d425 |
| 5be5b261e2 |
| f034ef90ae |
| 3bba1ba0dc |
| dbe7604b67 |
| 9b405a19b2 |
| bfa1a679eb |
| f1da0731d8 |
| d641a058fe |
| 99a6d5478b |
| bc750aec33 |
| d55bfd5589 |
| 0be4673e44 |
| 3b44972d78 |
| 0127ee82c1 |
| a10d6977b8 |
| 05fe923c14 |
| d10620fb58 |
| 9b6113a058 |
| 5eb4078d68 |
| 8f0d2b5db4 |
| 2e89e161e9 |
| 55bb4dfba6 |
| 43fe78a2cc |
| 5a283b6443 |
| ff0c271d7b |
| 89088b6a99 |
| ee569e3a59 |
| daeaec0832 |
| db36036c81 |
| 7653e511be |
| f8474d68c9 |
| 032a24133b |
| f5f442b658 |
| ad871dbfa9 |
| 5801be53ff |
| c1b1669cf8 |
| dc03f7a4a9 |
| 1b8234eb91 |
| 4ba090feb0 |
| 4882fe1053 |
| 019009e9ee |
| 991a5c6cee |
| c60ec642bc |
| 38b4ff4700 |
| 7eb09237f4 |
| e58b424882 |
| b8eeafe4f9 |
| bdf1e6986e |
| 99e3721181 |
| ea365b551a |
| bbf2c13eef |
| da5d2c5321 |
| bceec9fdf5 |
| bec5250e73 |
8 .github/ISSUE_TEMPLATE.md (vendored)

@@ -4,11 +4,11 @@
 ### Environment
 
-* Platform: aws, bare-metal, google-cloud, digital-ocean
+* Platform: aws, azure, bare-metal, google-cloud, digital-ocean
 * OS: container-linux, fedora-atomic
-* Terraform: `terraform version`
-* Plugins: Provider plugin versions
-* Ref: Git SHA (if applicable)
+* Ref: Release version or Git SHA (reporting latest is **not** helpful)
+* Terraform: `terraform version` (reporting latest is **not** helpful)
+* Plugins: Provider plugin versions (reporting latest is **not** helpful)
 
 ### Problem
135 CHANGES.md

@@ -2,6 +2,139 @@

Notable changes between versions.

## Latest

## v1.12.3

* Kubernetes [v1.12.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1123)
* Add `enable_reporting` variable (default "false") to provide upstreams with usage data ([#345](https://github.com/poseidon/typhoon/pull/345))
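A usage sketch for the `enable_reporting` variable above; the module name and elided arguments are illustrative, only the `enable_reporting` line is specific to this change:

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.12.3"

  # ...required cluster arguments elided...

  # opt in to Calico/Felix usage reporting (defaults to "false")
  enable_reporting = "true"
}
```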
* Change kube-apiserver `--kubelet-preferred-address-types` to InternalIP,ExternalIP,Hostname
* Update Calico from v3.3.0 to [v3.3.1](https://docs.projectcalico.org/v3.3/releases/)
  * Disable Felix usage reporting by default ([#345](https://github.com/poseidon/typhoon/pull/345))
* Improve flannel manifests
  * [Rename](https://github.com/poseidon/terraform-render-bootkube/commit/d045a8e6b8eccfbb9d69bb51953b5a93d23f67f7) `kube-flannel` DaemonSet to `flannel` and `kube-flannel-cfg` ConfigMap to `flannel-config`
  * [Drop](https://github.com/poseidon/terraform-render-bootkube/commit/39f9afb3360ec642e5b98457c8bd07eda35b6c96) unused mounts and add a CPU resource request
* Update CoreDNS from v1.2.4 to [v1.2.6](https://coredns.io/2018/11/05/coredns-1.2.6-release/)
  * Enable CoreDNS `loop` and `loadbalance` plugins ([#340](https://github.com/poseidon/typhoon/pull/340))
* Fix pod-checkpointer log noise and checkpointable pods detection ([#346](https://github.com/poseidon/typhoon/pull/346))
* Use kubernetes-incubator/bootkube v0.14.0
* [Recommend](https://typhoon.psdn.io/topics/maintenance/#terraform-plugins-directory) switching from `~/.terraformrc` to the Terraform [third-party plugins](https://www.terraform.io/docs/configuration/providers.html#third-party-plugins) directory `~/.terraform.d/plugins/`
  * Allows pinning `terraform-provider-ct` and `terraform-provider-matchbox` versions
  * Improves safety of later plugin version migrations
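A sketch of such pinning, assuming the plugin binaries are installed with the documented naming convention under `~/.terraform.d/plugins/`; the version numbers shown are placeholders, not recommendations:

```tf
# With binaries installed as ~/.terraform.d/plugins/terraform-provider-ct_v0.2.1
# and ~/.terraform.d/plugins/terraform-provider-matchbox_v0.2.2, the versions
# can be pinned per provider.
provider "ct" {
  version = "0.2.1"
}

provider "matchbox" {
  version = "0.2.2"
}
```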

#### Azure

* Use eviction policy `Delete` for `Low` priority virtual machine scale set workers ([#343](https://github.com/poseidon/typhoon/pull/343))
  * Fix an issue where Azure defaulted to the `Deallocate` eviction policy, which required manually restarting deallocated instances. The `Delete` policy aligns Azure with AWS and GCP behavior.
* Require `terraform-provider-azurerm` v1.19+ (action required)

#### Bare-Metal

* Add Kubelet `/etc/iscsi` and `iscsiadm` mounts on bare-metal for iSCSI ([#103](https://github.com/poseidon/typhoon/pull/103))

#### Addons

* Update nginx-ingress from v0.20.0 to v0.21.0
* Update Prometheus from v2.4.3 to v2.5.0
* Update Grafana from v5.3.2 to v5.3.4

## v1.12.2

* Kubernetes [v1.12.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1122)
* Update CoreDNS from 1.2.2 to [1.2.4](https://github.com/coredns/coredns/releases/tag/v1.2.4)
* Update Calico from v3.2.3 to [v3.3.0](https://docs.projectcalico.org/v3.3/releases/)
* Disable Kubelet read-only port ([#324](https://github.com/poseidon/typhoon/pull/324))
* Fix CoreDNS AntiAffinity spec to prefer spreading replicas
* Ignore controller node user-data changes ([#335](https://github.com/poseidon/typhoon/pull/335))
  * Once all managed clusters use v1.12.2, it is possible to update `terraform-provider-ct`

#### AWS

* Add `disk_iops` variable for EBS volume IOPS ([#314](https://github.com/poseidon/typhoon/pull/314))
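For instance, a cluster using provisioned-IOPS volumes might set the following; a sketch, with the module name and the `disk_type`, `disk_size`, and `disk_iops` values chosen only for illustration:

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.12.3"

  # ...required cluster arguments elided...

  # io1 EBS volumes take an explicit IOPS value; gp2/standard leave it at "0"
  disk_type = "io1"
  disk_size = "100"
  disk_iops = "400"
}
```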

#### Azure

* Use new `azurerm_network_interface_backend_address_pool_association` ([#332](https://github.com/poseidon/typhoon/pull/332))
  * Require `terraform-provider-azurerm` v1.17+ (action required)
* Add `primary` field to `ip_configuration` needed by v1.17+ ([#331](https://github.com/poseidon/typhoon/pull/331))

#### DigitalOcean

* Add AAAA DNS records resolving to worker nodes ([#333](https://github.com/poseidon/typhoon/pull/333))
  * Hosting IPv6 apps requires editing nginx-ingress with `hostNetwork: true`

#### Google Cloud

* Add an IPv6 address and IPv6 forwarding rules for load balancing IPv6 Ingress ([#334](https://github.com/poseidon/typhoon/pull/334))
  * Add `ingress_static_ipv6` output variable for use in AAAA DNS records
  * Allow serving IPv6 applications via Kubernetes Ingress

#### Addons

* Configure Heapster to scrape Kubelets with bearer token auth ([#323](https://github.com/poseidon/typhoon/pull/323))
* Update Grafana from v5.3.1 to v5.3.2
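A sketch of consuming the `ingress_static_ipv6` output above for an AAAA record; the cluster module name, DNS zone, and record name are illustrative:

```tf
resource "google_dns_record_set" "ingress-aaaa" {
  # zone and record names are placeholders
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "AAAA"
  ttl          = 300

  # the new IPv6 address output by the cluster module
  rrdatas = ["${module.yavin.ingress_static_ipv6}"]
}
```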

## v1.12.1

* Kubernetes [v1.12.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#v1121)
* Update etcd from v3.3.9 to [v3.3.10](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md#v3310-2018-10-10)
* Update CoreDNS from 1.1.3 to [1.2.2](https://github.com/coredns/coredns/releases/tag/v1.2.2)
* Update Calico from v3.2.1 to [v3.2.3](https://docs.projectcalico.org/v3.2/releases/)
* Raise scheduler and controller-manager replicas to the larger of 2 or the number of controller nodes ([#312](https://github.com/poseidon/typhoon/pull/312))
  * Single-controller clusters continue to run 2 replicas as before
* Raise default CoreDNS replicas to the larger of 2 or the number of controller nodes ([#313](https://github.com/poseidon/typhoon/pull/313))
  * Add AntiAffinity preferred rule to favor spreading CoreDNS pods
* Annotate control plane and addon containers to use the Docker runtime seccomp profile ([#319](https://github.com/poseidon/typhoon/pull/319))
  * Override Kubernetes default behavior that starts containers with `seccomp=unconfined`

#### Azure

* Remove `admin_password` field (disabled) since it is now optional
* Require `terraform-provider-azurerm` v1.16+ (action required)

#### Bare-Metal

* Add support for `cached_install` mode with Flatcar Linux ([#315](https://github.com/poseidon/typhoon/pull/315))

#### DigitalOcean

* Require `terraform-provider-digitalocean` v1.0+ (action required)

#### Addons

* Update nginx-ingress from v0.19.0 to v0.20.0
* Update Prometheus from v2.3.2 to v2.4.3
* Update Grafana from v5.2.4 to v5.3.1

## v1.11.3

* Kubernetes [v1.11.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113)
* Introduce Typhoon for Azure as alpha ([#288](https://github.com/poseidon/typhoon/pull/288))
  * Special thanks to @justaugustus for an earlier variant
* Update Calico from v3.1.3 to v3.2.1 ([#278](https://github.com/poseidon/typhoon/pull/278))

#### AWS

* Remove firewall rule allowing ICMP packets to nodes ([#285](https://github.com/poseidon/typhoon/pull/285))

#### Bare-Metal

* Remove `controller_networkds` and `worker_networkds` variables. Use Container Linux Config snippets instead ([#277](https://github.com/poseidon/typhoon/pull/277))

#### Google Cloud

* Fix firewall to allow etcd client port 2379 traffic between controller nodes ([#287](https://github.com/poseidon/typhoon/pull/287))
  * kube-apiservers were only able to connect to their node's local etcd peer. While master node outages were tolerated, reaching a healthy peer took longer than necessary in some cases.
  * Reduces the time needed to bootstrap the cluster
* Remove firewall rule allowing workers to access the Nginx Ingress health check ([#284](https://github.com/poseidon/typhoon/pull/284))
  * The Nginx Ingress addon no longer uses hostNetwork; Prometheus scrapes via the CNI network

#### Addons

* Update nginx-ingress from 0.17.1 to 0.19.0
* Update kube-state-metrics from v1.3.1 to v1.4.0
* Update Grafana from 5.2.2 to 5.2.4

## v1.11.2

* Kubernetes [v1.11.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1112)

@@ -14,7 +147,7 @@ Notable changes between versions.
 * Introduce [Container Linux Config snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) on bare-metal
   * Validate and additively merge custom Container Linux Configs during terraform plan
   * Define files, systemd units, dropins, networkd configs, mounts, users, and more
-* [Require](https://typhoon.psdn.io/cl/bare-metal/#terraform-setup) `terraform-provider-ct` plugin v0.2.1 (action required!)
+* [Require](https://typhoon.psdn.io/cl/bare-metal/#terraform-setup) `terraform-provider-ct` plugin v0.2.1 (**action required!**)
 
 #### Addons
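The Container Linux Config snippets mentioned in the hunk above are passed to cluster modules as lists of raw CLC fragments. A minimal sketch against the AWS module's `controller_clc_snippets` variable (which appears later in this diff); the module name and snippet file are illustrative:

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.12.3"

  # ...required cluster arguments elided...

  # raw Container Linux Config fragments, validated and additively
  # merged into the rendered config during terraform plan
  controller_clc_snippets = [
    "${file("./snippets/controller-motd.yaml")}",
  ]
}
```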
45 README.md

@@ -11,34 +11,38 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [preemption](https://typhoon.psdn.io/google-cloud/#preemption) (varies by platform)
-* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
+* Ready for Ingress, Prometheus, Grafana, CSI, or other [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Modules
 
-Typhoon provides a Terraform Module for each supported operating system and platform.
+Typhoon provides a Terraform Module for each supported operating system and platform. Container Linux is a mature and reliable choice. Also, Kinvolk's Flatcar Linux fork is selectable on AWS and bare-metal.
 
 | Platform      | Operating System | Terraform Module | Status |
 |---------------|------------------|------------------|--------|
 | AWS           | Container Linux  | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
-| AWS           | Fedora Atomic    | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
+| Azure         | Container Linux  | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
 | Bare-Metal    | Container Linux  | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
-| Bare-Metal    | Fedora Atomic    | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
 | Digital Ocean | Container Linux  | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
-| Digital Ocean | Fedora Atomic    | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
 | Google Cloud  | Container Linux  | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable |
-| Google Cloud  | Fedora Atomic    | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
+
+The AWS and bare-metal `container-linux` modules allow picking Red Hat Container Linux (formerly CoreOS Container Linux) or Kinvolk's Flatcar Linux friendly fork.
+
+Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS.
+
+| Platform      | Operating System | Terraform Module | Status |
+|---------------|------------------|------------------|--------|
+| AWS           | Fedora Atomic    | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
+| Bare-Metal    | Fedora Atomic    | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
+| Digital Ocean | Fedora Atomic    | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
+| Google Cloud  | Fedora Atomic    | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | alpha |
 
 ## Documentation
 
 * [Docs](https://typhoon.psdn.io)
 * Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
-* Tutorials for [AWS](https://typhoon.psdn.io/cl/aws/), [Bare-Metal](https://typhoon.psdn.io/cl/bare-metal/), [Digital Ocean](https://typhoon.psdn.io/cl/digital-ocean/), and [Google-Cloud](https://typhoon.psdn.io/cl/google-cloud/)
+* Tutorials for [AWS](docs/cl/aws.md), [Azure](docs/cl/azure.md), [Bare-Metal](docs/cl/bare-metal.md), [Digital Ocean](docs/cl/digital-ocean.md), and [Google-Cloud](docs/cl/google-cloud.md)
 
 ## Usage

@@ -46,7 +50,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform

 ```tf
 module "google-cloud-yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.11.2"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.12.3"
 
   providers = {
     google = "google.default"
@@ -71,15 +75,14 @@ module "google-cloud-yavin" {
 }
 ```
 
-Fetch modules, plan the changes to be made, and apply the changes.
+Initialize modules, plan the changes to be made, and apply the changes.
 
 ```sh
 $ terraform init
-$ terraform get --update
 $ terraform plan
-Plan: 37 to add, 0 to change, 0 to destroy.
+Plan: 64 to add, 0 to change, 0 to destroy.
 $ terraform apply
-Apply complete! Resources: 37 added, 0 changed, 0 destroyed.
+Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
 ```
 
 In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud example creates a `yavin.example.com` DNS record to resolve to a network load balancer across controller nodes.

@@ -87,10 +90,10 @@ In 4-8 minutes (varies by platform), the cluster will be ready.

 ```sh
 $ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
 $ kubectl get nodes
-NAME                                          STATUS   AGE   VERSION
-yavin-controller-0.c.example-com.internal     Ready    6m    v1.11.2
-yavin-worker-jrbf.c.example-com.internal      Ready    5m    v1.11.2
-yavin-worker-mzdm.c.example-com.internal      Ready    5m    v1.11.2
+NAME                                        ROLES              STATUS  AGE   VERSION
+yavin-controller-0.c.example-com.internal   controller,master  Ready   6m    v1.12.3
+yavin-worker-jrbf.c.example-com.internal    node               Ready   5m    v1.12.3
+yavin-worker-mzdm.c.example-com.internal    node               Ready   5m    v1.12.3
 ```
 
 List the pods.
 
@@ -102,6 +105,7 @@ kube-system   calico-node-1cs8z                         2/2    Running   0
 kube-system   calico-node-d1l5b                         2/2    Running   0         6m
 kube-system   calico-node-sp9ps                         2/2    Running   0         6m
 kube-system   coredns-1187388186-zj5dl                  1/1    Running   0         6m
+kube-system   coredns-1187388186-dkh3o                  1/1    Running   0         6m
 kube-system   kube-apiserver-zppls                      1/1    Running   0         6m
 kube-system   kube-controller-manager-3271970485-gh9kt  1/1    Running   0         6m
 kube-system   kube-controller-manager-3271970485-h90v8  1/1    Running   1         6m
@@ -111,6 +115,7 @@ kube-system   kube-proxy-njn47                          1/1    Running   0
 kube-system   kube-scheduler-3895335239-5x87r           1/1    Running   0         6m
 kube-system   kube-scheduler-3895335239-bzrrt           1/1    Running   1         6m
 kube-system   pod-checkpointer-l6lrt                    1/1    Running   0         6m
+kube-system   pod-checkpointer-l6lrt-controller-0       1/1    Running   0         6m
 ```
 
 ## Non-Goals
@@ -15,6 +15,8 @@ spec:
     metadata:
       labels:
         app: container-linux-update-agent
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: update-agent

@@ -12,6 +12,8 @@ spec:
     metadata:
       labels:
         app: container-linux-update-operator
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
      - name: update-operator

@@ -18,10 +18,12 @@ spec:
       labels:
         name: grafana
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: grafana
-        image: grafana/grafana:5.2.2
+        image: grafana/grafana:5.3.4
         env:
         - name: GF_SERVER_HTTP_PORT
           value: "8080"

@@ -5,7 +5,7 @@ metadata:
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: system:heapster
+  name: heapster
 subjects:
 - kind: ServiceAccount
   name: heapster

30 addons/heapster/cluster-role.yaml (new file)

@@ -0,0 +1,30 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heapster
rules:
- apiGroups:
  - ""
  resources:
  - events
  - namespaces
  - nodes
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get

@@ -23,7 +23,7 @@ spec:
         image: k8s.gcr.io/heapster-amd64:v1.5.4
         command:
         - /heapster
-        - --source=kubernetes.summary_api:''
+        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
         livenessProbe:
           httpGet:
             path: /healthz

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

6 addons/nginx-ingress/azure/0-namespace.yaml (new file)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
  labels:
    name: ingress

42 addons/nginx-ingress/azure/default-backend/deployment.yaml (new file)

@@ -0,0 +1,42 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      name: default-backend
      phase: prod
  template:
    metadata:
      labels:
        name: default-backend
        phase: prod
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: default-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: k8s.gcr.io/defaultbackend:1.4
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
      terminationGracePeriodSeconds: 60

15 addons/nginx-ingress/azure/default-backend/service.yaml (new file)

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: ingress
spec:
  type: ClusterIP
  selector:
    name: default-backend
    phase: prod
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080

79 addons/nginx-ingress/azure/deployment.yaml (new file)

@@ -0,0 +1,79 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      name: nginx-ingress-controller
      phase: prod
  template:
    metadata:
      labels:
        name: nginx-ingress-controller
        phase: prod
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      nodeSelector:
        node-role.kubernetes.io/node: ""
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
        - --ingress-class=public
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: health
          containerPort: 10254
          hostPort: 10254
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33 # www-data
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

12 addons/nginx-ingress/azure/rbac/cluster-role-binding.yaml (new file)

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default

51 addons/nginx-ingress/azure/rbac/cluster-role.yaml (new file)

@@ -0,0 +1,51 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update

13 addons/nginx-ingress/azure/rbac/role-binding.yaml (new file)

@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default

41 addons/nginx-ingress/azure/rbac/role.yaml (new file)

@@ -0,0 +1,41 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress
  namespace: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-public"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update

22 addons/nginx-ingress/azure/service.yaml (new file)

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '10254'
spec:
  type: ClusterIP
  selector:
    name: nginx-ingress-controller
    phase: prod
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,10 +17,12 @@ spec:
       labels:
         name: ingress-controller-public
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -14,6 +14,8 @@ spec:
       labels:
         name: default-backend
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       containers:
       - name: default-backend

@@ -17,12 +17,14 @@ spec:
       labels:
         name: nginx-ingress-controller
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       nodeSelector:
         node-role.kubernetes.io/node: ""
       containers:
       - name: nginx-ingress-controller
-        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
+        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
         args:
         - /nginx-ingress-controller
         - --default-backend-service=$(POD_NAMESPACE)/default-backend

@@ -102,7 +102,7 @@ data:
             regex: 'true'
           - action: labelmap
             regex: __meta_kubernetes_node_label_(.+)
-          - source_labels: [__meta_kubernetes_node_name]
+          - source_labels: [__meta_kubernetes_node_address_InternalIP]
             action: replace
             target_label: __address__
             replacement: '${1}:2381'

@@ -14,11 +14,13 @@ spec:
       labels:
         name: prometheus
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: prometheus
       containers:
       - name: prometheus
-        image: quay.io/prometheus/prometheus:v2.3.2
+        image: quay.io/prometheus/prometheus:v2.5.0
         args:
         - --web.listen-address=0.0.0.0:9090
        - --config.file=/etc/prometheus/prometheus.yaml

@@ -18,11 +18,13 @@ spec:
       labels:
         name: kube-state-metrics
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: kube-state-metrics
       containers:
       - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.3.1
+        image: quay.io/coreos/kube-state-metrics:v1.4.0
         ports:
         - name: metrics
           containerPort: 8080

@@ -17,6 +17,8 @@ spec:
       labels:
         name: node-exporter
         phase: prod
+      annotations:
+        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
     spec:
       serviceAccountName: node-exporter
       securityContext:

@@ -299,7 +299,7 @@ data:
         labels:
           severity: warning
         annotations:
-          description: Pod {{$labels.namespaces}}/{{$labels.pod}} is was restarted {{$value}}
+          description: Pod {{$labels.namespaces}}/{{$labels.pod}} restarted {{$value}}
            times within the last hour
           summary: Pod is restarting frequently
     kubelet.rules.yaml: |

@@ -11,10 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/cl/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Docs

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,4 +11,5 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 }

@@ -7,7 +7,7 @@ systemd:
       - name: 40-etcd-cluster.conf
         contents: |
           [Service]
-          Environment="ETCD_IMAGE_TAG=v3.3.9"
+          Environment="ETCD_IMAGE_TAG=v3.3.10"
           Environment="ETCD_NAME=${etcd_name}"
           Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
           Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -88,6 +88,7 @@ systemd:
           --node-labels=node-role.kubernetes.io/master \
           --node-labels=node-role.kubernetes.io/controller="true" \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -122,7 +123,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.2
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -142,17 +143,14 @@ storage:
           set -e
           # Move experimental manifests
           [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
-          BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-          BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
-          BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
           exec /usr/bin/rkt run \
             --trust-keys-from-https \
-            --volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
+            --volume assets,kind=host,source=/opt/bootkube/assets \
             --mount volume=assets,target=/assets \
             --volume bootstrap,kind=host,source=/etc/kubernetes \
             --mount volume=bootstrap,target=/etc/kubernetes \
             $${RKT_OPTS} \
-            $${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
+            quay.io/coreos/bootkube:v0.14.0 \
             --net=host \
             --dns=host \
             --exec=/bootkube -- start --asset-dir=/assets "$@"

@@ -24,12 +24,13 @@ resource "aws_instance" "controllers" {
   instance_type = "${var.controller_type}"
 
   ami       = "${local.ami_id}"
-  user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"
+  user_data = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
 
   # storage
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }
 
   # network
@@ -38,12 +39,23 @@ resource "aws_instance" "controllers" {
   vpc_security_group_ids = ["${aws_security_group.controller.id}"]
 
   lifecycle {
-    ignore_changes = ["ami"]
+    ignore_changes = [
+      "ami",
+      "user_data",
+    ]
   }
 }
 
-# Controller Container Linux Config
-data "template_file" "controller_config" {
+# Controller Ignition configs
+data "ct_config" "controller-ignitions" {
+  count        = "${var.controller_count}"
+  content      = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
+  pretty_print = false
+  snippets     = ["${var.controller_clc_snippets}"]
+}
+
+# Controller Container Linux configs
+data "template_file" "controller-configs" {
   count = "${var.controller_count}"
 
   template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
@@ -73,10 +85,3 @@ data "template_file" "etcds" {
     dns_zone = "${var.dns_zone}"
   }
 }
-
-data "ct_config" "controller_ign" {
-  count        = "${var.controller_count}"
-  content      = "${element(data.template_file.controller_config.*.rendered, count.index)}"
-  pretty_print = false
-  snippets     = ["${var.controller_clc_snippets}"]
-}

@@ -5,7 +5,7 @@ resource "aws_route53_record" "apiserver" {
   name = "${format("%s.%s.", var.cluster_name, var.dns_zone)}"
   type = "A"
 
-  # AWS recommends their special "alias" records for ELBs
+  # AWS recommends their special "alias" records for NLBs
   alias {
     name    = "${aws_lb.nlb.dns_name}"
     zone_id = "${aws_lb.nlb.zone_id}"

@@ -11,16 +11,6 @@ resource "aws_security_group" "controller" {
   tags = "${map("Name", "${var.cluster_name}-controller")}"
 }
 
-resource "aws_security_group_rule" "controller-icmp" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type        = "ingress"
-  protocol    = "icmp"
-  from_port   = 0
-  to_port     = 0
-  cidr_blocks = ["0.0.0.0/0"]
-}
-
 resource "aws_security_group_rule" "controller-ssh" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -31,16 +21,6 @@ resource "aws_security_group_rule" "controller-ssh" {
   cidr_blocks = ["0.0.0.0/0"]
 }
 
-resource "aws_security_group_rule" "controller-apiserver" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type        = "ingress"
-  protocol    = "tcp"
-  from_port   = 6443
-  to_port     = 6443
-  cidr_blocks = ["0.0.0.0/0"]
-}
-
 resource "aws_security_group_rule" "controller-etcd" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -51,6 +31,7 @@ resource "aws_security_group_rule" "controller-etcd" {
   self = true
 }
 
+# Allow Prometheus to scrape etcd metrics
 resource "aws_security_group_rule" "controller-etcd-metrics" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -61,6 +42,16 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
   source_security_group_id = "${aws_security_group.worker.id}"
 }
 
+resource "aws_security_group_rule" "controller-apiserver" {
+  security_group_id = "${aws_security_group.controller.id}"
+
+  type        = "ingress"
+  protocol    = "tcp"
+  from_port   = 6443
+  to_port     = 6443
+  cidr_blocks = ["0.0.0.0/0"]
+}
+
 resource "aws_security_group_rule" "controller-flannel" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -81,6 +72,7 @@ resource "aws_security_group_rule" "controller-flannel-self" {
   self = true
 }
 
+# Allow Prometheus to scrape node-exporter daemonset
 resource "aws_security_group_rule" "controller-node-exporter" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -91,6 +83,7 @@ resource "aws_security_group_rule" "controller-node-exporter" {
   source_security_group_id = "${aws_security_group.worker.id}"
 }
 
+# Allow apiserver to access kubelets for exec, log, port-forward
 resource "aws_security_group_rule" "controller-kubelet" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -111,26 +104,6 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
   self = true
 }
 
-resource "aws_security_group_rule" "controller-kubelet-read" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.worker.id}"
-}
-
-resource "aws_security_group_rule" "controller-kubelet-read-self" {
-  security_group_id = "${aws_security_group.controller.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "controller-bgp" {
   security_group_id = "${aws_security_group.controller.id}"
 
@@ -213,16 +186,6 @@ resource "aws_security_group" "worker" {
   tags = "${map("Name", "${var.cluster_name}-worker")}"
 }
 
-resource "aws_security_group_rule" "worker-icmp" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type        = "ingress"
-  protocol    = "icmp"
-  from_port   = 0
-  to_port     = 0
-  cidr_blocks = ["0.0.0.0/0"]
-}
-
 resource "aws_security_group_rule" "worker-ssh" {
   security_group_id = "${aws_security_group.worker.id}"
 
@@ -273,6 +236,7 @@ resource "aws_security_group_rule" "worker-flannel-self" {
   self = true
 }
 
+# Allow Prometheus to scrape node-exporter daemonset
 resource "aws_security_group_rule" "worker-node-exporter" {
   security_group_id = "${aws_security_group.worker.id}"
 
@@ -293,6 +257,7 @@ resource "aws_security_group_rule" "ingress-health" {
   cidr_blocks = ["0.0.0.0/0"]
 }
 
+# Allow apiserver to access kubelets for exec, log, port-forward
 resource "aws_security_group_rule" "worker-kubelet" {
   security_group_id = "${aws_security_group.worker.id}"
 
@@ -303,6 +268,7 @@ resource "aws_security_group_rule" "worker-kubelet" {
   source_security_group_id = "${aws_security_group.controller.id}"
 }
 
+# Allow Prometheus to scrape kubelet metrics
 resource "aws_security_group_rule" "worker-kubelet-self" {
   security_group_id = "${aws_security_group.worker.id}"
 
@@ -313,26 +279,6 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
   self = true
 }
 
-resource "aws_security_group_rule" "worker-kubelet-read" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type                     = "ingress"
-  protocol                 = "tcp"
-  from_port                = 10255
-  to_port                  = 10255
-  source_security_group_id = "${aws_security_group.controller.id}"
-}
-
-resource "aws_security_group_rule" "worker-kubelet-read-self" {
-  security_group_id = "${aws_security_group.worker.id}"
-
-  type      = "ingress"
-  protocol  = "tcp"
-  from_port = 10255
-  to_port   = 10255
-  self      = true
-}
-
 resource "aws_security_group_rule" "worker-bgp" {
   security_group_id = "${aws_security_group.worker.id}"

@@ -59,6 +59,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }
 
+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (e.g. 100)"
+}
+
 variable "worker_price" {
   type    = "string"
   default = ""
@@ -128,3 +134,9 @@ variable "cluster_domain_suffix" {
   type    = "string"
   default = "cluster.local"
 }
+
+variable "enable_reporting" {
+  type        = "string"
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = "false"
+}

@@ -60,6 +60,7 @@ systemd:
           --network-plugin=cni \
           --node-labels=node-role.kubernetes.io/node \
           --pod-manifest-path=/etc/kubernetes/manifests \
+          --read-only-port=0 \
           --volume-plugin-dir=/var/lib/kubelet/volumeplugins
         ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
         Restart=always
@@ -92,7 +93,7 @@ storage:
       contents:
         inline: |
           KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
-          KUBELET_IMAGE_TAG=v1.11.2
+          KUBELET_IMAGE_TAG=v1.12.3
     - path: /etc/sysctl.d/max-user-watches.conf
       filesystem: root
       contents:
@@ -110,7 +111,7 @@ storage:
             --volume config,kind=host,source=/etc/kubernetes \
             --mount volume=config,target=/etc/kubernetes \
             --insecure-options=image \
-            docker://k8s.gcr.io/hyperkube:v1.11.2 \
+            docker://k8s.gcr.io/hyperkube:v1.12.3 \
             --net=host \
             --dns=host \
             --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -52,6 +52,12 @@ variable "disk_type" {
   description = "Type of the EBS volume (e.g. standard, gp2, io1)"
 }
 
+variable "disk_iops" {
+  type        = "string"
+  default     = "0"
+  description = "IOPS of the EBS volume (required for io1)"
+}
+
 variable "spot_price" {
   type    = "string"
   default = ""

@@ -46,12 +46,13 @@ resource "aws_launch_configuration" "worker" {
   spot_price        = "${var.spot_price}"
   enable_monitoring = false
 
-  user_data = "${data.ct_config.worker_ign.rendered}"
+  user_data = "${data.ct_config.worker-ignition.rendered}"
 
   # storage
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }
 
   # network
@@ -64,8 +65,15 @@ resource "aws_launch_configuration" "worker" {
   }
 }
 
-# Worker Container Linux Config
-data "template_file" "worker_config" {
+# Worker Ignition config
+data "ct_config" "worker-ignition" {
+  content      = "${data.template_file.worker-config.rendered}"
+  pretty_print = false
+  snippets     = ["${var.clc_snippets}"]
+}
+
+# Worker Container Linux config
+data "template_file" "worker-config" {
   template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
 
   vars = {
@@ -75,9 +83,3 @@ data "template_file" "worker_config" {
     cluster_domain_suffix = "${var.cluster_domain_suffix}"
   }
 }
-
-data "ct_config" "worker_ign" {
-  content      = "${data.template_file.worker_config.rendered}"
-  pretty_print = false
-  snippets     = ["${var.clc_snippets}"]
-}

@@ -11,10 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
-* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
+* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
+* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/)
+* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/) and [spot](https://typhoon.psdn.io/cl/aws/#spot) workers
 * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
 
 ## Docs

@@ -1,6 +1,6 @@
 # Self-hosted Kubernetes assets (kubeconfig, manifests)
 module "bootkube" {
-  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
+  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"
 
   cluster_name = "${var.cluster_name}"
   api_servers  = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,6 +11,7 @@ module "bootkube" {
   pod_cidr              = "${var.pod_cidr}"
   service_cidr          = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  enable_reporting      = "${var.enable_reporting}"
 
   # Fedora
   trusted_certs_dir = "/etc/pki/tls/certs"

@@ -19,24 +19,9 @@ write_files:
       ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
       ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
       ETCD_PEER_CLIENT_CERT_AUTH=true
-  - path: /etc/systemd/system/cloud-metadata.service
-    content: |
-      [Unit]
-      Description=Cloud metadata agent
-      [Service]
-      Type=oneshot
-      Environment=OUTPUT=/run/metadata/cloud
-      ExecStart=/usr/bin/mkdir -p /run/metadata
-      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
-        --url http://169.254.169.254/latest/meta-data/local-ipv4\
-        --retry 10)" > $${OUTPUT}'
-      [Install]
-      WantedBy=multi-user.target
   - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
     content: |
       [Unit]
-      Requires=cloud-metadata.service
-      After=cloud-metadata.service
       Wants=rpc-statd.service
       [Service]
       ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -65,6 +50,7 @@ write_files:
       --node-labels=node-role.kubernetes.io/master \
       --node-labels=node-role.kubernetes.io/controller="true" \
       --pod-manifest-path=/etc/kubernetes/manifests \
+      --read-only-port=0 \
       --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
       --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
   - path: /etc/kubernetes/kubeconfig
@@ -92,11 +78,10 @@ bootcmd:
 runcmd:
   - [systemctl, daemon-reload]
   - [systemctl, restart, NetworkManager]
-  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
-  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
-  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
+  - "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
+  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
+  - "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
   - [systemctl, start, --no-block, etcd.service]
-  - [systemctl, enable, cloud-metadata.service]
   - [systemctl, start, --no-block, kubelet.service]
 users:
   - default

@@ -30,6 +30,7 @@ resource "aws_instance" "controllers" {
   root_block_device {
     volume_type = "${var.disk_type}"
     volume_size = "${var.disk_size}"
+    iops        = "${var.disk_iops}"
   }
 
   # network
@@ -38,7 +39,10 @@ resource "aws_instance" "controllers" {
   vpc_security_group_ids = ["${aws_security_group.controller.id}"]
 
   lifecycle {
-    ignore_changes = ["ami"]
+    ignore_changes = [
+      "ami",
+      "user_data",
+    ]
   }
 }

@@ -5,7 +5,7 @@ resource "aws_route53_record" "apiserver" {
   name = "${format("%s.%s.", var.cluster_name, var.dns_zone)}"
   type = "A"
 
-  # AWS recommends their special "alias" records for ELBs
+  # AWS recommends their special "alias" records for NLBs
   alias {
     name    = "${aws_lb.nlb.dns_name}"
     zone_id = "${aws_lb.nlb.zone_id}"
@@ -11,16 +11,6 @@ resource "aws_security_group" "controller" {
  tags = "${map("Name", "${var.cluster_name}-controller")}"
}

resource "aws_security_group_rule" "controller-icmp" {
  security_group_id = "${aws_security_group.controller.id}"

  type        = "ingress"
  protocol    = "icmp"
  from_port   = 0
  to_port     = 0
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-ssh" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -31,16 +21,6 @@ resource "aws_security_group_rule" "controller-ssh" {
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-apiserver" {
  security_group_id = "${aws_security_group.controller.id}"

  type        = "ingress"
  protocol    = "tcp"
  from_port   = 6443
  to_port     = 6443
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-etcd" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -51,6 +31,7 @@ resource "aws_security_group_rule" "controller-etcd" {
  self = true
}

# Allow Prometheus to scrape etcd metrics
resource "aws_security_group_rule" "controller-etcd-metrics" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -61,6 +42,16 @@ resource "aws_security_group_rule" "controller-etcd-metrics" {
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-apiserver" {
  security_group_id = "${aws_security_group.controller.id}"

  type        = "ingress"
  protocol    = "tcp"
  from_port   = 6443
  to_port     = 6443
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "controller-flannel" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -81,6 +72,7 @@ resource "aws_security_group_rule" "controller-flannel-self" {
  self = true
}

# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "controller-node-exporter" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -91,6 +83,7 @@ resource "aws_security_group_rule" "controller-node-exporter" {
  source_security_group_id = "${aws_security_group.worker.id}"
}

# Allow apiserver to access kubelets for exec, log, port-forward
resource "aws_security_group_rule" "controller-kubelet" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -111,26 +104,6 @@ resource "aws_security_group_rule" "controller-kubelet-self" {
  self = true
}

resource "aws_security_group_rule" "controller-kubelet-read" {
  security_group_id = "${aws_security_group.controller.id}"

  type      = "ingress"
  protocol  = "tcp"
  from_port = 10255
  to_port   = 10255
  source_security_group_id = "${aws_security_group.worker.id}"
}

resource "aws_security_group_rule" "controller-kubelet-read-self" {
  security_group_id = "${aws_security_group.controller.id}"

  type      = "ingress"
  protocol  = "tcp"
  from_port = 10255
  to_port   = 10255
  self = true
}

resource "aws_security_group_rule" "controller-bgp" {
  security_group_id = "${aws_security_group.controller.id}"

@@ -213,16 +186,6 @@ resource "aws_security_group" "worker" {
  tags = "${map("Name", "${var.cluster_name}-worker")}"
}

resource "aws_security_group_rule" "worker-icmp" {
  security_group_id = "${aws_security_group.worker.id}"

  type        = "ingress"
  protocol    = "icmp"
  from_port   = 0
  to_port     = 0
  cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "worker-ssh" {
  security_group_id = "${aws_security_group.worker.id}"

@@ -273,6 +236,7 @@ resource "aws_security_group_rule" "worker-flannel-self" {
  self = true
}

# Allow Prometheus to scrape node-exporter daemonset
resource "aws_security_group_rule" "worker-node-exporter" {
  security_group_id = "${aws_security_group.worker.id}"

@@ -293,6 +257,7 @@ resource "aws_security_group_rule" "ingress-health" {
  cidr_blocks = ["0.0.0.0/0"]
}

# Allow apiserver to access kubelets for exec, log, port-forward
resource "aws_security_group_rule" "worker-kubelet" {
  security_group_id = "${aws_security_group.worker.id}"

@@ -303,6 +268,7 @@ resource "aws_security_group_rule" "worker-kubelet" {
  source_security_group_id = "${aws_security_group.controller.id}"
}

# Allow Prometheus to scrape kubelet metrics
resource "aws_security_group_rule" "worker-kubelet-self" {
  security_group_id = "${aws_security_group.worker.id}"

@@ -313,26 +279,6 @@ resource "aws_security_group_rule" "worker-kubelet-self" {
  self = true
}

resource "aws_security_group_rule" "worker-kubelet-read" {
  security_group_id = "${aws_security_group.worker.id}"

  type      = "ingress"
  protocol  = "tcp"
  from_port = 10255
  to_port   = 10255
  source_security_group_id = "${aws_security_group.controller.id}"
}

resource "aws_security_group_rule" "worker-kubelet-read-self" {
  security_group_id = "${aws_security_group.worker.id}"

  type      = "ingress"
  protocol  = "tcp"
  from_port = 10255
  to_port   = 10255
  self = true
}

resource "aws_security_group_rule" "worker-bgp" {
  security_group_id = "${aws_security_group.worker.id}"
@@ -53,6 +53,12 @@ variable "disk_type" {
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

variable "disk_iops" {
  type        = "string"
  default     = "0"
  description = "IOPS of the EBS volume (e.g. 100)"
}

variable "worker_price" {
  type    = "string"
  default = ""
@@ -110,3 +116,9 @@ variable "cluster_domain_suffix" {
  type    = "string"
  default = "cluster.local"
}

variable "enable_reporting" {
  type        = "string"
  description = "Enable usage or analytics reporting to upstreams (Calico)"
  default     = "false"
}
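
For context, a minimal sketch of how the new disk and reporting variables might be set on a cluster module; the module name, source ref, and values here are illustrative assumptions, not part of this diff:

# Hypothetical usage sketch - module path/ref and values are placeholders.
module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.12.3"

  # provisioned IOPS apply to io1 volumes ("required for io1" per the worker variable below)
  disk_type = "io1"
  disk_size = "100"
  disk_iops = "400"

  # opt in to usage reporting to upstreams (Calico); defaults to "false"
  enable_reporting = "true"

  # ...plus the module's required variables (cluster_name, dns_zone, ssh_authorized_key, asset_dir, etc.)
}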

@@ -1,23 +1,8 @@
#cloud-config
write_files:
  - path: /etc/systemd/system/cloud-metadata.service
    content: |
      [Unit]
      Description=Cloud metadata agent
      [Service]
      Type=oneshot
      Environment=OUTPUT=/run/metadata/cloud
      ExecStart=/usr/bin/mkdir -p /run/metadata
      ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
        --url http://169.254.169.254/latest/meta-data/local-ipv4\
        --retry 10)" > $${OUTPUT}'
      [Install]
      WantedBy=multi-user.target
  - path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
    content: |
      [Unit]
      Requires=cloud-metadata.service
      After=cloud-metadata.service
      Wants=rpc-statd.service
      [Service]
      ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -43,6 +28,7 @@ write_files:
      --network-plugin=cni \
      --node-labels=node-role.kubernetes.io/node \
      --pod-manifest-path=/etc/kubernetes/manifests \
      --read-only-port=0 \
      --volume-plugin-dir=/var/lib/kubelet/volumeplugins"
  - path: /etc/kubernetes/kubeconfig
    permissions: '0644'
@@ -68,8 +54,7 @@ bootcmd:
runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, restart, NetworkManager]
  - [systemctl, enable, cloud-metadata.service]
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
  - "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
  - [systemctl, start, --no-block, kubelet.service]
users:
  - default
@@ -46,6 +46,12 @@ variable "disk_type" {
  description = "Type of the EBS volume (e.g. standard, gp2, io1)"
}

variable "disk_iops" {
  type        = "string"
  default     = "0"
  description = "IOPS of the EBS volume (required for io1)"
}

variable "spot_price" {
  type    = "string"
  default = ""
@@ -52,6 +52,7 @@ resource "aws_launch_configuration" "worker" {
  root_block_device {
    volume_type = "${var.disk_type}"
    volume_size = "${var.disk_size}"
    iops        = "${var.disk_iops}"
  }

  # network

azure/container-linux/kubernetes/LICENSE (new file, 23 lines)
@@ -0,0 +1,23 @@
The MIT License (MIT)

Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

azure/container-linux/kubernetes/README.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

Typhoon is a minimal and free Kubernetes distribution.

* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Free (freedom and cost) and privacy-respecting
* Practical for labs, datacenters, and clouds

Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the Azure [tutorial](https://typhoon.psdn.io/cl/azure/).
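
As orientation for the new Azure module, a hypothetical instantiation follows; the ref and all values are placeholders, and the tutorial linked above is the authoritative reference:

# Hypothetical example - cluster name, DNS zone, and ref are illustrative.
module "azure-ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.12.3"

  # Azure
  cluster_name   = "ramius"
  region         = "centralus"
  dns_zone       = "azure.example.com"
  dns_zone_group = "example-group"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/ramius"

  # optional
  worker_count    = 2
  worker_priority = "Low"
}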

azure/container-linux/kubernetes/bootkube.tf (new file, 14 lines)
@@ -0,0 +1,14 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
  source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

  cluster_name          = "${var.cluster_name}"
  api_servers           = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
  etcd_servers          = ["${formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)}"]
  asset_dir             = "${var.asset_dir}"
  networking            = "flannel"
  pod_cidr              = "${var.pod_cidr}"
  service_cidr          = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"
  enable_reporting      = "${var.enable_reporting}"
}

azure/container-linux/kubernetes/cl/controller.yaml.tmpl (new file, 161 lines)
@@ -0,0 +1,161 @@
---
systemd:
  units:
    - name: etcd-member.service
      enable: true
      dropins:
        - name: 40-etcd-cluster.conf
          contents: |
            [Service]
            Environment="ETCD_IMAGE_TAG=v3.3.10"
            Environment="ETCD_NAME=${etcd_name}"
            Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
            Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
            Environment="ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379"
            Environment="ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380"
            Environment="ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381"
            Environment="ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}"
            Environment="ETCD_STRICT_RECONFIG_CHECK=true"
            Environment="ETCD_SSL_DIR=/etc/ssl/etcd"
            Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt"
            Environment="ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt"
            Environment="ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key"
            Environment="ETCD_CLIENT_CERT_AUTH=true"
            Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt"
            Environment="ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt"
            Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
            Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
    - name: docker.service
      enable: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
      enable: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Wants=systemd-resolved.service
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
      enable: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube
        Wants=rpc-statd.service
        [Service]
        EnvironmentFile=/etc/kubernetes/kubelet.env
        Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
          --volume=resolv,kind=host,source=/etc/resolv.conf \
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-lib-calico,kind=host,source=/var/lib/calico \
          --mount volume=var-lib-calico,target=/var/lib/calico \
          --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
          --mount volume=opt-cni-bin,target=/opt/cni/bin \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log \
          --insecure-options=image"
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${k8s_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node-role.kubernetes.io/master \
          --node-labels=node-role.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: bootkube.service
      contents: |
        [Unit]
        Description=Bootstrap a Kubernetes cluster
        ConditionPathExists=!/opt/bootkube/init_bootkube.done
        [Service]
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootkube
        ExecStart=/opt/bootkube/bootkube-start
        ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/kubernetes/kubeconfig
      filesystem: root
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.12.3
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /opt/bootkube/bootkube-start
      filesystem: root
      mode: 0544
      user:
        id: 500
      group:
        id: 500
      contents:
        inline: |
          #!/bin/bash
          # Wrapper for bootkube start
          set -e
          # Move experimental manifests
          [ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
            --volume assets,kind=host,source=/opt/bootkube/assets \
            --mount volume=assets,target=/assets \
            --volume bootstrap,kind=host,source=/etc/kubernetes \
            --mount volume=bootstrap,target=/etc/kubernetes \
            $${RKT_OPTS} \
            quay.io/coreos/bootkube:v0.14.0 \
            --net=host \
            --dns=host \
            --exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "${ssh_authorized_key}"

azure/container-linux/kubernetes/controllers.tf (new file, 168 lines)
@@ -0,0 +1,168 @@
# Discrete DNS records for each controller's private IPv4 for etcd usage
resource "azurerm_dns_a_record" "etcds" {
  count               = "${var.controller_count}"
  resource_group_name = "${var.dns_zone_group}"

  # DNS Zone name where record should be created
  zone_name = "${var.dns_zone}"

  # DNS record
  name = "${format("%s-etcd%d", var.cluster_name, count.index)}"
  ttl  = 300

  # private IPv4 address for etcd
  records = ["${element(azurerm_network_interface.controllers.*.private_ip_address, count.index)}"]
}

locals {
  # Channel for a Container Linux derivative
  # coreos-stable -> Container Linux Stable
  channel = "${element(split("-", var.os_image), 1)}"
}

# Controller availability set to spread controllers
resource "azurerm_availability_set" "controllers" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                         = "${var.cluster_name}-controllers"
  location                     = "${var.region}"
  platform_fault_domain_count  = 2
  platform_update_domain_count = 4
  managed                      = true
}

# Controller instances
resource "azurerm_virtual_machine" "controllers" {
  count               = "${var.controller_count}"
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                = "${var.cluster_name}-controller-${count.index}"
  location            = "${var.region}"
  availability_set_id = "${azurerm_availability_set.controllers.id}"
  vm_size             = "${var.controller_type}"

  # boot
  storage_image_reference {
    publisher = "CoreOS"
    offer     = "CoreOS"
    sku       = "${local.channel}"
    version   = "latest"
  }

  # storage
  storage_os_disk {
    name              = "${var.cluster_name}-controller-${count.index}"
    create_option     = "FromImage"
    caching           = "ReadWrite"
    disk_size_gb      = "${var.disk_size}"
    os_type           = "Linux"
    managed_disk_type = "Premium_LRS"
  }

  # network
  network_interface_ids = ["${element(azurerm_network_interface.controllers.*.id, count.index)}"]

  os_profile {
    computer_name  = "${var.cluster_name}-controller-${count.index}"
    admin_username = "core"
    custom_data    = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
  }

  # Azure mandates setting an ssh_key, even though Ignition custom_data handles it too
  os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
      path     = "/home/core/.ssh/authorized_keys"
      key_data = "${var.ssh_authorized_key}"
    }
  }

  # lifecycle
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  lifecycle {
    ignore_changes = [
      "storage_os_disk",
      "os_profile",
    ]
  }
}

# Controller NICs with public and private IPv4
resource "azurerm_network_interface" "controllers" {
  count               = "${var.controller_count}"
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                      = "${var.cluster_name}-controller-${count.index}"
  location                  = "${azurerm_resource_group.cluster.location}"
  network_security_group_id = "${azurerm_network_security_group.controller.id}"

  ip_configuration {
    name                          = "ip0"
    subnet_id                     = "${azurerm_subnet.controller.id}"
    private_ip_address_allocation = "dynamic"

    # public IPv4
    public_ip_address_id = "${element(azurerm_public_ip.controllers.*.id, count.index)}"
  }
}

# Add controller NICs to the controller backend address pool
resource "azurerm_network_interface_backend_address_pool_association" "controllers" {
  network_interface_id    = "${azurerm_network_interface.controllers.id}"
  ip_configuration_name   = "ip0"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.controller.id}"
}

# Controller public IPv4 addresses
resource "azurerm_public_ip" "controllers" {
  count               = "${var.controller_count}"
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                         = "${var.cluster_name}-controller-${count.index}"
  location                     = "${azurerm_resource_group.cluster.location}"
  sku                          = "Standard"
  public_ip_address_allocation = "static"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
  count        = "${var.controller_count}"
  content      = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
  pretty_print = false
  snippets     = ["${var.controller_clc_snippets}"]
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
  count = "${var.controller_count}"

  template = "${file("${path.module}/cl/controller.yaml.tmpl")}"

  vars = {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"

    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster = "${join(",", data.template_file.etcds.*.rendered)}"

    kubeconfig            = "${indent(10, module.bootkube.kubeconfig)}"
    ssh_authorized_key    = "${var.ssh_authorized_key}"
    k8s_dns_service_ip    = "${cidrhost(var.service_cidr, 10)}"
    cluster_domain_suffix = "${var.cluster_domain_suffix}"
  }
}

data "template_file" "etcds" {
  count    = "${var.controller_count}"
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  vars {
    index        = "${count.index}"
    cluster_name = "${var.cluster_name}"
    dns_zone     = "${var.dns_zone}"
  }
}
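
To make the etcds template above concrete: with controller_count = 3 and hypothetical values cluster_name = "ramius" and dns_zone = "azure.example.com", the joined etcd_initial_cluster value would render as:

# Hypothetical rendering, names are placeholders, not part of this diff:
# etcd0=https://ramius-etcd0.azure.example.com:2380,etcd1=https://ramius-etcd1.azure.example.com:2380,etcd2=https://ramius-etcd2.azure.example.com:2380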

azure/container-linux/kubernetes/lb.tf (new file, 144 lines)
@@ -0,0 +1,144 @@
# DNS record for the apiserver load balancer
resource "azurerm_dns_a_record" "apiserver" {
  resource_group_name = "${var.dns_zone_group}"

  # DNS Zone name where record should be created
  zone_name = "${var.dns_zone}"

  # DNS record
  name = "${var.cluster_name}"
  ttl  = 300

  # IPv4 address of apiserver load balancer
  records = ["${azurerm_public_ip.apiserver-ipv4.ip_address}"]
}

# Static IPv4 address for the apiserver frontend
resource "azurerm_public_ip" "apiserver-ipv4" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                         = "${var.cluster_name}-apiserver-ipv4"
  location                     = "${var.region}"
  sku                          = "Standard"
  public_ip_address_allocation = "static"
}

# Static IPv4 address for the ingress frontend
resource "azurerm_public_ip" "ingress-ipv4" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                         = "${var.cluster_name}-ingress-ipv4"
  location                     = "${var.region}"
  sku                          = "Standard"
  public_ip_address_allocation = "static"
}

# Network Load Balancer for apiservers and ingress
resource "azurerm_lb" "cluster" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name     = "${var.cluster_name}"
  location = "${var.region}"
  sku      = "Standard"

  frontend_ip_configuration {
    name                 = "apiserver"
    public_ip_address_id = "${azurerm_public_ip.apiserver-ipv4.id}"
  }

  frontend_ip_configuration {
    name                 = "ingress"
    public_ip_address_id = "${azurerm_public_ip.ingress-ipv4.id}"
  }
}

resource "azurerm_lb_rule" "apiserver" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                           = "apiserver"
  loadbalancer_id                = "${azurerm_lb.cluster.id}"
  frontend_ip_configuration_name = "apiserver"

  protocol                = "Tcp"
  frontend_port           = 6443
  backend_port            = 6443
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.controller.id}"
  probe_id                = "${azurerm_lb_probe.apiserver.id}"
}

resource "azurerm_lb_rule" "ingress-http" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                           = "ingress-http"
  loadbalancer_id                = "${azurerm_lb.cluster.id}"
  frontend_ip_configuration_name = "ingress"

  protocol                = "Tcp"
  frontend_port           = 80
  backend_port            = 80
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.worker.id}"
  probe_id                = "${azurerm_lb_probe.ingress.id}"
}

resource "azurerm_lb_rule" "ingress-https" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                           = "ingress-https"
  loadbalancer_id                = "${azurerm_lb.cluster.id}"
  frontend_ip_configuration_name = "ingress"

  protocol                = "Tcp"
  frontend_port           = 443
  backend_port            = 443
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.worker.id}"
  probe_id                = "${azurerm_lb_probe.ingress.id}"
}

# Address pool of controllers
resource "azurerm_lb_backend_address_pool" "controller" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name            = "controller"
  loadbalancer_id = "${azurerm_lb.cluster.id}"
}

# Address pool of workers
resource "azurerm_lb_backend_address_pool" "worker" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name            = "worker"
  loadbalancer_id = "${azurerm_lb.cluster.id}"
}

# Health checks / probes

# TCP health check for apiserver
resource "azurerm_lb_probe" "apiserver" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name            = "apiserver"
  loadbalancer_id = "${azurerm_lb.cluster.id}"
  protocol        = "Tcp"
  port            = 6443

  # unhealthy threshold
  number_of_probes = 3

  interval_in_seconds = 5
}

# HTTP health check for ingress
resource "azurerm_lb_probe" "ingress" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name            = "ingress"
  loadbalancer_id = "${azurerm_lb.cluster.id}"
  protocol        = "Http"
  port            = 10254
  request_path    = "/healthz"

  # unhealthy threshold
  number_of_probes = 3

  interval_in_seconds = 5
}

azure/container-linux/kubernetes/network.tf (new file, 33 lines)
@@ -0,0 +1,33 @@
# Organize cluster into a resource group
resource "azurerm_resource_group" "cluster" {
  name     = "${var.cluster_name}"
  location = "${var.region}"
}

resource "azurerm_virtual_network" "network" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name          = "${var.cluster_name}"
  location      = "${azurerm_resource_group.cluster.location}"
  address_space = ["${var.host_cidr}"]
}

# Subnets - separate subnets for controller and workers because Azure
# network security groups are based on IPv4 CIDR rather than instance
# tags like GCP or security group membership like AWS

resource "azurerm_subnet" "controller" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                 = "controller"
  virtual_network_name = "${azurerm_virtual_network.network.name}"
  address_prefix       = "${cidrsubnet(var.host_cidr, 1, 0)}"
}

resource "azurerm_subnet" "worker" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                 = "worker"
  virtual_network_name = "${azurerm_virtual_network.network.name}"
  address_prefix       = "${cidrsubnet(var.host_cidr, 1, 1)}"
}
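
Since the arithmetic behind the subnet split is easy to misread, here is a sketch of what cidrsubnet evaluates to for the default host_cidr of 10.0.0.0/16 (a custom host_cidr shifts both halves accordingly):

# cidrsubnet(prefix, newbits, netnum) adds newbits to the prefix length and
# selects the netnum-th resulting subnet, splitting the /16 VNet into two /17s.
locals {
  controller_subnet_example = "${cidrsubnet("10.0.0.0/16", 1, 0)}" # => "10.0.0.0/17"
  worker_subnet_example     = "${cidrsubnet("10.0.0.0/16", 1, 1)}" # => "10.0.128.0/17"
}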

azure/container-linux/kubernetes/outputs.tf (new file, 32 lines)
@@ -0,0 +1,32 @@
# Outputs for Kubernetes Ingress

output "ingress_static_ipv4" {
  value       = "${azurerm_public_ip.ingress-ipv4.ip_address}"
  description = "IPv4 address of the load balancer for distributing traffic to Ingress controllers"
}

# Outputs for worker pools

output "region" {
  value = "${azurerm_resource_group.cluster.location}"
}

output "resource_group_name" {
  value = "${azurerm_resource_group.cluster.name}"
}

output "subnet_id" {
  value = "${azurerm_subnet.worker.id}"
}

output "security_group_id" {
  value = "${azurerm_network_security_group.worker.id}"
}

output "backend_address_pool_id" {
  value = "${azurerm_lb_backend_address_pool.worker.id}"
}

output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}

azure/container-linux/kubernetes/require.tf (new file, 25 lines)
@@ -0,0 +1,25 @@
# Terraform version and plugin versions

terraform {
  required_version = ">= 0.11.0"
}

provider "azurerm" {
  version = "~> 1.19"
}

provider "local" {
  version = "~> 1.0"
}

provider "null" {
  version = "~> 1.0"
}

provider "template" {
  version = "~> 1.0"
}

provider "tls" {
  version = "~> 1.0"
}

azure/container-linux/kubernetes/security.tf (new file, 287 lines)
@@ -0,0 +1,287 @@
# Controller security group

resource "azurerm_network_security_group" "controller" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name     = "${var.cluster_name}-controller"
  location = "${azurerm_resource_group.cluster.location}"
}

resource "azurerm_network_security_rule" "controller-ssh" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-ssh"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

resource "azurerm_network_security_rule" "controller-etcd" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-etcd"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2005"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "2379-2380"
  source_address_prefix       = "${azurerm_subnet.controller.address_prefix}"
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

# Allow Prometheus to scrape etcd metrics
resource "azurerm_network_security_rule" "controller-etcd-metrics" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-etcd-metrics"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2010"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "2381"
  source_address_prefix       = "${azurerm_subnet.worker.address_prefix}"
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

resource "azurerm_network_security_rule" "controller-apiserver" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-apiserver"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2015"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "6443"
  source_address_prefix       = "*"
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

resource "azurerm_network_security_rule" "controller-flannel" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-flannel"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2020"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Udp"
  source_port_range           = "*"
  destination_port_range      = "8472"
  source_address_prefixes     = ["${azurerm_subnet.controller.address_prefix}", "${azurerm_subnet.worker.address_prefix}"]
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "controller-node-exporter" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-node-exporter"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2025"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9100"
  source_address_prefix       = "${azurerm_subnet.worker.address_prefix}"
  destination_address_prefix  = "${azurerm_subnet.controller.address_prefix}"
}

# Allow apiserver to access kubelets for exec, log, port-forward
resource "azurerm_network_security_rule" "controller-kubelet" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-kubelet"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "2030"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10250"

  # allow Prometheus to scrape kubelet metrics too
  source_address_prefixes    = ["${azurerm_subnet.controller.address_prefix}", "${azurerm_subnet.worker.address_prefix}"]
  destination_address_prefix = "${azurerm_subnet.controller.address_prefix}"
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
# https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules

resource "azurerm_network_security_rule" "controller-allow-loadblancer" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-loadbalancer"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "3000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "AzureLoadBalancer"
  destination_address_prefix  = "*"
}

resource "azurerm_network_security_rule" "controller-deny-all" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "deny-all"
  network_security_group_name = "${azurerm_network_security_group.controller.name}"
  priority                    = "3005"
  access                      = "Deny"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
}

# Worker security group

resource "azurerm_network_security_group" "worker" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name     = "${var.cluster_name}-worker"
  location = "${azurerm_resource_group.cluster.location}"
}

resource "azurerm_network_security_rule" "worker-ssh" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-ssh"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "${azurerm_subnet.controller.address_prefix}"
  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
}

resource "azurerm_network_security_rule" "worker-http" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-http"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2005"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = "*"
  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
}

resource "azurerm_network_security_rule" "worker-https" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-https"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2010"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
}

resource "azurerm_network_security_rule" "worker-flannel" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-flannel"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2015"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Udp"
  source_port_range           = "*"
  destination_port_range      = "8472"
  source_address_prefixes     = ["${azurerm_subnet.controller.address_prefix}", "${azurerm_subnet.worker.address_prefix}"]
  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "worker-node-exporter" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-node-exporter"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2020"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9100"
  source_address_prefix       = "${azurerm_subnet.worker.address_prefix}"
  destination_address_prefix  = "${azurerm_subnet.worker.address_prefix}"
}

# Allow apiserver to access kubelets for exec, log, port-forward
resource "azurerm_network_security_rule" "worker-kubelet" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-kubelet"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "2025"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10250"

  # allow Prometheus to scrape kubelet metrics too
  source_address_prefixes    = ["${azurerm_subnet.controller.address_prefix}", "${azurerm_subnet.worker.address_prefix}"]
  destination_address_prefix = "${azurerm_subnet.worker.address_prefix}"
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
# https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules

resource "azurerm_network_security_rule" "worker-allow-loadblancer" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "allow-loadbalancer"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "3000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "AzureLoadBalancer"
  destination_address_prefix  = "*"
}

resource "azurerm_network_security_rule" "worker-deny-all" {
  resource_group_name = "${azurerm_resource_group.cluster.name}"

  name                        = "deny-all"
  network_security_group_name = "${azurerm_network_security_group.worker.name}"
  priority                    = "3005"
  access                      = "Deny"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
}

azure/container-linux/kubernetes/ssh.tf (new file, 95 lines)
@@ -0,0 +1,95 @@
# Secure copy etcd TLS assets to controllers.
resource "null_resource" "copy-controller-secrets" {
  count = "${var.controller_count}"

  depends_on = [
    "azurerm_virtual_machine.controllers",
  ]

  connection {
    type    = "ssh"
    host    = "${element(azurerm_public_ip.controllers.*.ip_address, count.index)}"
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_ca_cert}"
    destination = "$HOME/etcd-client-ca.crt"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_client_cert}"
    destination = "$HOME/etcd-client.crt"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_client_key}"
    destination = "$HOME/etcd-client.key"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_server_cert}"
    destination = "$HOME/etcd-server.crt"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_server_key}"
    destination = "$HOME/etcd-server.key"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_peer_cert}"
    destination = "$HOME/etcd-peer.crt"
  }

  provisioner "file" {
    content     = "${module.bootkube.etcd_peer_key}"
    destination = "$HOME/etcd-peer.key"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /etc/ssl/etcd/etcd",
      "sudo mv etcd-client* /etc/ssl/etcd/",
      "sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/server-ca.crt",
      "sudo mv etcd-server.crt /etc/ssl/etcd/etcd/server.crt",
      "sudo mv etcd-server.key /etc/ssl/etcd/etcd/server.key",
      "sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
      "sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
      "sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
      "sudo chown -R etcd:etcd /etc/ssl/etcd",
      "sudo chmod -R 500 /etc/ssl/etcd",
    ]
  }
}

# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
  depends_on = [
    "module.bootkube",
    "module.workers",
    "azurerm_dns_a_record.apiserver",
    "null_resource.copy-controller-secrets",
  ]

  connection {
    type    = "ssh"
    host    = "${element(azurerm_public_ip.controllers.*.ip_address, 0)}"
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    source      = "${var.asset_dir}"
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/assets /opt/bootkube",
      "sudo systemctl start bootkube",
    ]
  }
}

azure/container-linux/kubernetes/variables.tf (new file, 123 lines)
@@ -0,0 +1,123 @@
variable "cluster_name" {
  type        = "string"
  description = "Unique cluster name (prepended to dns_zone)"
}

# Azure

variable "region" {
  type        = "string"
  description = "Azure Region (e.g. centralus, see `az account list-locations --output table`)"
}

variable "dns_zone" {
  type        = "string"
  description = "Azure DNS Zone (e.g. azure.example.com)"
}

variable "dns_zone_group" {
  type        = "string"
  description = "Resource group where the Azure DNS Zone resides (e.g. global)"
}

# instances

variable "controller_count" {
  type        = "string"
  default     = "1"
  description = "Number of controllers (i.e. masters)"
}

variable "worker_count" {
  type        = "string"
  default     = "1"
  description = "Number of workers"
}

variable "controller_type" {
  type        = "string"
  default     = "Standard_DS1_v2"
  description = "Machine type for controllers (see `az vm list-skus --location centralus`)"
}

variable "worker_type" {
  type        = "string"
  default     = "Standard_F1"
  description = "Machine type for workers (see `az vm list-skus --location centralus`)"
}

variable "os_image" {
  type        = "string"
  default     = "coreos-stable"
  description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha)"
}

variable "disk_size" {
  type        = "string"
  default     = "40"
  description = "Size of the disk in GB"
}

variable "worker_priority" {
  type        = "string"
  default     = "Regular"
  description = "Set worker priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time."
}

variable "controller_clc_snippets" {
  type        = "list"
  description = "Controller Container Linux Config snippets"
  default     = []
}

variable "worker_clc_snippets" {
  type        = "list"
  description = "Worker Container Linux Config snippets"
  default     = []
}

# configuration

variable "ssh_authorized_key" {
  type        = "string"
  description = "SSH public key for user 'core'"
}

variable "asset_dir" {
  description = "Path to a directory where generated assets should be placed (contains secrets)"
  type        = "string"
}

variable "host_cidr" {
  description = "CIDR IPv4 range to assign to instances"
  type        = "string"
  default     = "10.0.0.0/16"
}

variable "pod_cidr" {
  description = "CIDR IPv4 range to assign Kubernetes pods"
  type        = "string"
  default     = "10.2.0.0/16"
}

variable "service_cidr" {
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

  type    = "string"
  default = "10.3.0.0/16"
}

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
  type        = "string"
  default     = "cluster.local"
}

variable "enable_reporting" {
  type        = "string"
  description = "Enable usage or analytics reporting to upstreams (Calico)"
  default     = "false"
}

azure/container-linux/kubernetes/workers.tf (new file, 23 lines)
@@ -0,0 +1,23 @@
module "workers" {
  source = "workers"
  name   = "${var.cluster_name}"

  # Azure
  resource_group_name     = "${azurerm_resource_group.cluster.name}"
  region                  = "${azurerm_resource_group.cluster.location}"
  subnet_id               = "${azurerm_subnet.worker.id}"
  security_group_id       = "${azurerm_network_security_group.worker.id}"
  backend_address_pool_id = "${azurerm_lb_backend_address_pool.worker.id}"

  count    = "${var.worker_count}"
  vm_type  = "${var.worker_type}"
  os_image = "${var.os_image}"
  priority = "${var.worker_priority}"

  # configuration
  kubeconfig            = "${module.bootkube.kubeconfig}"
  ssh_authorized_key    = "${var.ssh_authorized_key}"
  service_cidr          = "${var.service_cidr}"
  cluster_domain_suffix = "${var.cluster_domain_suffix}"
  clc_snippets          = "${var.worker_clc_snippets}"
}

azure/container-linux/kubernetes/workers/cl/worker.yaml.tmpl (new file, 122 lines)
@@ -0,0 +1,122 @@
---
systemd:
  units:
    - name: docker.service
      enable: true
    - name: locksmithd.service
      mask: true
    - name: wait-for-dns.service
      enable: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Wants=systemd-resolved.service
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
      enable: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube
        Wants=rpc-statd.service
        [Service]
        EnvironmentFile=/etc/kubernetes/kubelet.env
        Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
          --volume=resolv,kind=host,source=/etc/resolv.conf \
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume var-lib-calico,kind=host,source=/var/lib/calico \
          --mount volume=var-lib-calico,target=/var/lib/calico \
          --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
          --mount volume=opt-cni-bin,target=/opt/cni/bin \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log \
          --insecure-options=image"
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${k8s_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node-role.kubernetes.io/node \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
        Restart=always
        RestartSec=5
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
      enable: true
      contents: |
        [Unit]
        Description=Waiting to delete Kubernetes node on shutdown
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
        ExecStop=/etc/kubernetes/delete-node
        [Install]
        WantedBy=multi-user.target
storage:
  files:
    - path: /etc/kubernetes/kubeconfig
      filesystem: root
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
          KUBELET_IMAGE_TAG=v1.12.3
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/kubernetes/delete-node
      filesystem: root
      mode: 0744
      contents:
        inline: |
          #!/bin/bash
          set -e
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
            --volume config,kind=host,source=/etc/kubernetes \
            --mount volume=config,target=/etc/kubernetes \
            --insecure-options=image \
            docker://k8s.gcr.io/hyperkube:v1.12.3 \
            --net=host \
            --dns=host \
            --exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname | tr '[:upper:]' '[:lower:]')
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "${ssh_authorized_key}"
91
azure/container-linux/kubernetes/workers/variables.tf
Normal file
91
azure/container-linux/kubernetes/workers/variables.tf
Normal file
@ -0,0 +1,91 @@
variable "name" {
type = "string"
description = "Unique name for the worker pool"
}

# Azure

variable "region" {
type = "string"
description = "Must be set to the Azure Region of cluster"
}

variable "resource_group_name" {
type = "string"
description = "Must be set to the resource group name of cluster"
}

variable "subnet_id" {
type = "string"
description = "Must be set to the `worker_subnet_id` output by cluster"
}

variable "security_group_id" {
type = "string"
description = "Must be set to the `worker_security_group_id` output by cluster"
}

variable "backend_address_pool_id" {
type = "string"
description = "Must be set to the `worker_backend_address_pool_id` output by cluster"
}

# instances

variable "count" {
type = "string"
default = "1"
description = "Number of instances"
}

variable "vm_type" {
type = "string"
default = "Standard_F1"
description = "Machine type for instances (see `az vm list-skus --location centralus`)"
}

variable "os_image" {
type = "string"
default = "coreos-stable"
description = "Channel for a Container Linux derivative (coreos-stable, coreos-beta, coreos-alpha)"
}

variable "priority" {
type = "string"
default = "Regular"
description = "Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be evicted at any time."
}

variable "clc_snippets" {
type = "list"
description = "Container Linux Config snippets"
default = []
}

# configuration

variable "kubeconfig" {
type = "string"
description = "Must be set to `kubeconfig` output by cluster"
}

variable "ssh_authorized_key" {
type = "string"
description = "SSH public key for user 'core'"
}

variable "service_cidr" {
description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD

type = "string"
default = "10.3.0.0/16"
}
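# (added illustration) With the default service_cidr:
#   cidrhost("10.3.0.0/16", 1)  = 10.3.0.1  -> kube-apiserver service IP
#   cidrhost("10.3.0.0/16", 10) = 10.3.0.10 -> CoreDNS service IP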

variable "cluster_domain_suffix" {
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
type = "string"
default = "cluster.local"
}
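For orientation (not part of the diff itself), a worker pool built from this module might be instantiated roughly as below. The module path, `ref`, and the cluster module outputs (`module.azure-ramius.*`) are assumptions for illustration; adjust them to your setup.

```tf
module "ramius-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes/workers?ref=v1.12.3"

  # Azure (values assumed to come from the cluster module's outputs)
  region                  = "${module.azure-ramius.region}"
  resource_group_name     = "${module.azure-ramius.resource_group_name}"
  subnet_id               = "${module.azure-ramius.subnet_id}"
  security_group_id       = "${module.azure-ramius.security_group_id}"
  backend_address_pool_id = "${module.azure-ramius.backend_address_pool_id}"

  # configuration
  name               = "ramius-spot"
  kubeconfig         = "${module.azure-ramius.kubeconfig}"
  ssh_authorized_key = "${var.ssh_authorized_key}"

  # optional: cheaper, evictable surplus capacity
  count    = 2
  priority = "Low"
}
```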
114
azure/container-linux/kubernetes/workers/workers.tf
Normal file
@@ -0,0 +1,114 @@
locals {
# Channel for a Container Linux derivative
# coreos-stable -> Container Linux Stable
channel = "${element(split("-", var.os_image), 1)}"
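# e.g. os_image "coreos-beta" yields channel "beta"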
}

# Workers scale set
resource "azurerm_virtual_machine_scale_set" "workers" {
resource_group_name = "${var.resource_group_name}"

name = "${var.name}-workers"
location = "${var.region}"
single_placement_group = false

sku {
name = "${var.vm_type}"
tier = "standard"
capacity = "${var.count}"
}

# boot
storage_profile_image_reference {
publisher = "CoreOS"
offer = "CoreOS"
sku = "${local.channel}"
version = "latest"
}

# storage
storage_profile_os_disk {
create_option = "FromImage"
caching = "ReadWrite"
os_type = "linux"
managed_disk_type = "Standard_LRS"
}

os_profile {
computer_name_prefix = "${var.name}-worker-"
admin_username = "core"
custom_data = "${data.ct_config.worker-ignition.rendered}"
}

# Azure mandates setting an ssh_key, even though Ignition custom_data handles it too
os_profile_linux_config {
disable_password_authentication = true

ssh_keys {
path = "/home/core/.ssh/authorized_keys"
key_data = "${var.ssh_authorized_key}"
}
}

# network
network_profile {
name = "nic0"
primary = true
network_security_group_id = "${var.security_group_id}"

ip_configuration {
name = "ip0"
primary = true
subnet_id = "${var.subnet_id}"

# backend address pool to which the NIC should be added
load_balancer_backend_address_pool_ids = ["${var.backend_address_pool_id}"]
}
}

# lifecycle
upgrade_policy_mode = "Manual"
priority = "${var.priority}"
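# (added note, assumption) eviction_policy applies to Low priority instances; "Delete" removes evicted VMs rather than deallocating them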
eviction_policy = "Delete"
}

# Scale up or down to maintain desired number, tolerating deallocations.
resource "azurerm_autoscale_setting" "workers" {
resource_group_name = "${var.resource_group_name}"

name = "${var.name}-maintain-desired"
location = "${var.region}"

# autoscale
enabled = true
target_resource_id = "${azurerm_virtual_machine_scale_set.workers.id}"

profile {
name = "default"

capacity {
minimum = "${var.count}"
default = "${var.count}"
maximum = "${var.count}"
}
}
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
content = "${data.template_file.worker-config.rendered}"
pretty_print = false
snippets = ["${var.clc_snippets}"]
}

# Worker Container Linux configs
data "template_file" "worker-config" {
template = "${file("${path.module}/cl/worker.yaml.tmpl")}"

vars = {
kubeconfig = "${indent(10, var.kubeconfig)}"
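# (added note) indent(10, ...) matches the 10-space YAML nesting where the kubeconfig is inlined in cl/worker.yaml.tmpl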
ssh_authorized_key = "${var.ssh_authorized_key}"
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
}
}
0
azure/ignore/.gitkeep
Normal file
@@ -11,9 +11,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
@@ -12,4 +12,5 @@ module "bootkube" {
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
enable_reporting = "${var.enable_reporting}"
}

@@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_IMAGE_TAG=v3.3.10"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${domain_name}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${domain_name}:2380"
@@ -70,6 +70,10 @@ systemd:
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume iscsiconf,kind=host,source=/etc/iscsi/ \
--mount volume=iscsiconf,target=/etc/iscsi/ \
--volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -97,6 +101,7 @@ systemd:
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -123,7 +128,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.2
KUBELET_IMAGE_TAG=v1.12.3
- path: /etc/hostname
filesystem: root
mode: 0644
@@ -149,22 +154,17 @@ storage:
set -e
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$BOOTKUBE_ASSETS \
--volume assets,kind=host,source=/opt/bootkube/assets \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$$RKT_OPTS \
$${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
quay.io/coreos/bootkube:v0.14.0 \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
networkd:
${networkd_content}
passwd:
users:
- name: core

@@ -45,6 +45,10 @@ systemd:
--mount volume=opt-cni-bin,target=/opt/cni/bin \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume iscsiconf,kind=host,source=/etc/iscsi/ \
--mount volume=iscsiconf,target=/etc/iscsi/ \
--volume iscsiadm,kind=host,source=/usr/sbin/iscsiadm \
--mount volume=iscsiadm,target=/sbin/iscsiadm \
--insecure-options=image"
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -69,6 +73,7 @@ systemd:
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
Restart=always
@@ -84,7 +89,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.2
KUBELET_IMAGE_TAG=v1.12.3
- path: /etc/hostname
filesystem: root
mode: 0644
@@ -96,8 +101,6 @@ storage:
contents:
inline: |
fs.inotify.max_user_watches=16184
networkd:
${networkd_content}
passwd:
users:
- name: core

@@ -3,7 +3,7 @@ resource "matchbox_group" "install" {

name = "${format("install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"

profile = "${local.flavor == "flatcar" ? element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"
profile = "${local.flavor == "flatcar" ? var.cached_install == "true" ? element(matchbox_profile.cached-flatcar-linux-install.*.name, count.index) : element(matchbox_profile.flatcar-install.*.name, count.index) : var.cached_install == "true" ? element(matchbox_profile.cached-container-linux-install.*.name, count.index) : element(matchbox_profile.container-linux-install.*.name, count.index)}"

selector {
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
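# (added note) The nested conditional above picks one of four install profiles:
#   flavor "flatcar" + cached_install "true"  -> cached-flatcar-linux-install
#   flavor "flatcar" + cached_install "false" -> flatcar-install
#   other flavors    + cached_install "true"  -> cached-container-linux-install
#   other flavors    + cached_install "false" -> container-linux-install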

@@ -49,7 +49,7 @@ data "template_file" "container-linux-install-configs" {
}

// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets.
// Note: Admin must have downloaded os_version into matchbox assets/coreos.
resource "matchbox_profile" "cached-container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-cached-container-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"
@@ -87,7 +87,7 @@ data "template_file" "cached-container-linux-install-configs" {
ssh_authorized_key = "${var.ssh_authorized_key}"

# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/${local.flavor}"
}
}

@@ -114,22 +114,46 @@ resource "matchbox_profile" "flatcar-install" {
container_linux_config = "${element(data.template_file.container-linux-install-configs.*.rendered, count.index)}"
}

// Flatcar Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded os_version into matchbox assets/flatcar.
resource "matchbox_profile" "cached-flatcar-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("%s-cached-flatcar-linux-install-%s", var.cluster_name, element(concat(var.controller_names, var.worker_names), count.index))}"

kernel = "/assets/flatcar/${var.os_version}/flatcar_production_pxe.vmlinuz"

initrd = [
"/assets/flatcar/${var.os_version}/flatcar_production_pxe_image.cpio.gz",
]

args = [
"initrd=flatcar_production_pxe_image.cpio.gz",
"flatcar.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"flatcar.first_boot=yes",
"console=tty0",
"console=ttyS0",
"${var.kernel_args}",
]

container_linux_config = "${element(data.template_file.cached-container-linux-install-configs.*.rendered, count.index)}"
}

// Kubernetes Controller profiles
resource "matchbox_profile" "controllers" {
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
count = "${length(var.controller_names)}"
name = "${format("%s-controller-%s", var.cluster_name, element(var.controller_names, count.index))}"
raw_ignition = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
}

data "ct_config" "controller-ignitions" {
count = "${length(var.controller_names)}"
content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
count = "${length(var.controller_names)}"
content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
pretty_print = false

# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.controller_names, count.index)]}"]
}

data "template_file" "controller-configs" {
count = "${length(var.controller_names)}"

@@ -142,24 +166,21 @@ data "template_file" "controller-configs" {
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
ssh_authorized_key = "${var.ssh_authorized_key}"

# Terraform evaluates both sides regardless and element cannot be used on 0 length lists
networkd_content = "${length(var.controller_networkds) == 0 ? "" : element(concat(var.controller_networkds, list("")), count.index)}"
}
}

// Kubernetes Worker profiles
resource "matchbox_profile" "workers" {
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
count = "${length(var.worker_names)}"
name = "${format("%s-worker-%s", var.cluster_name, element(var.worker_names, count.index))}"
raw_ignition = "${element(data.ct_config.worker-ignitions.*.rendered, count.index)}"
}

data "ct_config" "worker-ignitions" {
count = "${length(var.worker_names)}"
content = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
count = "${length(var.worker_names)}"
content = "${element(data.template_file.worker-configs.*.rendered, count.index)}"
pretty_print = false

# Must use direct lookup. Cannot use lookup(map, key) since it only works for flat maps
snippets = ["${local.clc_map[element(var.worker_names, count.index)]}"]
}
@@ -174,9 +195,6 @@ data "template_file" "worker-configs" {
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
ssh_authorized_key = "${var.ssh_authorized_key}"

# Terraform evaluates both sides regardless and element cannot be used on 0 length lists
networkd_content = "${length(var.worker_networkds) == 0 ? "" : element(concat(var.worker_networkds, list("")), count.index)}"
}
}

@@ -185,12 +203,13 @@ locals {
# Default Container Linux config snippets map every node name to list("\n") so
# all lookups succeed
clc_defaults = "${zipmap(concat(var.controller_names, var.worker_names), chunklist(data.template_file.clc-default-snippets.*.rendered, 1))}"

# Union of the default and user specific snippets, later overrides prior.
clc_map = "${merge(local.clc_defaults, var.clc_snippets)}"
}

// Horrible hack to generate a Terraform list of node count length
data "template_file" "clc-default-snippets" {
count = "${length(var.controller_names) + length(var.worker_names)}"
count = "${length(var.controller_names) + length(var.worker_names)}"
template = "\n"
}

@@ -24,39 +24,39 @@ variable "os_version" {
# Terraform's crude "type system" does not properly support lists of maps so we do this.

variable "controller_names" {
type = "list"
type = "list"
description = "Ordered list of controller names (e.g. [node1])"
}

variable "controller_macs" {
type = "list"
type = "list"
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}

variable "controller_domains" {
type = "list"
type = "list"
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}

variable "worker_names" {
type = "list"
type = "list"
description = "Ordered list of worker names (e.g. [node2, node3])"
}

variable "worker_macs" {
type = "list"
type = "list"
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}

variable "worker_domains" {
type = "list"
type = "list"
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
}

variable "clc_snippets" {
type = "map"
type = "map"
description = "Map from machine names to lists of Container Linux Config snippets"
default = {}
default = {}
}

# configuration
@@ -142,16 +142,8 @@ variable "kernel_args" {
default = []
}

# unofficial, undocumented, unsupported, temporary

variable "controller_networkds" {
type = "list"
description = "Controller Container Linux config networkd section"
default = []
}

variable "worker_networkds" {
type = "list"
description = "Worker Container Linux config networkd section"
default = []
variable "enable_reporting" {
type = "string"
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = "false"
}
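# (added note) Opt in by setting enable_reporting = "true" on the cluster module;
# the default "false" keeps Calico usage reporting disabled.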

@@ -11,8 +11,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
@@ -11,6 +11,7 @@ module "bootkube" {
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
enable_reporting = "${var.enable_reporting}"

# Fedora
trusted_certs_dir = "/etc/pki/tls/certs"

@@ -51,6 +51,7 @@ write_files:
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
- path: /etc/systemd/system/kubelet.path
@@ -83,9 +84,9 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]

@@ -29,6 +29,7 @@ write_files:
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
- path: /etc/systemd/system/kubelet.path
content: |
@@ -59,7 +60,7 @@ runcmd:
- [systemctl, daemon-reload]
- [systemctl, restart, NetworkManager]
- [hostnamectl, set-hostname, ${domain_name}]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

@@ -25,32 +25,32 @@ EOD
# Terraform's crude "type system" does not properly support lists of maps so we do this.

variable "controller_names" {
type = "list"
type = "list"
description = "Ordered list of controller names (e.g. [node1])"
}

variable "controller_macs" {
type = "list"
type = "list"
description = "Ordered list of controller identifying MAC addresses (e.g. [52:54:00:a1:9c:ae])"
}

variable "controller_domains" {
type = "list"
type = "list"
description = "Ordered list of controller FQDNs (e.g. [node1.example.com])"
}

variable "worker_names" {
type = "list"
type = "list"
description = "Ordered list of worker names (e.g. [node2, node3])"
}

variable "worker_macs" {
type = "list"
type = "list"
description = "Ordered list of worker identifying MAC addresses (e.g. [52:54:00:b2:2f:86, 52:54:00:c3:61:77])"
}

variable "worker_domains" {
type = "list"
type = "list"
description = "Ordered list of worker FQDNs (e.g. [node2.example.com, node3.example.com])"
}

@@ -110,3 +110,9 @@ variable "kernel_args" {
type = "list"
default = []
}

variable "enable_reporting" {
type = "string"
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = "false"
}

@@ -11,10 +11,11 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,4 +11,5 @@ module "bootkube" {
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
enable_reporting = "${var.enable_reporting}"
}

@@ -7,7 +7,7 @@ systemd:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.3.9"
Environment="ETCD_IMAGE_TAG=v3.3.10"
Environment="ETCD_NAME=${etcd_name}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380"
@@ -56,12 +56,9 @@ systemd:
contents: |
[Unit]
Description=Kubelet via Hyperkube
Requires=coreos-metadata.service
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -93,13 +90,13 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
@@ -128,7 +125,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.2
KUBELET_IMAGE_TAG=v1.12.3
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@@ -148,17 +145,14 @@ storage:
set -e
# Move experimental manifests
[ -n "$(ls /opt/bootkube/assets/manifests-*/* 2>/dev/null)" ] && mv /opt/bootkube/assets/manifests-*/* /opt/bootkube/assets/manifests && rm -rf /opt/bootkube/assets/manifests-*
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.13.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
--volume assets,kind=host,source=/opt/bootkube/assets \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$${RKT_OPTS} \
$${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
quay.io/coreos/bootkube:v0.14.0 \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"

@@ -31,12 +31,9 @@ systemd:
contents: |
[Unit]
Description=Kubelet via Hyperkube
Requires=coreos-metadata.service
After=coreos-metadata.service
Wants=rpc-statd.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
@@ -66,12 +63,12 @@ systemd:
--cluster_domain=${cluster_domain_suffix} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--exit-on-lock-contention \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--kubeconfig=/etc/kubernetes/kubeconfig \
--lock-file=/var/run/lock/kubelet.lock \
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
Restart=always
@@ -98,7 +95,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=docker://k8s.gcr.io/hyperkube
KUBELET_IMAGE_TAG=v1.11.2
KUBELET_IMAGE_TAG=v1.12.3
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@@ -116,7 +113,7 @@ storage:
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
--insecure-options=image \
docker://k8s.gcr.io/hyperkube:v1.11.2 \
docker://k8s.gcr.io/hyperkube:v1.12.3 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -44,12 +44,18 @@ resource "digitalocean_droplet" "controllers" {
ipv6 = true
private_networking = true

user_data = "${element(data.ct_config.controller_ign.*.rendered, count.index)}"
user_data = "${element(data.ct_config.controller-ignitions.*.rendered, count.index)}"
ssh_keys = ["${var.ssh_fingerprints}"]

tags = [
"${digitalocean_tag.controllers.id}",
]

lifecycle {
ignore_changes = [
"user_data",
]
}
}

# Tag to label controllers
@@ -57,8 +63,16 @@ resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}

# Controller Container Linux Config
data "template_file" "controller_config" {
# Controller Ignition configs
data "ct_config" "controller-ignitions" {
count = "${var.controller_count}"
content = "${element(data.template_file.controller-configs.*.rendered, count.index)}"
pretty_print = false
snippets = ["${var.controller_clc_snippets}"]
}

# Controller Container Linux configs
data "template_file" "controller-configs" {
count = "${var.controller_count}"

template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
@@ -85,11 +99,3 @@ data "template_file" "etcds" {
dns_zone = "${var.dns_zone}"
}
}

data "ct_config" "controller_ign" {
count = "${var.controller_count}"
content = "${element(data.template_file.controller_config.*.rendered, count.index)}"
pretty_print = false

snippets = ["${var.controller_clc_snippets}"]
}

@@ -3,7 +3,8 @@ output "controllers_dns" {
}

output "workers_dns" {
value = "${digitalocean_record.workers.0.fqdn}"
# Multiple A and AAAA records with the same FQDN
value = "${digitalocean_record.workers-record-a.0.fqdn}"
}

output "controllers_ipv4" {

@@ -5,7 +5,7 @@ terraform {
}

provider "digitalocean" {
version = "~> 0.1.2"
version = "~> 1.0"
}

provider "local" {

@@ -92,3 +92,9 @@ variable "cluster_domain_suffix" {
type = "string"
default = "cluster.local"
}

variable "enable_reporting" {
type = "string"
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = "false"
}

@@ -1,5 +1,5 @@
# Worker DNS records
resource "digitalocean_record" "workers" {
resource "digitalocean_record" "workers-record-a" {
count = "${var.worker_count}"

# DNS zone where record should be created
@@ -11,6 +11,18 @@ resource "digitalocean_record" "workers" {
value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
}

resource "digitalocean_record" "workers-record-aaaa" {
count = "${var.worker_count}"

# DNS zone where record should be created
domain = "${var.dns_zone}"

name = "${var.cluster_name}-workers"
type = "AAAA"
ttl = 300
value = "${element(digitalocean_droplet.workers.*.ipv6_address, count.index)}"
}

# Worker droplet instances
resource "digitalocean_droplet" "workers" {
count = "${var.worker_count}"
@@ -25,12 +37,16 @@ resource "digitalocean_droplet" "workers" {
ipv6 = true
private_networking = true

user_data = "${data.ct_config.worker_ign.rendered}"
user_data = "${data.ct_config.worker-ignition.rendered}"
ssh_keys = ["${var.ssh_fingerprints}"]

tags = [
"${digitalocean_tag.workers.id}",
]

lifecycle {
create_before_destroy = true
}
}

# Tag to label workers
@@ -38,8 +54,15 @@ resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}

# Worker Container Linux Config
data "template_file" "worker_config" {
# Worker Ignition config
data "ct_config" "worker-ignition" {
content = "${data.template_file.worker-config.rendered}"
pretty_print = false
snippets = ["${var.worker_clc_snippets}"]
}

# Worker Container Linux config
data "template_file" "worker-config" {
template = "${file("${path.module}/cl/worker.yaml.tmpl")}"

vars = {
@@ -47,9 +70,3 @@ data "template_file" "worker_config" {
cluster_domain_suffix = "${var.cluster_domain_suffix}"
}
}

data "ct_config" "worker_ign" {
content = "${data.template_file.worker_config.rendered}"
pretty_print = false
snippets = ["${var.worker_clc_snippets}"]
}

@@ -11,9 +11,9 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.11.2 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Kubernetes v1.12.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=70c28399703cb4ec8930394682400d90d733e5a5"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=4021467b7f280ceb54320333690e8574a3bd8d84"

cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
@@ -11,6 +11,7 @@ module "bootkube" {
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
cluster_domain_suffix = "${var.cluster_domain_suffix}"
enable_reporting = "${var.enable_reporting}"

# Fedora
trusted_certs_dir = "/etc/pki/tls/certs"

@@ -19,24 +19,9 @@ write_files:
ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
ETCD_PEER_CLIENT_CERT_AUTH=true
- path: /etc/systemd/system/cloud-metadata.service
content: |
[Unit]
Description=Cloud metadata agent
[Service]
Type=oneshot
Environment=OUTPUT=/run/metadata/cloud
ExecStart=/usr/bin/mkdir -p /run/metadata
ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
--url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
--retry 10)" > $${OUTPUT}'
[Install]
WantedBy=multi-user.target
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
content: |
[Unit]
Requires=cloud-metadata.service
After=cloud-metadata.service
Wants=rpc-statd.service
[Service]
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -65,6 +50,7 @@ write_files:
--node-labels=node-role.kubernetes.io/master \
--node-labels=node-role.kubernetes.io/controller="true" \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
- path: /etc/systemd/system/kubelet.path
@@ -89,11 +75,10 @@ bootcmd:
- [modprobe, ip_vs]
runcmd:
- [systemctl, daemon-reload]
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.9"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.13.0"
- "atomic install --system --name=etcd quay.io/poseidon/etcd:v3.3.10"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
- "atomic install --system --name=bootkube quay.io/poseidon/bootkube:v0.14.0"
- [systemctl, start, --no-block, etcd.service]
- [systemctl, enable, cloud-metadata.service]
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

@@ -1,23 +1,8 @@
#cloud-config
write_files:
- path: /etc/systemd/system/cloud-metadata.service
content: |
[Unit]
Description=Cloud metadata agent
[Service]
Type=oneshot
Environment=OUTPUT=/run/metadata/cloud
ExecStart=/usr/bin/mkdir -p /run/metadata
ExecStart=/usr/bin/bash -c 'echo "HOSTNAME_OVERRIDE=$(curl\
--url http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address\
--retry 10)" > $${OUTPUT}'
[Install]
WantedBy=multi-user.target
- path: /etc/systemd/system/kubelet.service.d/10-typhoon.conf
content: |
[Unit]
Requires=cloud-metadata.service
After=cloud-metadata.service
Wants=rpc-statd.service
[Service]
ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -43,6 +28,7 @@ write_files:
--network-plugin=cni \
--node-labels=node-role.kubernetes.io/node \
--pod-manifest-path=/etc/kubernetes/manifests \
--read-only-port=0 \
--volume-plugin-dir=/var/lib/kubelet/volumeplugins"
- path: /etc/systemd/system/kubelet.path
content: |
@@ -65,8 +51,7 @@ bootcmd:
- [modprobe, ip_vs]
runcmd:
- [systemctl, daemon-reload]
- [systemctl, enable, cloud-metadata.service]
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.11.2"
- "atomic install --system --name=kubelet quay.io/poseidon/kubelet:v1.12.3"
- [systemctl, enable, kubelet.path]
- [systemctl, start, --no-block, kubelet.path]
users:

@@ -50,6 +50,12 @@ resource "digitalocean_droplet" "controllers" {
tags = [
"${digitalocean_tag.controllers.id}",
]

lifecycle {
ignore_changes = [
"user_data",
]
}
}

# Tag to label controllers

@@ -3,7 +3,8 @@ output "controllers_dns" {
}

output "workers_dns" {
value = "${digitalocean_record.workers.0.fqdn}"
# Multiple A and AAAA records with the same FQDN
value = "${digitalocean_record.workers-record-a.0.fqdn}"
}

output "controllers_ipv4" {

@@ -5,7 +5,7 @@ terraform {
}

provider "digitalocean" {
version = "~> 0.1.2"
version = "~> 1.0"
}

provider "local" {

@@ -85,3 +85,9 @@ variable "cluster_domain_suffix" {
type = "string"
default = "cluster.local"
}

variable "enable_reporting" {
type = "string"
description = "Enable usage or analytics reporting to upstreams (Calico)"
default = "false"
}

@@ -1,5 +1,5 @@
# Worker DNS records
resource "digitalocean_record" "workers" {
resource "digitalocean_record" "workers-record-a" {
count = "${var.worker_count}"

# DNS zone where record should be created
@@ -11,6 +11,18 @@ resource "digitalocean_record" "workers" {
value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
}

resource "digitalocean_record" "workers-record-aaaa" {
count = "${var.worker_count}"

# DNS zone where record should be created
domain = "${var.dns_zone}"

name = "${var.cluster_name}-workers"
type = "AAAA"
ttl = 300
value = "${element(digitalocean_droplet.workers.*.ipv6_address, count.index)}"
}

# Worker droplet instances
resource "digitalocean_droplet" "workers" {
count = "${var.worker_count}"
@@ -31,6 +43,10 @@ resource "digitalocean_droplet" "workers" {
tags = [
"${digitalocean_tag.workers.id}",
]

lifecycle {
create_before_destroy = true
}
}

# Tag to label workers

@@ -4,7 +4,7 @@ Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP tr

## AWS

On AWS, a network load balancer (NLB) distributes traffic across a target group of worker nodes running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a health Ingress controller receive traffic.
On AWS, a network load balancer (NLB) distributes TCP traffic across two target groups (port 80 and 443) of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health checks ensure only workers with a healthy Ingress controller receive traffic.

Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.

@@ -12,15 +12,15 @@ Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, de
kubectl apply -R -f addons/nginx-ingress/aws
```

For each application, add a DNS CNAME resolving to the ELB's DNS record.
For each application, add a DNS CNAME resolving to the NLB's DNS record.

```
app1.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
aap2.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
app2.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
app3.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
```

Find the ELB's DNS name through the console or use the Typhoon module's output `ingress_dns_name`. For example, you might use Terraform to manage a Google Cloud DNS record:
Find the NLB's DNS name through the console or use the Typhoon module's output `ingress_dns_name`. For example, you might use Terraform to manage a Google Cloud DNS record:

```tf
resource "google_dns_record_set" "some-application" {
@@ -35,52 +35,25 @@ resource "google_dns_record_set" "some-application" {
}
```

## Digital Ocean
## Azure

On Digital Ocean, a DNS A record (e.g. `nemo-workers.example.com`) resolves to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.

Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, default backend, and namespace.

```
kubectl apply -R -f addons/nginx-ingress/digital-ocean
```

For each application, add a CNAME record resolving to the worker(s) DNS record. Use the Typhoon module's output `workers_dns` to find the worker DNS value. For example, you might use Terraform to manage a Google Cloud DNS record:

```tf
resource "google_dns_record_set" "some-application" {
# DNS zone name
managed_zone = "example-zone"

# DNS record
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.digital-ocean-nemo.workers_dns}."]
}
```

[^1]: Digital Ocean does offers load balancers. We've opted not to use them to keep the Digital Ocean setup simple and cheap for developers.

## Google Cloud

On Google Cloud, a network load balancer distributes traffic across worker nodes (i.e. a target pool of backends) running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure the target pool only includes worker nodes with a healthy Nginx Ingress controller.
On Azure, a load balancer distributes traffic across a backend address pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.

Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.

```
kubectl apply -R -f addons/nginx-ingress/google-cloud
kubectl apply -R -f addons/nginx-ingress/azure
```

For each application, add a DNS record resolving to the network load balancer's IPv4 address.
For each application, add a DNS record resolving to the load balancer's IPv4 address.

```
app1.example.com -> 11.22.33.44
aap2.example.com -> 11.22.33.44
app2.example.com -> 11.22.33.44
app3.example.com -> 11.22.33.44
```

Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:
Find the load balancer's IPv4 address with the Azure console or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:

```tf
resource "google_dns_record_set" "some-application" {
@@ -91,7 +64,7 @@ resource "google_dns_record_set" "some-application" {
name = "app.example.com."
type = "A"
ttl = 300
rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
rrdatas = ["${module.azure-ramius.ingress_static_ipv4}"]
}
```

@@ -125,3 +98,77 @@ resource "google_dns_record_set" "some-application" {
rrdatas = ["SOME-WAN-IP"]
}
```

## Digital Ocean

On Digital Ocean, DNS A and AAAA records (e.g. FQDN `nemo-workers.example.com`) resolve to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.

Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, default backend, and namespace.

```
kubectl apply -R -f addons/nginx-ingress/digital-ocean
```

For each application, add a CNAME record resolving to the worker(s) DNS record. Use the Typhoon module's output `workers_dns` to find the worker DNS value. For example, you might use Terraform to manage a Google Cloud DNS record:

```tf
resource "google_dns_record_set" "some-application" {
# DNS zone name
managed_zone = "example-zone"

# DNS record
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.digital-ocean-nemo.workers_dns}."]
}
```

!!! note
    Hosting IPv6 apps is possible, but requires editing the nginx-ingress addon to use `hostNetwork: true`.

[^1]: Digital Ocean does offer load balancers. We've opted not to use them to keep the Digital Ocean setup simple and cheap for developers.

## Google Cloud

On Google Cloud, a TCP Proxy load balancer distributes IPv4 and IPv6 TCP traffic across a backend service of worker nodes running an Ingress controller deployment. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.

Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.

```
kubectl apply -R -f addons/nginx-ingress/google-cloud
```

For each application, add DNS A records resolving to the load balancer's IPv4 address and DNS AAAA records resolving to the load balancer's IPv6 address.

```
app1.example.com -> 11.22.33.44
app2.example.com -> 11.22.33.44
app3.example.com -> 11.22.33.44
```

Find the IPv4 address with `gcloud compute addresses list` or use the Typhoon module's outputs `ingress_static_ipv4` and `ingress_static_ipv6`. For example, you might use Terraform to manage a Google Cloud DNS record:

```tf
resource "google_dns_record_set" "app-record-a" {
# DNS zone name
managed_zone = "example-zone"

# DNS record
name = "app.example.com."
type = "A"
ttl = 300
rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
}

resource "google_dns_record_set" "app-record-aaaa" {
# DNS zone name
managed_zone = "example-zone"

# DNS record
name = "app.example.com."
type = "AAAA"
ttl = 300
rrdatas = ["${module.google-cloud-yavin.ingress_static_ipv6}"]
}
```

@@ -47,4 +47,4 @@ Visit [127.0.0.1:9090](http://127.0.0.1:9090) to query [expressions](http://127.
<br/>


Use [Grafana](/addons/grafana.md) to view or build dashboards that use Prometheus as the datasource.
Use [Grafana](/addons/grafana/) to view or build dashboards that use Prometheus as the datasource.