Add support for Fedora CoreOS on DigitalOcean
* Add `digital-ocean/fedora-coreos/kubernetes` module
* DigitalOcean custom uploaded images do not permit droplet IPv6 networking
This commit is contained in:
parent 73af2f3b7c
commit 80538e2953
CHANGES.md (11 changed lines)
```diff
@@ -11,14 +11,21 @@ Notable changes between versions.
 * Change `kube-proxy` and `calico` or `flannel` to tolerate specific taints ([#682](https://github.com/poseidon/typhoon/pull/682))
   * Tolerate master and not-ready taints, rather than tolerating all taints
 * Update flannel from v0.11.0 to v0.12.0 ([#690](https://github.com/poseidon/typhoon/pull/690))
-* Fix bootstrap when `networking` mode `flannel` (non-default) is chosen ([#689](https://github.com/poseidon/typhoon/pull/689))
-  * Regressed in v1.18.0 changes for Calico ([#675](https://github.com/poseidon/typhoon/pull/675))
+* Rename Container Linux `controller_clc_snippets` to `controller_snippets` for consistency ([#688](https://github.com/poseidon/typhoon/pull/688))
+* Rename Container Linux `worker_clc_snippets` to `worker_snippets` for consistency
+* Rename Container Linux `clc_snippets` (bare-metal) to `snippets` for consistency
+* Fix bootstrap when `networking` mode `flannel` (non-default) is chosen ([#689](https://github.com/poseidon/typhoon/pull/689))
+  * Regressed in v1.18.0 changes for Calico ([#675](https://github.com/poseidon/typhoon/pull/675))
 
 #### Azure
 
 * Fix Azure worker UDP outbound connections ([#691](https://github.com/poseidon/typhoon/pull/691))
 * Fix Azure worker clock sync timeouts
 
+#### DigitalOcean
+
+* Add support for Fedora CoreOS ([#699](https://github.com/poseidon/typhoon/pull/699))
+
 #### Addons
 
 * Refresh Prometheus rules/alerts and Grafana dashboards ([#692](https://github.com/poseidon/typhoon/pull/692))
```
```diff
@@ -35,6 +35,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
 |---------------|------------------|------------------|--------|
 | AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](aws/fedora-coreos/kubernetes) | stable |
 | Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | beta |
+| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](digital-ocean/fedora-coreos/kubernetes) | alpha |
 | Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |
 
 Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org/releases/).
```
```diff
@@ -44,14 +45,14 @@ Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org
 | AWS | Flatcar Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
 | Azure | Flatcar Linux | [azure/container-linux/kubernetes](azure/container-linux/kubernetes) | alpha |
 | Bare-Metal | Flatcar Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
+| DigitalOcean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | alpha |
 | Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | alpha |
-| Digital Ocean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | alpha |
 
 ## Documentation
 
 * [Docs](https://typhoon.psdn.io)
 * Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
-* Fedora CoreOS tutorials for [AWS](docs/fedora-coreos/aws.md), [Bare-Metal](docs/fedora-coreos/bare-metal.md), and [Google Cloud](docs/fedora-coreos/google-cloud.md)
+* Fedora CoreOS tutorials for [AWS](docs/fedora-coreos/aws.md), [Bare-Metal](docs/fedora-coreos/bare-metal.md), [DigitalOcean](docs/fedora-coreos/digitalocean.md), and [Google Cloud](docs/fedora-coreos/google-cloud.md)
 * Flatcar Linux tutorials for [AWS](docs/cl/aws.md), [Azure](docs/cl/azure.md), [Bare-Metal](docs/cl/bare-metal.md), [DigitalOcean](docs/cl/digital-ocean.md), and [Google Cloud](docs/cl/google-cloud.md)
 
 ## Usage
```
digital-ocean/fedora-coreos/kubernetes/LICENSE (new file, 23 lines)
The MIT License (MIT)

Copyright (c) 2020 Typhoon Authors
Copyright (c) 2020 Dalton Hubble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
digital-ocean/fedora-coreos/kubernetes/README.md (new file, 23 lines)
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

Typhoon is a minimal and free Kubernetes distribution.

* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Free (freedom and cost) and privacy-respecting
* Practical for labs, datacenters, and clouds

Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/) customization
* Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the Digital Ocean [tutorial](https://typhoon.psdn.io/fedora-coreos/digitalocean/).
digital-ocean/fedora-coreos/kubernetes/bootstrap.tf (new file, 25 lines)
```tf
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1ad53d3b1c1ad75a4ed27f124f772fc5dc025245"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
  etcd_servers = digitalocean_record.etcds.*.fqdn
  asset_dir    = var.asset_dir

  networking = var.networking

  # only effective with Calico networking
  network_encapsulation = "vxlan"
  network_mtu           = "1450"

  pod_cidr              = var.pod_cidr
  service_cidr          = var.service_cidr
  cluster_domain_suffix = var.cluster_domain_suffix
  enable_reporting      = var.enable_reporting
  enable_aggregation    = var.enable_aggregation

  # Fedora CoreOS
  trusted_certs_dir = "/etc/pki/tls/certs"
}
```
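A note on the MTU choice (my gloss, not stated in the commit): DigitalOcean droplet NICs use a 1500-byte MTU and VXLAN encapsulation adds 50 bytes of header, so Calico pod traffic is capped accordingly:

```
# 1500 (droplet NIC MTU) - 50 (VXLAN overhead) = 1450 (network_mtu)
```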
digital-ocean/fedora-coreos/kubernetes/controllers.tf (new file, 100 lines)
```tf
# Controller Instance DNS records
resource "digitalocean_record" "controllers" {
  count = var.controller_count

  # DNS zone where record should be created
  domain = var.dns_zone

  # DNS record (will be prepended to domain)
  name = var.cluster_name
  type = "A"
  ttl  = 300

  # IPv4 addresses of controllers
  value = digitalocean_droplet.controllers.*.ipv4_address[count.index]
}

# Discrete DNS records for each controller's private IPv4 for etcd usage
resource "digitalocean_record" "etcds" {
  count = var.controller_count

  # DNS zone where record should be created
  domain = var.dns_zone

  # DNS record (will be prepended to domain)
  name = "${var.cluster_name}-etcd${count.index}"
  type = "A"
  ttl  = 300

  # private IPv4 address for etcd
  value = digitalocean_droplet.controllers.*.ipv4_address_private[count.index]
}

# Controller droplet instances
resource "digitalocean_droplet" "controllers" {
  count = var.controller_count

  name   = "${var.cluster_name}-controller-${count.index}"
  region = var.region

  image = var.os_image
  size  = var.controller_type

  # network
  # TODO: Only official DigitalOcean images support IPv6
  ipv6               = false
  private_networking = true

  user_data = data.ct_config.controller-ignitions.*.rendered[count.index]
  ssh_keys  = var.ssh_fingerprints

  tags = [
    digitalocean_tag.controllers.id,
  ]

  lifecycle {
    ignore_changes = [user_data]
  }
}

# Tag to label controllers
resource "digitalocean_tag" "controllers" {
  name = "${var.cluster_name}-controller"
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
  count    = var.controller_count
  content  = data.template_file.controller-configs.*.rendered[count.index]
  strict   = true
  snippets = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
  count = var.controller_count

  template = file("${path.module}/fcc/controller.yaml")

  vars = {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  }
}

data "template_file" "etcds" {
  count    = var.controller_count
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  vars = {
    index        = count.index
    cluster_name = var.cluster_name
    dns_zone     = var.dns_zone
  }
}
```
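As an aside (an illustration, not part of the commit): the `template_file` data sources above render `fcc/controller.yaml` (next section) with two levels of escaping. `${...}` placeholders are filled by Terraform at plan time, while `$${...}` escapes to a literal `${...}` that systemd expands at boot from the Afterburn metadata `EnvironmentFile`.

```
# In fcc/controller.yaml:
#   --cluster_dns=${cluster_dns_service_ip}                        <- filled at plan time
#   --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0}  <- escaped, filled at boot
#
# After terraform renders with cluster_dns_service_ip = "10.3.0.10":
#   --cluster_dns=10.3.0.10
#   --hostname-override=${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0}
```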
digital-ocean/fedora-coreos/kubernetes/fcc/controller.yaml (new file, 214 lines)
```yaml
---
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: etcd-member.service
      enabled: true
      contents: |
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/coreos/etcd
        Wants=network-online.target network.target
        After=network-online.target
        [Service]
        # https://github.com/opencontainers/runc/pull/1807
        # Type=notify
        # NotifyAccess=exec
        Type=exec
        Restart=on-failure
        RestartSec=10s
        TimeoutStartSec=0
        LimitNOFILE=40000
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
        #--volume $${NOTIFY_SOCKET}:/run/systemd/notify \
        ExecStart=/usr/bin/podman run --name etcd \
          --env-file /etc/etcd/etcd.env \
          --network host \
          --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
          --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
          quay.io/coreos/etcd:v3.4.7
        ExecStop=/usr/bin/podman stop etcd
        [Install]
        WantedBy=multi-user.target
    - name: docker.service
      enabled: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube (System Container)
        Requires=afterburn.service
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --privileged \
          --pid host \
          --network host \
          --volume /etc/kubernetes:/etc/kubernetes:ro,z \
          --volume /usr/lib/os-release:/etc/os-release:ro \
          --volume /etc/ssl/certs:/etc/ssl/certs:ro \
          --volume /lib/modules:/lib/modules:ro \
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
          --volume /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          --volume /etc/pki/tls/certs:/usr/share/ca-certificates:ro \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          quay.io/poseidon/kubelet:v1.18.1 \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: kubelet.path
      enabled: true
      contents: |
        [Unit]
        Description=Watch for kubeconfig
        [Path]
        PathExists=/etc/kubernetes/kubeconfig
        [Install]
        WantedBy=multi-user.target
    - name: bootstrap.service
      contents: |
        [Unit]
        Description=Kubernetes control plane
        ConditionPathExists=!/opt/bootstrap/bootstrap.done
        [Service]
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        ExecStartPre=-/usr/bin/podman rm bootstrap
        ExecStart=/usr/bin/podman run --name bootstrap \
          --network host \
          --volume /etc/kubernetes/bootstrap-secrets:/etc/kubernetes/secrets:ro,Z \
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.18.1
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
  directories:
    - path: /etc/kubernetes
    - path: /opt/bootstrap
  files:
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -e
          mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
          awk '/#####/ {filename=$2; next} {print > filename}' assets
          mkdir -p /etc/ssl/etcd/etcd
          mkdir -p /etc/kubernetes/bootstrap-secrets
          mv tls/etcd/{peer*,server*} /etc/ssl/etcd/etcd/
          mv tls/etcd/etcd-client* /etc/kubernetes/bootstrap-secrets/
          chown -R etcd:etcd /etc/ssl/etcd
          chmod -R 500 /etc/ssl/etcd
          mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
          mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
          sudo mkdir -p /etc/kubernetes/manifests
          sudo mv static-manifests/* /etc/kubernetes/manifests/
          sudo mkdir -p /opt/bootstrap/assets
          sudo mv manifests /opt/bootstrap/assets/manifests
          sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -e
          export KUBECONFIG=/etc/kubernetes/secrets/kubeconfig
          until kubectl version; do
            echo "Waiting for static pod control plane"
            sleep 5
          done
          until kubectl apply -f /assets/manifests -R; do
            echo "Retry applying manifests"
            sleep 5
          done
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
          [Manager]
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/etcd/etcd.env
      mode: 0644
      contents:
        inline: |
          # TODO: Use a systemd dropin once podman v1.4.5 is avail.
          NOTIFY_SOCKET=/run/systemd/notify
          ETCD_NAME=${etcd_name}
          ETCD_DATA_DIR=/var/lib/etcd
          ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
          ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
          ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
          ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
          ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
          ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
          ETCD_STRICT_RECONFIG_CHECK=true
          ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
          ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
          ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
          ETCD_CLIENT_CERT_AUTH=true
          ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
```
digital-ocean/fedora-coreos/kubernetes/fcc/worker.yaml (new file, 117 lines)
```yaml
---
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: docker.service
      enabled: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
      enabled: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube (System Container)
        Requires=afterburn.service
        After=afterburn.service
        Wants=rpc-statd.service
        [Service]
        EnvironmentFile=/run/metadata/afterburn
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --privileged \
          --pid host \
          --network host \
          --volume /etc/kubernetes:/etc/kubernetes:ro,z \
          --volume /usr/lib/os-release:/etc/os-release:ro \
          --volume /etc/ssl/certs:/etc/ssl/certs:ro \
          --volume /lib/modules:/lib/modules:ro \
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
          --volume /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          --volume /etc/pki/tls/certs:/usr/share/ca-certificates:ro \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          quay.io/poseidon/kubelet:v1.18.1 \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --healthz-port=0 \
          --hostname-override=$${AFTERBURN_DIGITALOCEAN_IPV4_PRIVATE_0} \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: kubelet.path
      enabled: true
      contents: |
        [Unit]
        Description=Watch for kubeconfig
        [Path]
        PathExists=/etc/kubernetes/kubeconfig
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
      enabled: true
      contents: |
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.1 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/kubernetes
  files:
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
          [Manager]
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
```
digital-ocean/fedora-coreos/kubernetes/network.tf (new file, 117 lines)
```tf
resource "digitalocean_firewall" "rules" {
  name = var.cluster_name

  tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]

  # allow ssh, internal flannel, internal node-exporter, internal kubelet
  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol    = "udp"
    port_range  = "4789"
    source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
  }

  # Allow Prometheus to scrape node-exporter
  inbound_rule {
    protocol    = "tcp"
    port_range  = "9100"
    source_tags = [digitalocean_tag.workers.name]
  }

  # Allow Prometheus to scrape kube-proxy
  inbound_rule {
    protocol    = "tcp"
    port_range  = "10249"
    source_tags = [digitalocean_tag.workers.name]
  }

  inbound_rule {
    protocol    = "tcp"
    port_range  = "10250"
    source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
  }

  # allow all outbound traffic
  outbound_rule {
    protocol              = "tcp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "udp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }

  outbound_rule {
    protocol              = "icmp"
    port_range            = "1-65535"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_firewall" "controllers" {
  name = "${var.cluster_name}-controllers"

  tags = ["${var.cluster_name}-controller"]

  # etcd
  inbound_rule {
    protocol    = "tcp"
    port_range  = "2379-2380"
    source_tags = [digitalocean_tag.controllers.name]
  }

  # etcd metrics
  inbound_rule {
    protocol    = "tcp"
    port_range  = "2381"
    source_tags = [digitalocean_tag.workers.name]
  }

  # kube-apiserver
  inbound_rule {
    protocol         = "tcp"
    port_range       = "6443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  # kube-scheduler metrics, kube-controller-manager metrics
  inbound_rule {
    protocol    = "tcp"
    port_range  = "10251-10252"
    source_tags = [digitalocean_tag.workers.name]
  }
}

resource "digitalocean_firewall" "workers" {
  name = "${var.cluster_name}-workers"

  tags = ["${var.cluster_name}-worker"]

  # allow HTTP/HTTPS ingress
  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "10254"
    source_addresses = ["0.0.0.0/0"]
  }
}
```
digital-ocean/fedora-coreos/kubernetes/outputs.tf (new file, 47 lines)
```tf
output "kubeconfig-admin" {
  value = module.bootstrap.kubeconfig-admin
}

output "controllers_dns" {
  value = digitalocean_record.controllers[0].fqdn
}

output "workers_dns" {
  # Multiple A and AAAA records with the same FQDN
  value = digitalocean_record.workers-record-a[0].fqdn
}

output "controllers_ipv4" {
  value = digitalocean_droplet.controllers.*.ipv4_address
}

output "controllers_ipv6" {
  value = digitalocean_droplet.controllers.*.ipv6_address
}

output "workers_ipv4" {
  value = digitalocean_droplet.workers.*.ipv4_address
}

output "workers_ipv6" {
  value = digitalocean_droplet.workers.*.ipv6_address
}

# Outputs for worker pools

output "kubeconfig" {
  value = module.bootstrap.kubeconfig-kubelet
}

# Outputs for custom firewalls

output "controller_tag" {
  description = "Tag applied to controller droplets"
  value       = digitalocean_tag.controllers.name
}

output "worker_tag" {
  description = "Tag applied to worker droplets"
  value       = digitalocean_tag.workers.name
}
```
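The tag outputs are meant for attaching extra firewall rules outside the module. A hypothetical sketch (the `module.nemo` reference and the BGP rule are illustrative, not part of the commit):

```tf
# Hypothetical: allow BGP (TCP 179) to controllers from a private range,
# reusing the tag Typhoon applied to controller droplets.
resource "digitalocean_firewall" "custom-controllers" {
  name = "nemo-custom-controllers"
  tags = [module.nemo.controller_tag]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "179"
    source_addresses = ["10.0.0.0/8"]
  }
}
```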
digital-ocean/fedora-coreos/kubernetes/ssh.tf (new file, 87 lines)
```tf
locals {
  # format assets for distribution
  assets_bundle = [
    # header with the unpack location
    for key, value in module.bootstrap.assets_dist :
    format("##### %s\n%s", key, value)
  ]
}

# Secure copy assets to controllers. Activates kubelet.service
resource "null_resource" "copy-controller-secrets" {
  count = var.controller_count

  depends_on = [
    module.bootstrap,
    digitalocean_firewall.rules
  ]

  connection {
    type    = "ssh"
    host    = digitalocean_droplet.controllers.*.ipv4_address[count.index]
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
  }

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
      "sudo /opt/bootstrap/layout",
    ]
  }
}

# Secure copy kubeconfig to all workers. Activates kubelet.service.
resource "null_resource" "copy-worker-secrets" {
  count = var.worker_count

  connection {
    type    = "ssh"
    host    = digitalocean_droplet.workers.*.ipv4_address[count.index]
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    content     = module.bootstrap.kubeconfig-kubelet
    destination = "$HOME/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mv $HOME/kubeconfig /etc/kubernetes/kubeconfig",
    ]
  }
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
  depends_on = [
    null_resource.copy-controller-secrets,
    null_resource.copy-worker-secrets,
  ]

  connection {
    type    = "ssh"
    host    = digitalocean_droplet.controllers[0].ipv4_address
    user    = "core"
    timeout = "15m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo systemctl start bootstrap",
    ]
  }
}
```
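How the bundle round-trips (a standalone illustration with made-up paths, assuming a POSIX shell): `assets_bundle` joins each asset as a `##### <path>` header followed by its contents, and the controller's `/opt/bootstrap/layout` script splits them back into files with `awk`.

```sh
# Build a two-asset bundle in the same "##### path\ncontent" format.
mkdir -p auth tls/k8s
cat > assets <<'EOF'
##### auth/kubeconfig
fake-kubeconfig-contents
##### tls/k8s/apiserver.crt
fake-certificate-contents
EOF

# Same awk used by /opt/bootstrap/layout: a "#####" line switches the
# output file; every other line is appended to the current file.
awk '/#####/ {filename=$2; next} {print > filename}' assets
cat auth/kubeconfig   # -> fake-kubeconfig-contents
```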
digital-ocean/fedora-coreos/kubernetes/variables.tf (new file, 114 lines)
```tf
variable "cluster_name" {
  type        = string
  description = "Unique cluster name (prepended to dns_zone)"
}

# Digital Ocean

variable "region" {
  type        = string
  description = "Digital Ocean region (e.g. nyc1, sfo2, fra1, tor1)"
}

variable "dns_zone" {
  type        = string
  description = "Digital Ocean domain (i.e. DNS zone) (e.g. do.example.com)"
}

# instances

variable "controller_count" {
  type        = number
  description = "Number of controllers (i.e. masters)"
  default     = 1
}

variable "worker_count" {
  type        = number
  description = "Number of workers"
  default     = 1
}

variable "controller_type" {
  type        = string
  description = "Droplet type for controllers (e.g. s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb)."
  default     = "s-2vcpu-2gb"
}

variable "worker_type" {
  type        = string
  description = "Droplet type for workers (e.g. s-1vcpu-2gb, s-2vcpu-2gb)"
  default     = "s-1vcpu-2gb"
}

variable "os_image" {
  type        = string
  description = "Fedora CoreOS image for instances"
}

variable "controller_snippets" {
  type        = list(string)
  description = "Controller Fedora CoreOS Config snippets"
  default     = []
}

variable "worker_snippets" {
  type        = list(string)
  description = "Worker Fedora CoreOS Config snippets"
  default     = []
}

# configuration

variable "ssh_fingerprints" {
  type        = list(string)
  description = "SSH public key fingerprints. (e.g. see `ssh-add -l -E md5`)"
}

variable "networking" {
  type        = string
  description = "Choice of networking provider (flannel or calico)"
  default     = "calico"
}

variable "pod_cidr" {
  type        = string
  description = "CIDR IPv4 range to assign Kubernetes pods"
  default     = "10.2.0.0/16"
}

variable "service_cidr" {
  type        = string
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
  default     = "10.3.0.0/16"
}

variable "enable_reporting" {
  type        = bool
  description = "Enable usage or analytics reporting to upstreams (Calico)"
  default     = false
}

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
}

# unofficial, undocumented, unsupported

variable "asset_dir" {
  type        = string
  description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
  default     = ""
}

variable "cluster_domain_suffix" {
  type        = string
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
  default     = "cluster.local"
}
```
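The reserved service IPs mentioned in the `service_cidr` description are derived with Terraform's built-in `cidrhost` function, the same call `bootstrap.tf` and the worker config use for CoreDNS. A quick standalone illustration:

```tf
# Standalone illustration (not part of the module): cidrhost picks the
# Nth address in a CIDR range.
output "apiserver_service_ip" {
  value = cidrhost("10.3.0.0/16", 1) # => "10.3.0.1"
}

output "cluster_dns_service_ip" {
  value = cidrhost("10.3.0.0/16", 10) # => "10.3.0.10"
}
```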
digital-ocean/fedora-coreos/kubernetes/versions.tf (new file, 12 lines)
```tf
# Terraform version and plugin versions

terraform {
  required_version = "~> 0.12.6"
  required_providers {
    digitalocean = "~> 1.3"
    ct           = "~> 0.3"
    template     = "~> 2.1"
    null         = "~> 2.1"
  }
}
```
digital-ocean/fedora-coreos/kubernetes/workers.tf (new file, 77 lines)
```tf
# Worker DNS records
resource "digitalocean_record" "workers-record-a" {
  count = var.worker_count

  # DNS zone where record should be created
  domain = var.dns_zone

  name  = "${var.cluster_name}-workers"
  type  = "A"
  ttl   = 300
  value = digitalocean_droplet.workers.*.ipv4_address[count.index]
}

/*
# TODO: Only official DigitalOcean images support IPv6
resource "digitalocean_record" "workers-record-aaaa" {
  count = var.worker_count

  # DNS zone where record should be created
  domain = var.dns_zone

  name  = "${var.cluster_name}-workers"
  type  = "AAAA"
  ttl   = 300
  value = digitalocean_droplet.workers.*.ipv6_address[count.index]
}
*/

# Worker droplet instances
resource "digitalocean_droplet" "workers" {
  count = var.worker_count

  name   = "${var.cluster_name}-worker-${count.index}"
  region = var.region

  image = var.os_image
  size  = var.worker_type

  # network
  # TODO: Only official DigitalOcean images support IPv6
  ipv6               = false
  private_networking = true

  user_data = data.ct_config.worker-ignition.rendered
  ssh_keys  = var.ssh_fingerprints

  tags = [
    digitalocean_tag.workers.id,
  ]

  lifecycle {
    create_before_destroy = true
  }
}

# Tag to label workers
resource "digitalocean_tag" "workers" {
  name = "${var.cluster_name}-worker"
}

# Worker Ignition config
data "ct_config" "worker-ignition" {
  content  = data.template_file.worker-config.rendered
  strict   = true
  snippets = var.worker_snippets
}

# Worker Fedora CoreOS config
data "template_file" "worker-config" {
  template = file("${path.module}/fcc/worker.yaml")

  vars = {
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  }
}
```
````diff
@@ -50,7 +50,7 @@ Configure the DigitalOcean provider to use your token in a `providers.tf` file.
 
 ```tf
 provider "digitalocean" {
-  version = "1.14.0"
+  version = "1.15.1"
   token = "${chomp(file("~/.config/digital-ocean/token"))}"
 }
 
````
````diff
@@ -204,7 +204,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital
 
 Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
 
-You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
+You'll need a registered domain name or delegated subdomain in DigitalOcean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
 
 ```tf
 # Declare a DigitalOcean record to also create a zone file
````
docs/fedora-coreos/digitalocean.md (new file, 247 lines)
# Digital Ocean

In this tutorial, we'll create a Kubernetes v1.18.1 cluster on DigitalOcean with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

* Digital Ocean Account and Token
* Digital Ocean Domain (registered Domain Name or delegated subdomain)
* Terraform v0.12.6+ and [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) installed locally

## Terraform Setup

Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your system.

```sh
$ terraform version
Terraform v0.12.21
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.5.0/terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.5.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.5.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
```

## Provider

Login to [DigitalOcean](https://cloud.digitalocean.com) or create an [account](https://cloud.digitalocean.com/registrations/new), if you don't have one.

Generate a Personal Access Token with read/write scope from the [API tab](https://cloud.digitalocean.com/settings/api/tokens). Write the token to a file that can be referenced in configs.

```sh
mkdir -p ~/.config/digital-ocean
echo "TOKEN" > ~/.config/digital-ocean/token
```

Configure the DigitalOcean provider to use your token in a `providers.tf` file.

```tf
provider "digitalocean" {
  version = "1.15.1"
  token = "${chomp(file("~/.config/digital-ocean/token"))}"
}

provider "ct" {
  version = "0.5.0"
}
```

## Fedora CoreOS Images

Fedora CoreOS publishes images for DigitalOcean, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via URL or file.
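One way to perform that upload is with `doctl` (a sketch, not from the Typhoon docs; the region is illustrative and the image URL is left as a placeholder to copy from the Fedora CoreOS download page):

```sh
# Sketch: register a Fedora CoreOS custom image from a URL with doctl.
doctl compute image create fedora-coreos-31.20200323.3.2-digitalocean.x86_64.qcow2.gz \
  --region nyc3 \
  --image-distribution Fedora \
  --image-url <URL from the Fedora CoreOS download page>
```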
Import a [Fedora CoreOS](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) image via URL to a desired region(s). Reference the DigitalOcean image and set the `os_image` in the next step.

```tf
data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {
  name = "fedora-coreos-31.20200323.3.2-digitalocean.x86_64.qcow2.gz"
}
```
## Cluster

Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubernetes`.

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.18.1"

  # Digital Ocean
  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"
  os_image     = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id

  # configuration
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]

  # optional
  worker_count = 2
}
```

Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-coreos/kubernetes/variables.tf) source.

## ssh-agent

Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```

## Apply

Initialize the config directory if this is the first use with Terraform.

```sh
terraform init
```

Plan the resources to be created.

```sh
$ terraform plan
Plan: 54 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.

```sh
$ terraform apply
module.nemo.null_resource.bootstrap: Still creating... (30s elapsed)
module.nemo.null_resource.bootstrap: Provisioning with 'remote-exec'...
...
module.nemo.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.nemo.null_resource.bootstrap: Creation complete (ID: 7599298447329218468)

Apply complete! Resources: 42 added, 0 changed, 0 destroyed.
```

In 3-6 minutes, the Kubernetes cluster will be ready.

## Verify

[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).

```
resource "local_file" "kubeconfig-nemo" {
  content  = module.nemo.kubeconfig-admin
  filename = "/home/user/.kube/configs/nemo-config"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
10.132.110.130   Ready    <none>   10m   v1.18.1
10.132.115.81    Ready    <none>   10m   v1.18.1
10.132.124.107   Ready    <none>   10m   v1.18.1
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-1187388186-ld1j7                    1/1     Running   0          11m
kube-system   coredns-1187388186-rdhf7                    1/1     Running   0          11m
kube-system   calico-node-1m5bf                           2/2     Running   0          11m
kube-system   calico-node-7jmr1                           2/2     Running   0          11m
kube-system   calico-node-bknc8                           2/2     Running   0          11m
kube-system   kube-apiserver-ip-10.132.115.81             1/1     Running   0          11m
kube-system   kube-controller-manager-ip-10.132.115.81    1/1     Running   0          11m
kube-system   kube-proxy-6kxjf                            1/1     Running   0          11m
kube-system   kube-proxy-fh3td                            1/1     Running   0          11m
kube-system   kube-proxy-k35rc                            1/1     Running   0          11m
kube-system   kube-scheduler-ip-10.132.115.81             1/1     Running   0          11m
```

## Going Further

Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

## Variables

Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-coreos/kubernetes/variables.tf) source.

### Required

| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "nemo" |
| region | Digital Ocean region | "nyc1", "sfo2", "fra1", "tor1" |
| dns_zone | Digital Ocean domain (i.e. DNS zone) | "do.example.com" |
| os_image | Fedora CoreOS image for instances | "custom-image-id" |
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |

#### DNS Zone

Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.

You'll need a registered domain name or delegated subdomain in DigitalOcean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.

```tf
# Declare a DigitalOcean record to also create a zone file
resource "digitalocean_domain" "zone-for-clusters" {
  name       = "do.example.com"
  ip_address = "8.8.8.8"
}
```

!!! tip ""
    If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).

#### SSH Fingerprints

DigitalOcean droplets are created with your SSH public key "fingerprint" (i.e. MD5 hash) to allow access. If your SSH public key is at `~/.ssh/id_rsa`, find the fingerprint with,

```bash
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'
MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7
```

If you use `ssh-agent` (e.g. Yubikey for SSH), find the fingerprint with,

```
ssh-add -l -E md5
2048 MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 cardno:000603633110 (RSA)
```

Digital Ocean requires the SSH public key be uploaded to your account, so you may also find the fingerprint under Settings -> Security. Finally, if you don't have an SSH key, [create one now](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/).
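If `doctl` is configured, the uploaded keys and their fingerprints can also be listed from the CLI (a sketch; the sample row is illustrative):

```sh
doctl compute ssh-key list
# ID          Name     FingerPrint
# 12345678    laptop   d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7
```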
### Optional

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Droplet type for controllers | "s-2vcpu-2gb" | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
| worker_type | Droplet type for workers | "s-1vcpu-2gb" | s-1vcpu-2gb, s-2vcpu-2gb, ... |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |

Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
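For example (slugs, sizes, and prices below are illustrative, not current):

```sh
doctl compute size list
# Slug           Memory    VCPUs    Disk    Price Monthly    Price Hourly
# s-1vcpu-2gb    2048      1        50      10.00            0.014880
# s-2vcpu-2gb    2048      2        60      15.00            0.022320
# s-2vcpu-4gb    4096      2        80      20.00            0.029760
```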
!!! warning
    Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.
````diff
@@ -73,13 +73,13 @@ Fedora CoreOS publishes images for Google Cloud, but does not yet upload them. G
 
 ```
 gsutil list
-gsutil cp fedora-coreos-31.20200310.3.0-gcp.x86_64.tar.gz gs://BUCKET
+gsutil cp fedora-coreos-31.20200323.3.2-gcp.x86_64.tar.gz gs://BUCKET
 ```
 
 Create a Compute Engine image from the file.
 
 ```
-gcloud compute images create fedora-coreos-31-20200310-3-0 --source-uri gs://BUCKET/fedora-coreos-31.20200310.3.0-gcp.x86_64.tar.gz
+gcloud compute images create fedora-coreos-31-20200323-3-2 --source-uri gs://BUCKET/fedora-coreos-31.20200323.3.2-gcp.x86_64.tar.gz
 ```
 
 ## Cluster
@@ -97,7 +97,7 @@ module "yavin" {
   dns_zone_name = "example-zone"
 
   # custom image name from above
-  os_image = "fedora-coreos-31-20200310-3-0"
+  os_image = "fedora-coreos-31-20200323-3-2"
 
   # configuration
   ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
@@ -107,7 +107,7 @@ module "yavin" {
 }
 ```
 
-Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
+Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-coreos/kubernetes/variables.tf) source.
 
 ## ssh-agent
@@ -194,7 +194,7 @@ Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).
 
 ## Variables
 
-Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
+Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-coreos/kubernetes/variables.tf) source.
 
 ### Required
@@ -204,6 +204,7 @@ Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-
 | region | Google Cloud region | "us-central1" |
 | dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
 | dns_zone_name | Google Cloud DNS zone name | "example-zone" |
+| os_image | Fedora CoreOS image for compute instances | "fedora-coreos-31-20200323-3-2" |
 | ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
 
 Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Fedora CoreOS [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep fedora-coreos`.
@@ -233,7 +234,6 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
 | worker_count | Number of workers | 1 | 3 |
 | controller_type | Machine type for controllers | "n1-standard-1" | See below |
 | worker_type | Machine type for workers | "n1-standard-1" | See below |
-| os_image | Fedora CoreOS image for compute instances | "" | "fedora-coreos-31-20200113-3-1" |
 | disk_size | Size of the disk in GB | 40 | 100 |
 | worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
 | controller_snippets | Controller Fedora CoreOS Config snippets | [] | [examples](/advanced/customization/) |
````
```diff
@@ -35,6 +35,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
 |---------------|------------------|------------------|--------|
 | AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](fedora-coreos/aws.md) | stable |
 | Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](fedora-coreos/bare-metal.md) | beta |
+| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](fedora-coreos/digitalocean.md) | alpha |
 | Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |
 
 Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org/releases/).
@@ -44,13 +45,13 @@ Typhoon is available for [Flatcar Container Linux](https://www.flatcar-linux.org
 | AWS | Flatcar Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
 | Azure | Flatcar Linux | [azure/container-linux/kubernetes](cl/azure.md) | alpha |
 | Bare-Metal | Flatcar Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
+| DigitalOcean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | alpha |
 | Google Cloud | Flatcar Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | alpha |
-| Digital Ocean | Flatcar Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | alpha |
 
 ## Documentation
 
 * Architecture [concepts](architecture/concepts.md) and [operating-systems](architecture/operating-systems.md)
-* Fedora CoreOS tutorials for [AWS](fedora-coreos/aws.md), [Bare-Metal](fedora-coreos/bare-metal.md), and [Google Cloud](fedora-coreos/google-cloud.md)
+* Fedora CoreOS tutorials for [AWS](fedora-coreos/aws.md), [Bare-Metal](fedora-coreos/bare-metal.md), [DigitalOcean](fedora-coreos/digitalocean.md), and [Google Cloud](fedora-coreos/google-cloud.md)
 * Flatcar Linux tutorials for [AWS](cl/aws.md), [Azure](cl/azure.md), [Bare-Metal](cl/bare-metal.md), [DigitalOcean](cl/digital-ocean.md), and [Google Cloud](cl/google-cloud.md)
 
 ## Example
```