Add support for Fedora CoreOS on Azure
* Add `azure/fedora-coreos/kubernetes` module
parent 76ab4c4c2a
commit 5c4a3f73d5
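
For context, a minimal sketch of how the new `azure/fedora-coreos/kubernetes` module might be consumed from a root Terraform config. The module ref, the `ramius` name, and all example values are illustrative; the inputs shown correspond to the variables defined in this change.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=<release>"

  # Azure
  cluster_name   = "ramius"
  region         = "centralus"
  dns_zone       = "azure.example.com"
  dns_zone_group = "example-group"

  # instances
  controller_count = 1
  worker_count     = 2
  os_image         = "/subscriptions/.../images/fedora-coreos-31.20200323.3.2" # custom image ID (illustrative)

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```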
CHANGES.md
@@ -11,11 +11,12 @@ Notable changes between versions.

#### AWS

-* Change Container Linux `os_image` default from `coreos-stable` to `flatcar-stable`
+* Change Container Linux `os_image` default from `coreos-stable` to `flatcar-stable` ([#702](https://github.com/poseidon/typhoon/pull/702))

#### Azure

-* Change Container Linux `os_image` default from `coreos-stable` to `flatcar-stable`
+* Add support for Fedora CoreOS ([#704](https://github.com/poseidon/typhoon/pull/704))
+* Change Container Linux `os_image` default from `coreos-stable` to `flatcar-stable` ([#702](https://github.com/poseidon/typhoon/pull/702))

#### Bare-Metal

@@ -23,11 +24,11 @@ Notable changes between versions.

#### Google

-* Change Container Linux `os_image` to be required. Container Linux users should upload a Flatcar Linux image and set it (**action required**)
+* Change Container Linux `os_image` to be required. Container Linux users should upload a Flatcar Linux image and set it (**action required**) ([#702](https://github.com/poseidon/typhoon/pull/702))

#### DigitalOcean

-* Change Container Linux `os_image` to be required. Container Linux users should upload a Flatcar Linux image and set it (**action required**)
+* Change Container Linux `os_image` to be required. Container Linux users should upload a Flatcar Linux image and set it (**action required**) ([#702](https://github.com/poseidon/typhoon/pull/702))

## v1.18.1

@@ -44,7 +45,7 @@ Notable changes between versions.

* Rename Container Linux `controller_clc_snippets` to `controller_snippets` for consistency ([#688](https://github.com/poseidon/typhoon/pull/688))
* Rename Container Linux `worker_clc_snippets` to `worker_snippets` for consistency
* Rename Container Linux `clc_snippets` (bare-metal) to `snippets` for consistency
-* Drop support for [gitRepo](https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo) volumes
+* Drop support for [gitRepo](https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo) volumes ([kubelet#3](https://github.com/poseidon/kubelet/pull/3))

#### Azure
@@ -26,6 +26,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).

| Platform      | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS           | Fedora CoreOS | [aws/fedora-coreos/kubernetes](aws/fedora-coreos/kubernetes) | stable |
+| Azure         | Fedora CoreOS | [azure/fedora-coreos/kubernetes](azure/fedora-coreos/kubernetes) | alpha |
| Bare-Metal    | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](bare-metal/fedora-coreos/kubernetes) | beta |
| DigitalOcean  | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](digital-ocean/fedora-coreos/kubernetes) | alpha |
| Google Cloud  | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |

@@ -54,7 +55,7 @@ Typhoon is available for CoreOS Container Linux ([no updates](https://coreos.com

* [Docs](https://typhoon.psdn.io)
* Architecture [concepts](https://typhoon.psdn.io/architecture/concepts/) and [operating systems](https://typhoon.psdn.io/architecture/operating-systems/)
-* Fedora CoreOS tutorials for [AWS](docs/fedora-coreos/aws.md), [Bare-Metal](docs/fedora-coreos/bare-metal.md), [DigitalOcean](docs/fedora-coreos/digitalocean.md), and [Google Cloud](docs/fedora-coreos/google-cloud.md)
+* Fedora CoreOS tutorials for [AWS](docs/fedora-coreos/aws.md), [Azure](docs/fedora-coreos/azure.md), [Bare-Metal](docs/fedora-coreos/bare-metal.md), [DigitalOcean](docs/fedora-coreos/digitalocean.md), and [Google Cloud](docs/fedora-coreos/google-cloud.md)
* Flatcar Linux tutorials for [AWS](docs/cl/aws.md), [Azure](docs/cl/azure.md), [Bare-Metal](docs/cl/bare-metal.md), [DigitalOcean](docs/cl/digital-ocean.md), and [Google Cloud](docs/cl/google-cloud.md)

## Usage
@@ -60,7 +60,7 @@ variable "disk_size" {

variable "worker_priority" {
  type        = string
-  description = "Set worker priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time."
+  description = "Set worker priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time."
  default     = "Regular"
}
@@ -52,7 +52,7 @@ variable "os_image" {

variable "priority" {
  type        = string
-  description = "Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be evicted at any time."
+  description = "Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be evicted at any time."
  default     = "Regular"
}
@@ -0,0 +1,23 @@
The MIT License (MIT)

Copyright (c) 2020 Typhoon Authors
Copyright (c) 2020 Dalton Hubble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@@ -0,0 +1,23 @@
# Typhoon <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

Typhoon is a minimal and free Kubernetes distribution.

* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Free (freedom and cost) and privacy-respecting
* Practical for labs, datacenters, and clouds

Typhoon distributes upstream Kubernetes, architectural conventions, and cluster addons, much like a GNU/Linux distribution provides the Linux kernel and userspace components.

## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

* Kubernetes v1.18.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the Azure [tutorial](https://typhoon.psdn.io/fedora-coreos/azure/).
@@ -0,0 +1,26 @@
# Kubernetes assets (kubeconfig, manifests)
module "bootstrap" {
  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1ad53d3b1c1ad75a4ed27f124f772fc5dc025245"

  cluster_name = var.cluster_name
  api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
  etcd_servers = formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)
  asset_dir    = var.asset_dir

  networking = var.networking

  # only effective with Calico networking
  # we should be able to use 1450 MTU, but in practice, 1410 was needed
  network_encapsulation = "vxlan"
  network_mtu           = "1410"

  pod_cidr              = var.pod_cidr
  service_cidr          = var.service_cidr
  cluster_domain_suffix = var.cluster_domain_suffix
  enable_reporting      = var.enable_reporting
  enable_aggregation    = var.enable_aggregation

  # Fedora CoreOS
  trusted_certs_dir = "/etc/pki/tls/certs"
}
@@ -0,0 +1,151 @@
# Discrete DNS records for each controller's private IPv4 for etcd usage
resource "azurerm_dns_a_record" "etcds" {
  count               = var.controller_count
  resource_group_name = var.dns_zone_group

  # DNS Zone name where record should be created
  zone_name = var.dns_zone

  # DNS record
  name = format("%s-etcd%d", var.cluster_name, count.index)
  ttl  = 300

  # private IPv4 address for etcd
  records = [azurerm_network_interface.controllers.*.private_ip_address[count.index]]
}

# Controller availability set to spread controllers
resource "azurerm_availability_set" "controllers" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                         = "${var.cluster_name}-controllers"
  location                     = var.region
  platform_fault_domain_count  = 2
  platform_update_domain_count = 4
  managed                      = true
}

# Controller instances
resource "azurerm_linux_virtual_machine" "controllers" {
  count               = var.controller_count
  resource_group_name = azurerm_resource_group.cluster.name

  name                = "${var.cluster_name}-controller-${count.index}"
  location            = var.region
  availability_set_id = azurerm_availability_set.controllers.id

  size        = var.controller_type
  custom_data = base64encode(data.ct_config.controller-ignitions.*.rendered[count.index])

  # storage
  source_image_id = var.os_image
  os_disk {
    name                 = "${var.cluster_name}-controller-${count.index}"
    caching              = "None"
    disk_size_gb         = var.disk_size
    storage_account_type = "Premium_LRS"
  }

  # network
  network_interface_ids = [
    azurerm_network_interface.controllers.*.id[count.index]
  ]

  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
  admin_username = "core"
  admin_ssh_key {
    username   = "core"
    public_key = var.ssh_authorized_key
  }

  lifecycle {
    ignore_changes = [
      os_disk,
      custom_data,
    ]
  }
}

# Controller public IPv4 addresses
resource "azurerm_public_ip" "controllers" {
  count               = var.controller_count
  resource_group_name = azurerm_resource_group.cluster.name

  name              = "${var.cluster_name}-controller-${count.index}"
  location          = azurerm_resource_group.cluster.location
  sku               = "Standard"
  allocation_method = "Static"
}

# Controller NICs with public and private IPv4
resource "azurerm_network_interface" "controllers" {
  count               = var.controller_count
  resource_group_name = azurerm_resource_group.cluster.name

  name     = "${var.cluster_name}-controller-${count.index}"
  location = azurerm_resource_group.cluster.location

  ip_configuration {
    name                          = "ip0"
    subnet_id                     = azurerm_subnet.controller.id
    private_ip_address_allocation = "Dynamic"
    # instance public IPv4
    public_ip_address_id = azurerm_public_ip.controllers.*.id[count.index]
  }
}

# Associate controller network interface with controller security group
resource "azurerm_network_interface_security_group_association" "controllers" {
  count = var.controller_count

  network_interface_id      = azurerm_network_interface.controllers[count.index].id
  network_security_group_id = azurerm_network_security_group.controller.id
}

# Associate controller network interface with controller backend address pool
resource "azurerm_network_interface_backend_address_pool_association" "controllers" {
  count = var.controller_count

  network_interface_id    = azurerm_network_interface.controllers[count.index].id
  ip_configuration_name   = "ip0"
  backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
}

# Controller Ignition configs
data "ct_config" "controller-ignitions" {
  count        = var.controller_count
  content      = data.template_file.controller-configs.*.rendered[count.index]
  pretty_print = false
  snippets     = var.controller_snippets
}

# Controller Fedora CoreOS configs
data "template_file" "controller-configs" {
  count = var.controller_count

  template = file("${path.module}/fcc/controller.yaml")

  vars = {
    # Cannot use cyclic dependencies on controllers or their DNS records
    etcd_name   = "etcd${count.index}"
    etcd_domain = "${var.cluster_name}-etcd${count.index}.${var.dns_zone}"
    # etcd0=https://cluster-etcd0.example.com,etcd1=https://cluster-etcd1.example.com,...
    etcd_initial_cluster   = join(",", data.template_file.etcds.*.rendered)
    kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
  }
}

data "template_file" "etcds" {
  count    = var.controller_count
  template = "etcd$${index}=https://$${cluster_name}-etcd$${index}.$${dns_zone}:2380"

  vars = {
    index        = count.index
    cluster_name = var.cluster_name
    dns_zone     = var.dns_zone
  }
}
@@ -0,0 +1,213 @@
---
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: etcd-member.service
      enabled: true
      contents: |
        [Unit]
        Description=etcd (System Container)
        Documentation=https://github.com/coreos/etcd
        Wants=network-online.target network.target
        After=network-online.target
        [Service]
        # https://github.com/opencontainers/runc/pull/1807
        # Type=notify
        # NotifyAccess=exec
        Type=exec
        Restart=on-failure
        RestartSec=10s
        TimeoutStartSec=0
        LimitNOFILE=40000
        ExecStartPre=/bin/mkdir -p /var/lib/etcd
        ExecStartPre=-/usr/bin/podman rm etcd
        #--volume $${NOTIFY_SOCKET}:/run/systemd/notify \
        ExecStart=/usr/bin/podman run --name etcd \
          --env-file /etc/etcd/etcd.env \
          --network host \
          --volume /var/lib/etcd:/var/lib/etcd:rw,Z \
          --volume /etc/ssl/etcd:/etc/ssl/certs:ro,Z \
          quay.io/coreos/etcd:v3.4.7
        ExecStop=/usr/bin/podman stop etcd
        [Install]
        WantedBy=multi-user.target
    - name: docker.service
      enabled: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
        RequiredBy=etcd-member.service
    - name: kubelet.service
      enabled: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube (System Container)
        Wants=rpc-statd.service
        [Service]
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --privileged \
          --pid host \
          --network host \
          --volume /etc/kubernetes:/etc/kubernetes:ro,z \
          --volume /usr/lib/os-release:/etc/os-release:ro \
          --volume /etc/ssl/certs:/etc/ssl/certs:ro \
          --volume /lib/modules:/lib/modules:ro \
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
          --volume /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          --volume /etc/pki/tls/certs:/usr/share/ca-certificates:ro \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          quay.io/poseidon/kubelet:v1.18.1 \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/master \
          --node-labels=node.kubernetes.io/controller="true" \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: bootstrap.service
      contents: |
        [Unit]
        Description=Kubernetes control plane
        ConditionPathExists=!/opt/bootstrap/bootstrap.done
        [Service]
        Type=oneshot
        RemainAfterExit=true
        WorkingDirectory=/opt/bootstrap
        ExecStartPre=-/usr/bin/podman rm bootstrap
        ExecStart=/usr/bin/podman run --name bootstrap \
          --network host \
          --volume /etc/kubernetes/bootstrap-secrets:/etc/kubernetes/secrets:ro,Z \
          --volume /opt/bootstrap/assets:/assets:ro,Z \
          --volume /opt/bootstrap/apply:/apply:ro,Z \
          --entrypoint=/apply \
          quay.io/poseidon/kubelet:v1.18.1
        ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
        ExecStartPost=-/usr/bin/podman stop bootstrap
storage:
  directories:
    - path: /etc/kubernetes
    - path: /opt/bootstrap
  files:
    - path: /etc/kubernetes/kubeconfig
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /opt/bootstrap/layout
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -e
          mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
          awk '/#####/ {filename=$2; next} {print > filename}' assets
          mkdir -p /etc/ssl/etcd/etcd
          mkdir -p /etc/kubernetes/bootstrap-secrets
          mv tls/etcd/{peer*,server*} /etc/ssl/etcd/etcd/
          mv tls/etcd/etcd-client* /etc/kubernetes/bootstrap-secrets/
          chown -R etcd:etcd /etc/ssl/etcd
          chmod -R 500 /etc/ssl/etcd
          mv auth/kubeconfig /etc/kubernetes/bootstrap-secrets/
          mv tls/k8s/* /etc/kubernetes/bootstrap-secrets/
          sudo mkdir -p /etc/kubernetes/manifests
          sudo mv static-manifests/* /etc/kubernetes/manifests/
          sudo mkdir -p /opt/bootstrap/assets
          sudo mv manifests /opt/bootstrap/assets/manifests
          sudo mv manifests-networking/* /opt/bootstrap/assets/manifests/
          rm -rf assets auth static-manifests tls manifests-networking
    - path: /opt/bootstrap/apply
      mode: 0544
      contents:
        inline: |
          #!/bin/bash -e
          export KUBECONFIG=/etc/kubernetes/secrets/kubeconfig
          until kubectl version; do
            echo "Waiting for static pod control plane"
            sleep 5
          done
          until kubectl apply -f /assets/manifests -R; do
            echo "Retry applying manifests"
            sleep 5
          done
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
          [Manager]
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
    - path: /etc/etcd/etcd.env
      mode: 0644
      contents:
        inline: |
          # TODO: Use a systemd dropin once podman v1.4.5 is avail.
          NOTIFY_SOCKET=/run/systemd/notify
          ETCD_NAME=${etcd_name}
          ETCD_DATA_DIR=/var/lib/etcd
          ETCD_ADVERTISE_CLIENT_URLS=https://${etcd_domain}:2379
          ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${etcd_domain}:2380
          ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379
          ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380
          ETCD_LISTEN_METRICS_URLS=http://0.0.0.0:2381
          ETCD_INITIAL_CLUSTER=${etcd_initial_cluster}
          ETCD_STRICT_RECONFIG_CHECK=true
          ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt
          ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt
          ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key
          ETCD_CLIENT_CERT_AUTH=true
          ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt
          ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt
          ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key
          ETCD_PEER_CLIENT_CERT_AUTH=true
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ${ssh_authorized_key}
@@ -0,0 +1,161 @@
# DNS record for the apiserver load balancer
resource "azurerm_dns_a_record" "apiserver" {
  resource_group_name = var.dns_zone_group

  # DNS Zone name where record should be created
  zone_name = var.dns_zone

  # DNS record
  name = var.cluster_name
  ttl  = 300

  # IPv4 address of apiserver load balancer
  records = [azurerm_public_ip.apiserver-ipv4.ip_address]
}

# Static IPv4 address for the apiserver frontend
resource "azurerm_public_ip" "apiserver-ipv4" {
  resource_group_name = azurerm_resource_group.cluster.name

  name              = "${var.cluster_name}-apiserver-ipv4"
  location          = var.region
  sku               = "Standard"
  allocation_method = "Static"
}

# Static IPv4 address for the ingress frontend
resource "azurerm_public_ip" "ingress-ipv4" {
  resource_group_name = azurerm_resource_group.cluster.name

  name              = "${var.cluster_name}-ingress-ipv4"
  location          = var.region
  sku               = "Standard"
  allocation_method = "Static"
}

# Network Load Balancer for apiservers and ingress
resource "azurerm_lb" "cluster" {
  resource_group_name = azurerm_resource_group.cluster.name

  name     = var.cluster_name
  location = var.region
  sku      = "Standard"

  frontend_ip_configuration {
    name                 = "apiserver"
    public_ip_address_id = azurerm_public_ip.apiserver-ipv4.id
  }

  frontend_ip_configuration {
    name                 = "ingress"
    public_ip_address_id = azurerm_public_ip.ingress-ipv4.id
  }
}

resource "azurerm_lb_rule" "apiserver" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                           = "apiserver"
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "apiserver"

  protocol                = "Tcp"
  frontend_port           = 6443
  backend_port            = 6443
  backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
  probe_id                = azurerm_lb_probe.apiserver.id
}

resource "azurerm_lb_rule" "ingress-http" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                           = "ingress-http"
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 80
  backend_port            = 80
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
}

resource "azurerm_lb_rule" "ingress-https" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                           = "ingress-https"
  loadbalancer_id                = azurerm_lb.cluster.id
  frontend_ip_configuration_name = "ingress"
  disable_outbound_snat          = true

  protocol                = "Tcp"
  frontend_port           = 443
  backend_port            = 443
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
  probe_id                = azurerm_lb_probe.ingress.id
}

# Worker outbound TCP/UDP SNAT
resource "azurerm_lb_outbound_rule" "worker-outbound" {
  resource_group_name = azurerm_resource_group.cluster.name

  name            = "worker"
  loadbalancer_id = azurerm_lb.cluster.id
  frontend_ip_configuration {
    name = "ingress"
  }

  protocol                = "All"
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
}

# Address pool of controllers
resource "azurerm_lb_backend_address_pool" "controller" {
  resource_group_name = azurerm_resource_group.cluster.name

  name            = "controller"
  loadbalancer_id = azurerm_lb.cluster.id
}

# Address pool of workers
resource "azurerm_lb_backend_address_pool" "worker" {
  resource_group_name = azurerm_resource_group.cluster.name

  name            = "worker"
  loadbalancer_id = azurerm_lb.cluster.id
}

# Health checks / probes

# TCP health check for apiserver
resource "azurerm_lb_probe" "apiserver" {
  resource_group_name = azurerm_resource_group.cluster.name

  name            = "apiserver"
  loadbalancer_id = azurerm_lb.cluster.id
  protocol        = "Tcp"
  port            = 6443

  # unhealthy threshold
  number_of_probes = 3

  interval_in_seconds = 5
}

# HTTP health check for ingress
resource "azurerm_lb_probe" "ingress" {
  resource_group_name = azurerm_resource_group.cluster.name

  name            = "ingress"
  loadbalancer_id = azurerm_lb.cluster.id
  protocol        = "Http"
  port            = 10254
  request_path    = "/healthz"

  # unhealthy threshold
  number_of_probes = 3

  interval_in_seconds = 5
}
@@ -0,0 +1,44 @@
# Organize cluster into a resource group
resource "azurerm_resource_group" "cluster" {
  name     = var.cluster_name
  location = var.region
}

resource "azurerm_virtual_network" "network" {
  resource_group_name = azurerm_resource_group.cluster.name

  name          = var.cluster_name
  location      = azurerm_resource_group.cluster.location
  address_space = [var.host_cidr]
}

# Subnets - separate subnets for controller and workers because Azure
# network security groups are based on IPv4 CIDR rather than instance
# tags like GCP or security group membership like AWS

resource "azurerm_subnet" "controller" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                 = "controller"
  virtual_network_name = azurerm_virtual_network.network.name
  address_prefix       = cidrsubnet(var.host_cidr, 1, 0)
}

resource "azurerm_subnet_network_security_group_association" "controller" {
  subnet_id                 = azurerm_subnet.controller.id
  network_security_group_id = azurerm_network_security_group.controller.id
}

resource "azurerm_subnet" "worker" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                 = "worker"
  virtual_network_name = azurerm_virtual_network.network.name
  address_prefix       = cidrsubnet(var.host_cidr, 1, 1)
}

resource "azurerm_subnet_network_security_group_association" "worker" {
  subnet_id                 = azurerm_subnet.worker.id
  network_security_group_id = azurerm_network_security_group.worker.id
}
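
For concreteness, a short sketch of how the two `cidrsubnet` calls split the host network; the values assume the default `host_cidr` of `10.0.0.0/16`:

```tf
cidrsubnet("10.0.0.0/16", 1, 0) # => "10.0.0.0/17"   (controller subnet)
cidrsubnet("10.0.0.0/16", 1, 1) # => "10.0.128.0/17" (worker subnet)
```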
@@ -0,0 +1,59 @@
output "kubeconfig-admin" {
  value = module.bootstrap.kubeconfig-admin
}

# Outputs for Kubernetes Ingress

output "ingress_static_ipv4" {
  value       = azurerm_public_ip.ingress-ipv4.ip_address
  description = "IPv4 address of the load balancer for distributing traffic to Ingress controllers"
}

# Outputs for worker pools

output "region" {
  value = azurerm_resource_group.cluster.location
}

output "resource_group_name" {
  value = azurerm_resource_group.cluster.name
}

output "resource_group_id" {
  value = azurerm_resource_group.cluster.id
}

output "subnet_id" {
  value = azurerm_subnet.worker.id
}

output "security_group_id" {
  value = azurerm_network_security_group.worker.id
}

output "kubeconfig" {
  value = module.bootstrap.kubeconfig-kubelet
}

# Outputs for custom firewalling

output "worker_security_group_name" {
  value = azurerm_network_security_group.worker.name
}

output "worker_address_prefix" {
  description = "Worker network subnet CIDR address (for source/destination)"
  value       = azurerm_subnet.worker.address_prefix
}

# Outputs for custom load balancing

output "loadbalancer_id" {
  description = "ID of the cluster load balancer"
  value       = azurerm_lb.cluster.id
}

output "backend_address_pool_id" {
  description = "ID of the worker backend address pool"
  value       = azurerm_lb_backend_address_pool.worker.id
}
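
A minimal sketch of how the `kubeconfig-admin` output might be consumed from a root config; the `ramius` module name and the output path are illustrative:

```tf
resource "local_file" "kubeconfig-ramius" {
  content  = module.ramius.kubeconfig-admin
  filename = "/home/user/.kube/configs/ramius-config"
}
```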
@@ -0,0 +1,336 @@
# Controller security group

resource "azurerm_network_security_group" "controller" {
  resource_group_name = azurerm_resource_group.cluster.name

  name     = "${var.cluster_name}-controller"
  location = azurerm_resource_group.cluster.location
}

resource "azurerm_network_security_rule" "controller-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-ssh"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-etcd" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-etcd"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2005"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "2379-2380"
  source_address_prefix       = azurerm_subnet.controller.address_prefix
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

# Allow Prometheus to scrape etcd metrics
resource "azurerm_network_security_rule" "controller-etcd-metrics" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-etcd-metrics"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2010"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "2381"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

# Allow Prometheus to scrape kube-proxy metrics
resource "azurerm_network_security_rule" "controller-kube-proxy" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-kube-proxy-metrics"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2011"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10249"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

# Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
resource "azurerm_network_security_rule" "controller-kube-metrics" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-kube-metrics"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2012"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10251-10252"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-apiserver" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-apiserver"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2015"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "6443"
  source_address_prefix       = "*"
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

resource "azurerm_network_security_rule" "controller-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-vxlan"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2020"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Udp"
  source_port_range           = "*"
  destination_port_range      = "4789"
  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "controller-node-exporter" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-node-exporter"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2025"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9100"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.controller.address_prefix
}

# Allow apiserver to access kubelet's for exec, log, port-forward
resource "azurerm_network_security_rule" "controller-kubelet" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-kubelet"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "2030"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10250"

  # allow Prometheus to scrape kubelet metrics too
  source_address_prefixes    = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
  destination_address_prefix = azurerm_subnet.controller.address_prefix
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
# https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules

resource "azurerm_network_security_rule" "controller-allow-loadblancer" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-loadbalancer"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "3000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "AzureLoadBalancer"
  destination_address_prefix  = "*"
}

resource "azurerm_network_security_rule" "controller-deny-all" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "deny-all"
  network_security_group_name = azurerm_network_security_group.controller.name
  priority                    = "3005"
  access                      = "Deny"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
}

# Worker security group

resource "azurerm_network_security_group" "worker" {
  resource_group_name = azurerm_resource_group.cluster.name

  name     = "${var.cluster_name}-worker"
  location = azurerm_resource_group.cluster.location
}

resource "azurerm_network_security_rule" "worker-ssh" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-ssh"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = azurerm_subnet.controller.address_prefix
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-http" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-http"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2005"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = "*"
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-https" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-https"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2010"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

resource "azurerm_network_security_rule" "worker-vxlan" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-vxlan"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2015"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Udp"
  source_port_range           = "*"
  destination_port_range      = "4789"
  source_address_prefixes     = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

# Allow Prometheus to scrape node-exporter daemonset
resource "azurerm_network_security_rule" "worker-node-exporter" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-node-exporter"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2020"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "9100"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

# Allow Prometheus to scrape kube-proxy
resource "azurerm_network_security_rule" "worker-kube-proxy" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-kube-proxy"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2024"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10249"
  source_address_prefix       = azurerm_subnet.worker.address_prefix
  destination_address_prefix  = azurerm_subnet.worker.address_prefix
}

# Allow apiserver to access kubelet's for exec, log, port-forward
resource "azurerm_network_security_rule" "worker-kubelet" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-kubelet"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "2025"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "10250"

  # allow Prometheus to scrape kubelet metrics too
  source_address_prefixes    = [azurerm_subnet.controller.address_prefix, azurerm_subnet.worker.address_prefix]
  destination_address_prefix = azurerm_subnet.worker.address_prefix
}

# Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
# https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#default-security-rules

resource "azurerm_network_security_rule" "worker-allow-loadblancer" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "allow-loadbalancer"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "3000"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "AzureLoadBalancer"
  destination_address_prefix  = "*"
}

resource "azurerm_network_security_rule" "worker-deny-all" {
  resource_group_name = azurerm_resource_group.cluster.name

  name                        = "deny-all"
  network_security_group_name = azurerm_network_security_group.worker.name
  priority                    = "3005"
  access                      = "Deny"
  direction                   = "Inbound"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
}
@@ -0,0 +1,59 @@
locals {
  # format assets for distribution
  assets_bundle = [
    # header with the unpack location
    for key, value in module.bootstrap.assets_dist :
    format("##### %s\n%s", key, value)
  ]
}

# Secure copy assets to controllers.
resource "null_resource" "copy-controller-secrets" {
  count = var.controller_count

  depends_on = [
    module.bootstrap,
    azurerm_linux_virtual_machine.controllers
  ]

  connection {
    type    = "ssh"
    host    = azurerm_public_ip.controllers.*.ip_address[count.index]
    user    = "core"
    timeout = "15m"
  }

  provisioner "file" {
    content     = join("\n", local.assets_bundle)
    destination = "$HOME/assets"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo /opt/bootstrap/layout",
    ]
  }
}

# Connect to a controller to perform one-time cluster bootstrap.
resource "null_resource" "bootstrap" {
  depends_on = [
    null_resource.copy-controller-secrets,
    module.workers,
    azurerm_dns_a_record.apiserver,
  ]

  connection {
    type    = "ssh"
    host    = azurerm_public_ip.controllers.*.ip_address[0]
    user    = "core"
    timeout = "15m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo systemctl start bootstrap",
    ]
  }
}
@@ -0,0 +1,143 @@
variable "cluster_name" {
  type        = string
  description = "Unique cluster name (prepended to dns_zone)"
}

# Azure

variable "region" {
  type        = string
  description = "Azure Region (e.g. centralus , see `az account list-locations --output table`)"
}

variable "dns_zone" {
  type        = string
  description = "Azure DNS Zone (e.g. azure.example.com)"
}

variable "dns_zone_group" {
  type        = string
  description = "Resource group where the Azure DNS Zone resides (e.g. global)"
}

# instances

variable "controller_count" {
  type        = number
  description = "Number of controllers (i.e. masters)"
  default     = 1
}

variable "worker_count" {
  type        = number
  description = "Number of workers"
  default     = 1
}

variable "controller_type" {
  type        = string
  description = "Machine type for controllers (see `az vm list-skus --location centralus`)"
  default     = "Standard_B2s"
}

variable "worker_type" {
  type        = string
  description = "Machine type for workers (see `az vm list-skus --location centralus`)"
  default     = "Standard_DS1_v2"
}

variable "os_image" {
  type        = string
  description = "Fedora CoreOS image for instances"
}

variable "disk_size" {
  type        = number
  description = "Size of the disk in GB"
  default     = 40
}

variable "worker_priority" {
  type        = string
  description = "Set worker priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time."
  default     = "Regular"
}

variable "controller_snippets" {
  type        = list(string)
  description = "Controller Fedora CoreOS Config snippets"
  default     = []
}

variable "worker_snippets" {
  type        = list(string)
  description = "Worker Fedora CoreOS Config snippets"
  default     = []
}

# configuration

variable "ssh_authorized_key" {
  type        = string
  description = "SSH public key for user 'core'"
}

variable "networking" {
  type        = string
  description = "Choice of networking provider (flannel or calico)"
  default     = "calico"
}

variable "host_cidr" {
  type        = string
  description = "CIDR IPv4 range to assign to instances"
  default     = "10.0.0.0/16"
}

variable "pod_cidr" {
  type        = string
  description = "CIDR IPv4 range to assign Kubernetes pods"
  default     = "10.2.0.0/16"
}

variable "service_cidr" {
  type        = string
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
  default     = "10.3.0.0/16"
}

variable "enable_reporting" {
  type        = bool
  description = "Enable usage or analytics reporting to upstreams (Calico)"
  default     = false
}

variable "enable_aggregation" {
  type        = bool
  description = "Enable the Kubernetes Aggregation Layer (defaults to false)"
  default     = false
}

variable "worker_node_labels" {
  type        = list(string)
  description = "List of initial worker node labels"
  default     = []
}

# unofficial, undocumented, unsupported

variable "asset_dir" {
  type        = string
  description = "Absolute path to a directory where generated assets should be placed (contains secrets)"
  default     = ""
}

variable "cluster_domain_suffix" {
  type        = string
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
  default     = "cluster.local"
}
@@ -0,0 +1,12 @@
# Terraform version and plugin versions

terraform {
  required_version = "~> 0.12.6"
  required_providers {
    azurerm  = "~> 2.0"
    ct       = "~> 0.3"
    template = "~> 2.1"
    null     = "~> 2.1"
  }
}
@ -0,0 +1,24 @@
module "workers" {
  source = "./workers"
  name   = var.cluster_name

  # Azure
  resource_group_name     = azurerm_resource_group.cluster.name
  region                  = azurerm_resource_group.cluster.location
  subnet_id               = azurerm_subnet.worker.id
  security_group_id       = azurerm_network_security_group.worker.id
  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id

  worker_count = var.worker_count
  vm_type      = var.worker_type
  os_image     = var.os_image
  priority     = var.worker_priority

  # configuration
  kubeconfig            = module.bootstrap.kubeconfig-kubelet
  ssh_authorized_key    = var.ssh_authorized_key
  service_cidr          = var.service_cidr
  cluster_domain_suffix = var.cluster_domain_suffix
  snippets              = var.worker_snippets
  node_labels           = var.worker_node_labels
}

@@ -0,0 +1,119 @@
---
variant: fcos
version: 1.0.0
systemd:
  units:
    - name: docker.service
      enabled: true
    - name: wait-for-dns.service
      enabled: true
      contents: |
        [Unit]
        Description=Wait for DNS entries
        Before=kubelet.service
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
        [Install]
        RequiredBy=kubelet.service
    - name: kubelet.service
      enabled: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube (System Container)
        Wants=rpc-statd.service
        [Service]
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /var/lib/calico
        ExecStartPre=/bin/mkdir -p /var/lib/kubelet/volumeplugins
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/podman rm kubelet
        ExecStart=/usr/bin/podman run --name kubelet \
          --privileged \
          --pid host \
          --network host \
          --volume /etc/kubernetes:/etc/kubernetes:ro,z \
          --volume /usr/lib/os-release:/etc/os-release:ro \
          --volume /etc/ssl/certs:/etc/ssl/certs:ro \
          --volume /lib/modules:/lib/modules:ro \
          --volume /run:/run \
          --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
          --volume /sys/fs/cgroup/systemd:/sys/fs/cgroup/systemd \
          --volume /etc/pki/tls/certs:/usr/share/ca-certificates:ro \
          --volume /var/lib/calico:/var/lib/calico:ro \
          --volume /var/lib/docker:/var/lib/docker \
          --volume /var/lib/kubelet:/var/lib/kubelet:rshared,z \
          --volume /var/log:/var/log \
          --volume /var/run/lock:/var/run/lock:z \
          --volume /opt/cni/bin:/opt/cni/bin:z \
          quay.io/poseidon/kubelet:v1.18.1 \
          --anonymous-auth=false \
          --authentication-token-webhook \
          --authorization-mode=Webhook \
          --cgroup-driver=systemd \
          --cgroups-per-qos=true \
          --enforce-node-allocatable=pods \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --cluster_dns=${cluster_dns_service_ip} \
          --cluster_domain=${cluster_domain_suffix} \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --healthz-port=0 \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node.kubernetes.io/node \
          %{~ for label in split(",", node_labels) ~}
          --node-labels=${label} \
          %{~ endfor ~}
          --pod-manifest-path=/etc/kubernetes/manifests \
          --read-only-port=0 \
          --volume-plugin-dir=/var/lib/kubelet/volumeplugins
        ExecStop=-/usr/bin/podman stop kubelet
        Delegate=yes
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: delete-node.service
      enabled: true
      contents: |
        [Unit]
        Description=Delete Kubernetes node on shutdown
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/bin/true
        ExecStop=/bin/bash -c '/usr/bin/podman run --volume /etc/kubernetes:/etc/kubernetes:ro,z --entrypoint /usr/local/bin/kubectl quay.io/poseidon/kubelet:v1.18.1 --kubeconfig=/etc/kubernetes/kubeconfig delete node $HOSTNAME'
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/kubernetes
  files:
    - path: /etc/kubernetes/kubeconfig
      mode: 0644
      contents:
        inline: |
          ${kubeconfig}
    - path: /etc/sysctl.d/max-user-watches.conf
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /etc/systemd/system.conf.d/accounting.conf
      contents:
        inline: |
          [Manager]
          DefaultCPUAccounting=yes
          DefaultMemoryAccounting=yes
          DefaultBlockIOAccounting=yes
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ${ssh_authorized_key}

@@ -0,0 +1,98 @@
variable "name" {
  type        = string
  description = "Unique name for the worker pool"
}

# Azure

variable "region" {
  type        = string
  description = "Must be set to the Azure Region of cluster"
}

variable "resource_group_name" {
  type        = string
  description = "Must be set to the resource group name of cluster"
}

variable "subnet_id" {
  type        = string
  description = "Must be set to the `worker_subnet_id` output by cluster"
}

variable "security_group_id" {
  type        = string
  description = "Must be set to the `worker_security_group_id` output by cluster"
}

variable "backend_address_pool_id" {
  type        = string
  description = "Must be set to the `worker_backend_address_pool_id` output by cluster"
}

# instances

variable "worker_count" {
  type        = number
  description = "Number of instances"
  default     = 1
}

variable "vm_type" {
  type        = string
  description = "Machine type for instances (see `az vm list-skus --location centralus`)"
  default     = "Standard_DS1_v2"
}

variable "os_image" {
  type        = string
  description = "Fedora CoreOS image for instances"
}

variable "priority" {
  type        = string
  description = "Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be evicted at any time."
  default     = "Regular"
}

variable "snippets" {
  type        = list(string)
  description = "Fedora CoreOS Config snippets"
  default     = []
}

# configuration

variable "kubeconfig" {
  type        = string
  description = "Must be set to `kubeconfig` output by cluster"
}

variable "ssh_authorized_key" {
  type        = string
  description = "SSH public key for user 'core'"
}

variable "service_cidr" {
  type        = string
  description = <<EOD
CIDR IPv4 range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for coredns.
EOD
  default     = "10.3.0.0/16"
}

variable "node_labels" {
  type        = list(string)
  description = "List of initial node labels"
  default     = []
}

# unofficial, undocumented, unsupported

variable "cluster_domain_suffix" {
  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
  type        = string
  default     = "cluster.local"
}

@@ -0,0 +1,4 @@
terraform {
  required_version = ">= 0.12"
}

@@ -0,0 +1,92 @@
# Workers scale set
resource "azurerm_linux_virtual_machine_scale_set" "workers" {
  resource_group_name = var.resource_group_name

  name      = "${var.name}-worker"
  location  = var.region
  sku       = var.vm_type
  instances = var.worker_count
  # instance name prefix for instances in the set
  computer_name_prefix   = "${var.name}-worker"
  single_placement_group = false
  custom_data            = base64encode(data.ct_config.worker-ignition.rendered)

  # storage
  source_image_id = var.os_image
  os_disk {
    storage_account_type = "Standard_LRS"
    caching              = "ReadWrite"
  }

  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
  admin_username = "core"
  admin_ssh_key {
    username   = "core"
    public_key = var.ssh_authorized_key
  }

  # network
  network_interface {
    name                      = "nic0"
    primary                   = true
    network_security_group_id = var.security_group_id

    ip_configuration {
      name      = "ip0"
      primary   = true
      subnet_id = var.subnet_id

      # backend address pool to which the NIC should be added
      load_balancer_backend_address_pool_ids = [var.backend_address_pool_id]
    }
  }

  # lifecycle
  upgrade_mode = "Manual"
  # eviction policy may only be set when priority is Spot
  priority        = var.priority
  eviction_policy = var.priority == "Spot" ? "Delete" : null
}

# Scale up or down to maintain desired number, tolerating deallocations.
resource "azurerm_monitor_autoscale_setting" "workers" {
  resource_group_name = var.resource_group_name

  name     = "${var.name}-maintain-desired"
  location = var.region

  # autoscale
  enabled            = true
  target_resource_id = azurerm_linux_virtual_machine_scale_set.workers.id

  profile {
    name = "default"

    capacity {
      minimum = var.worker_count
      default = var.worker_count
      maximum = var.worker_count
    }
  }
}

# Worker Ignition configs
data "ct_config" "worker-ignition" {
  content      = data.template_file.worker-config.rendered
  pretty_print = false
  snippets     = var.snippets
}

# Worker Fedora CoreOS configs
data "template_file" "worker-config" {
  template = file("${path.module}/fcc/worker.yaml")

  vars = {
    kubeconfig             = indent(10, var.kubeconfig)
    ssh_authorized_key     = var.ssh_authorized_key
    cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
    cluster_domain_suffix  = var.cluster_domain_suffix
    node_labels            = join(",", var.node_labels)
  }
}

@@ -7,6 +7,7 @@ Internal Terraform Modules:
* `aws/container-linux/kubernetes/workers`
* `aws/fedora-coreos/kubernetes/workers`
* `azure/container-linux/kubernetes/workers`
* `azure/fedora-coreos/kubernetes/workers`
* `google-cloud/container-linux/kubernetes/workers`
* `google-cloud/fedora-coreos/kubernetes/workers`

@@ -91,14 +92,14 @@ module "ramius-worker-pool" {
  backend_address_pool_id = module.ramius.backend_address_pool_id

  # configuration
  name               = "ramius-low-priority"
  name               = "ramius-spot"
  kubeconfig         = module.ramius.kubeconfig
  ssh_authorized_key = var.ssh_authorized_key

  # optional
  worker_count = 2
  vm_type      = "Standard_F4"
  priority     = "Low"
  priority     = "Spot"
}
```

@@ -1,8 +1,5 @@
# Azure

!!! danger
    Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check [errata](https://github.com/poseidon/typhoon/wiki/Errata) for known shortcomings.

In this tutorial, we'll create a Kubernetes v1.18.1 cluster on Azure with CoreOS Container Linux or Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

@@ -50,7 +47,7 @@ Configure the Azure provider in a `providers.tf` file.
```tf
provider "azurerm" {
  version = "2.1.0"
  version = "2.5.0"
}

provider "ct" {

@@ -0,0 +1,261 @@
# Azure

In this tutorial, we'll create a Kubernetes v1.18.1 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

* Azure account
* Azure DNS Zone (registered Domain Name or delegated subdomain)
* Terraform v0.12.6+ and [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) installed locally

## Terraform Setup

Install [Terraform](https://www.terraform.io/downloads.html) v0.12.6+ on your system.

```sh
$ terraform version
Terraform v0.12.21
```

Add the [terraform-provider-ct](https://github.com/poseidon/terraform-provider-ct) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name.

```sh
wget https://github.com/poseidon/terraform-provider-ct/releases/download/v0.5.0/terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.5.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.5.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.5.0
```

Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).

```
cd infra/clusters
```

## Provider

[Install](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) the Azure `az` command line tool to [authenticate with Azure](https://www.terraform.io/docs/providers/azurerm/authenticating_via_azure_cli.html).

```
az login
```

Configure the Azure provider in a `providers.tf` file.

```tf
provider "azurerm" {
  version = "2.5.0"
}

provider "ct" {
  version = "0.5.0"
}
```

Additional configuration options are described in the `azurerm` provider [docs](https://www.terraform.io/docs/providers/azurerm/).

## Fedora CoreOS Images

Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure allows custom images to be uploaded to a storage account bucket and imported.

[Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).

```
xz -d fedora-coreos-31.20200323.3.2-azure.x86_64.vhd.xz
```
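
If the portal upload is too slow, the Azure CLI can upload the decompressed VHD as a page blob instead. A sketch, assuming a storage account and a `fedora-coreos` container already exist (names here are illustrative):

```
az storage blob upload --account-name ACCOUNT --container-name fedora-coreos --name fedora-coreos-31.20200323.3.2-azure.x86_64.vhd --file fedora-coreos-31.20200323.3.2-azure.x86_64.vhd --type page
```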

Create an Azure disk (note its ID) and create an Azure image from it (note its ID).

```
az disk create --name fedora-coreos-31.20200323.3.2 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-31.20200323.3.2-azure.x86_64.vhd

az image create --name fedora-coreos-31.20200323.3.2 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-31.20200323.3.2
```

Set the [os_image](#variables) in the next step.

## Cluster

Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.18.1"

  # Azure
  cluster_name   = "ramius"
  region         = "centralus"
  dns_zone       = "azure.example.com"
  dns_zone_group = "example-group"

  # configuration
  os_image           = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional
  worker_count = 2
  host_cidr    = "10.0.0.0/20"
}
```

Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/fedora-coreos/kubernetes/variables.tf) source.

## ssh-agent

Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.

```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```

## Apply

Initialize the config directory if this is the first use with Terraform.

```sh
terraform init
```

Plan the resources to be created.

```sh
$ terraform plan
Plan: 86 to add, 0 to change, 0 to destroy.
```

Apply the changes to create the cluster.

```sh
$ terraform apply
...
module.ramius.null_resource.bootstrap: Still creating... (6m50s elapsed)
module.ramius.null_resource.bootstrap: Still creating... (7m0s elapsed)
module.ramius.null_resource.bootstrap: Creation complete after 7m8s (ID: 3961816482286168143)

Apply complete! Resources: 69 added, 0 changed, 0 destroyed.
```

In 4-8 minutes, the Kubernetes cluster will be ready.

## Verify

[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).

```
resource "local_file" "kubeconfig-ramius" {
  content  = module.ramius.kubeconfig-admin
  filename = "/home/user/.kube/configs/ramius-config"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                   STATUS  ROLES   AGE  VERSION
ramius-controller-0    Ready   <none>  24m  v1.18.1
ramius-worker-000001   Ready   <none>  25m  v1.18.1
ramius-worker-000002   Ready   <none>  24m  v1.18.1
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY  STATUS   RESTARTS  AGE
kube-system   coredns-7c6fbb4f4b-b6qzx                       1/1    Running  0         26m
kube-system   coredns-7c6fbb4f4b-j2k3d                       1/1    Running  0         26m
kube-system   calico-node-1m5bf                              2/2    Running  0         26m
kube-system   calico-node-7jmr1                              2/2    Running  0         26m
kube-system   calico-node-bknc8                              2/2    Running  0         26m
kube-system   kube-apiserver-ramius-controller-0             1/1    Running  0         26m
kube-system   kube-controller-manager-ramius-controller-0    1/1    Running  0         26m
kube-system   kube-proxy-j4vpq                               1/1    Running  0         26m
kube-system   kube-proxy-jxr5d                               1/1    Running  0         26m
kube-system   kube-proxy-lbdw5                               1/1    Running  0         26m
kube-system   kube-scheduler-ramius-controller-0             1/1    Running  0         26m
```

## Going Further

Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/).

## Variables

Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/azure/fedora-coreos/kubernetes/variables.tf) source.

### Required

| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "ramius" |
| region | Azure region | "centralus" |
| dns_zone | Azure DNS zone | "azure.example.com" |
| dns_zone_group | Resource group where the Azure DNS zone resides | "global" |
| os_image | Fedora CoreOS image for instances | "/subscriptions/..../custom-image" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |

!!! tip
    Regions are shown in [docs](https://azure.microsoft.com/en-us/global-infrastructure/regions/) or with `az account list-locations --output table`.

#### DNS Zone

Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `ramius.azure.example.com`.

You'll need a registered domain name or delegated subdomain on Azure DNS. You can set this up once and create many clusters with unique names.

```tf
# Azure resource group for DNS zone
resource "azurerm_resource_group" "global" {
  name     = "global"
  location = "centralus"
}

# DNS zone for clusters
resource "azurerm_dns_zone" "clusters" {
  resource_group_name = azurerm_resource_group.global.name

  name      = "azure.example.com"
  zone_type = "Public"
}
```

Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource group with `azurerm_resource_group.global.name`.
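
For example, the `ramius` cluster definition above could reference these resources directly rather than hard-coding the zone name (a sketch using the resources just defined):

```tf
module "ramius" {
  # ...

  # Azure DNS zone and its resource group
  dns_zone       = azurerm_dns_zone.clusters.name
  dns_zone_group = azurerm_resource_group.global.name
}
```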

!!! tip ""
    If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Azure DNS (e.g. azure.mydomain.com) and [update nameservers](https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns).

### Optional

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Fedora CoreOS Config snippets | [] | [example](/advanced/customization/#usage) |
| networking | Choice of networking provider | "calico" | "flannel" or "calico" |
| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |
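
The `controller_snippets` and `worker_snippets` rows above take Fedora CoreOS Config fragments as strings. A minimal sketch of passing one (the MOTD file below is illustrative, not part of this module):

```tf
module "ramius" {
  # ...

  worker_snippets = [
    <<-EOT
    variant: fcos
    version: 1.0.0
    storage:
      files:
        - path: /etc/motd
          mode: 0644
          contents:
            inline: Typhoon worker (example snippet)
    EOT
  ]
}
```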

Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.

!!! warning
    Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `host_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc).

!!! warning
    Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.

#### Spot Priority

Add `worker_priority=Spot` to use [Spot Priority](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/spot-vms) workers that run on Azure's surplus capacity at lower cost, but with the tradeoff that they can be deallocated at random. Spot priority VMs are Azure's analog to AWS spot instances or GCP preemptible instances.
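
A minimal sketch of enabling Spot workers on the cluster defined earlier:

```tf
module "ramius" {
  # ...

  # optional
  worker_priority = "Spot"
}
```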

@@ -63,7 +63,7 @@ provider "ct" {

Fedora CoreOS publishes images for DigitalOcean, but does not yet upload them. DigitalOcean allows [custom images](https://blog.digitalocean.com/custom-images/) to be uploaded via URL or file.

Import a [Fedora CoreOS](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) image via URL to desired a region(s). Reference the DigitalOcean image and set the `os_image` in the next step.
Import a [Fedora CoreOS](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) image via URL to desired a region(s).

```tf
data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {

@@ -71,6 +71,8 @@ data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {
}
```

Set the [os_image](#variables) in the next step.

## Cluster

Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubernetes`.

@@ -83,9 +85,9 @@ module "nemo" {
  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"
  os_image     = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id

  # configuration
  os_image         = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]

  # optional

@@ -82,6 +82,8 @@ Create a Compute Engine image from the file.
gcloud compute images create fedora-coreos-31-20200323-3-2 --source-uri gs://BUCKET/fedora-coreos-31.20200323.3.2-gcp.x86_64.tar.gz
```

Set the [os_image](#variables) in the next step.

## Cluster

Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kubernetes`.

@@ -96,10 +98,8 @@ module "yavin" {
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # custom image name from above
  os_image = "fedora-coreos-31-20200323-3-2"

  # configuration
  os_image           = "fedora-coreos-31-20200323-3-2"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional

@@ -26,6 +26,7 @@ Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Fedora CoreOS | [aws/fedora-coreos/kubernetes](fedora-coreos/aws.md) | stable |
| Azure | Fedora CoreOS | [azure/fedora-coreos/kubernetes](fedora-coreos/azure.md) | alpha |
| Bare-Metal | Fedora CoreOS | [bare-metal/fedora-coreos/kubernetes](fedora-coreos/bare-metal.md) | beta |
| DigitalOcean | Fedora CoreOS | [digital-ocean/fedora-coreos/kubernetes](fedora-coreos/digitalocean.md) | alpha |
| Google Cloud | Fedora CoreOS | [google-cloud/fedora-coreos/kubernetes](google-cloud/fedora-coreos/kubernetes) | beta |

@@ -54,7 +55,7 @@ Typhoon is available for CoreOS Container Linux ([no updates](https://coreos.com
## Documentation

* Architecture [concepts](architecture/concepts.md) and [operating-systems](architecture/operating-systems.md)
* Fedora CoreOS tutorials for [AWS](fedora-coreos/aws.md), [Bare-Metal](fedora-coreos/bare-metal.md), [DigitalOcean](fedora-coreos/digitalocean.md), and [Google Cloud](fedora-coreos/google-cloud.md)
* Fedora CoreOS tutorials for [AWS](fedora-coreos/aws.md), [Azure](fedora-coreos/azure.md), [Bare-Metal](fedora-coreos/bare-metal.md), [DigitalOcean](fedora-coreos/digitalocean.md), and [Google Cloud](fedora-coreos/google-cloud.md)
* Flatcar Linux tutorials for [AWS](cl/aws.md), [Azure](cl/azure.md), [Bare-Metal](cl/bare-metal.md), [DigitalOcean](cl/digital-ocean.md), and [Google Cloud](cl/google-cloud.md)

## Example

@@ -14,10 +14,10 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
* Kubernetes v1.18.1 (upstream)
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/cl/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/fedora-coreos/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/) customization
* Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

## Docs

Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/cl/google-cloud/).
Please see the [official docs](https://typhoon.psdn.io) and the Google Cloud [tutorial](https://typhoon.psdn.io/fedora-coreos/google-cloud/).