Add IPv6 support for Typhoon Azure clusters
* Define a dual-stack virtual network with both IPv4 and IPv6 private address space. Change the `host_cidr` variable (string) to a `network_cidr` variable (object) with "ipv4" and "ipv6" fields that list CIDR strings.
* Define dual-stack controller and worker subnets. Disable Azure default outbound access (a deprecated fallback mechanism).
* Enable dual-stack load balancing to Kubernetes Ingress by adding a public IPv6 frontend IP and LB rule to the load balancer.
* Enable worker outbound IPv6 connectivity through load balancer SNAT by adding an IPv6 frontend IP and outbound rule.
* Configure controller nodes with a public IPv6 address to provide direct outbound IPv6 connectivity.
* Add an IPv6 worker backend pool. Azure requires separate IPv4 and IPv6 backend pools, though the health probe can be shared.
* Extend network security group rules to cover IPv6 sources and destinations.

Checklist:

Access to controller and worker nodes via IPv6 addresses:

* SSH access to controller nodes via public IPv6 address
* SSH access to worker nodes via (private) IPv6 address (through a controller)

Outbound IPv6 connectivity from controller and worker nodes:

```
nc -6 -zv ipv6.google.com 80
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Connected to [2607:f8b0:4001:c16::66]:80.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.
```

Serving Ingress traffic via IPv4 or IPv6 just requires setting up A and AAAA records and running the Ingress controller with `hostNetwork: true`, since hostPort only forwards IPv4 traffic.
This commit is contained in:
parent 3483ed8bd5
commit 48d4973957
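To act on the note above about A and AAAA records: a minimal sketch in HCL, assuming a cluster module named `cluster` (exposing the `ingress_static_ipv4`/`ingress_static_ipv6` outputs this commit adds), plus a DNS zone `zone` and resource group `dns` that are illustrative, not part of this commit:

```hcl
# Pair an A record with an AAAA record so the same hostname serves
# Ingress over both IPv4 and IPv6. Zone and resource group are assumed.
resource "azurerm_dns_a_record" "ingress" {
  resource_group_name = azurerm_resource_group.dns.name
  zone_name           = azurerm_dns_zone.zone.name
  name                = "app"
  ttl                 = 300
  records             = [module.cluster.ingress_static_ipv4]
}

resource "azurerm_dns_aaaa_record" "ingress" {
  resource_group_name = azurerm_resource_group.dns.name
  zone_name           = azurerm_dns_zone.zone.name
  name                = "app"
  ttl                 = 300
  records             = [module.cluster.ingress_static_ipv6]
}
```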
CHANGES.md
@@ -4,6 +4,38 @@ Notable changes between versions.

The hunk adds an Azure section and the v1.30.2 notes under `## Latest` (shown as the resulting markdown):

````markdown
## Latest

### Azure

* Configure the virtual network and subnets with IPv6 private address space
* Change `host_cidr` variable (string) to a `network_cidr` object with `ipv4` and `ipv6` fields that list CIDR strings. Leave the variable unset to use the defaults. (**breaking**)
* Add support for dual-stack Kubernetes Ingress Load Balancing
* Add a public IPv6 frontend, 80/443 rules, and a worker-ipv6 backend pool
* Change the `controller_address_prefixes` output from a list of strings to an object with `ipv4` and `ipv6` fields. Most Azure resources can't accept a mix, so these are split out (**breaking**)
* Change the `worker_address_prefixes` output from a list of strings to an object with `ipv4` and `ipv6` fields. Most Azure resources can't accept a mix, so these are split out (**breaking**)
* Change the `backend_address_pool_id` output (and worker module input) from a string to an object with `ipv4` and `ipv6` fields that list ids (**breaking**)
* Configure nodes to have outbound IPv6 internet connectivity (analogous to IPv4 SNAT)
* Configure controller nodes to have a public IPv6 address
* Configure worker nodes to use outbound rules and the load balancer for SNAT
* Extend network security rules to allow IPv6 traffic, analogous to IPv4

```diff
module "cluster" {
  ...
  # optional
- host_cidr = "10.0.0.0/16"
+ network_cidr = {
+   ipv4 = ["10.0.0.0/16"]
+ }
}
```

## v1.30.2

* Kubernetes [v1.30.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1302)
* Update CoreDNS from v1.9.4 to v1.11.1
* Update Cilium from v1.15.5 to [v1.15.6](https://github.com/cilium/cilium/releases/tag/v1.15.6)
* Update flannel from v0.25.1 to [v0.25.4](https://github.com/flannel-io/flannel/releases/tag/v0.25.4)

## v1.30.1

* Kubernetes [v1.30.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1301)
````
```diff
@@ -19,14 +19,13 @@ resource "azurerm_dns_a_record" "etcds" {
   ttl = 300
   # private IPv4 address for etcd
-  records = [azurerm_network_interface.controllers.*.private_ip_address[count.index]]
+  records = [azurerm_network_interface.controllers[count.index].private_ip_address]
 }

 # Controller availability set to spread controllers
 resource "azurerm_availability_set" "controllers" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                = "${var.cluster_name}-controllers"
+  resource_group_name = azurerm_resource_group.cluster.name
   location            = var.region
   platform_fault_domain_count  = 2
   platform_update_domain_count = 4
@@ -35,15 +34,13 @@ resource "azurerm_availability_set" "controllers" {
 # Controller instances
 resource "azurerm_linux_virtual_machine" "controllers" {
   count = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name

   name                = "${var.cluster_name}-controller-${count.index}"
+  resource_group_name = azurerm_resource_group.cluster.name
   location            = var.region
   availability_set_id = azurerm_availability_set.controllers.id
   size                = var.controller_type
-  custom_data         = base64encode(data.ct_config.controllers.*.rendered[count.index])

   # storage
   source_image_id = var.os_image
@@ -56,10 +53,16 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   # network
   network_interface_ids = [
-    azurerm_network_interface.controllers.*.id[count.index]
+    azurerm_network_interface.controllers[count.index].id
   ]

-  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
+  # boot
+  custom_data = base64encode(data.ct_config.controllers[count.index].rendered)
+  boot_diagnostics {
+    # defaults to a managed storage account
+  }
+
+  # Azure requires an RSA admin_ssh_key
   admin_username = "core"
   admin_ssh_key {
     username = "core"
@@ -74,31 +77,52 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   }
 }

-# Controller public IPv4 addresses
-resource "azurerm_public_ip" "controllers" {
-  count = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-controller-${count.index}"
-  location          = azurerm_resource_group.cluster.location
-  sku               = "Standard"
-  allocation_method = "Static"
+# Controller node public IPv4 addresses
+resource "azurerm_public_ip" "controllers-ipv4" {
+  count = var.controller_count
+
+  name                = "${var.cluster_name}-controller-${count.index}-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location
+  ip_version          = "IPv4"
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }

-# Controller NICs with public and private IPv4
+# Controller node public IPv6 addresses
+resource "azurerm_public_ip" "controllers-ipv6" {
+  count = var.controller_count
+
+  name                = "${var.cluster_name}-controller-${count.index}-ipv6"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location
+  ip_version          = "IPv6"
+  sku                 = "Standard"
+  allocation_method   = "Static"
+}
+
+# Controllers' network interfaces
 resource "azurerm_network_interface" "controllers" {
   count = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name

-  name     = "${var.cluster_name}-controller-${count.index}"
-  location = azurerm_resource_group.cluster.location
+  name                = "${var.cluster_name}-controller-${count.index}"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location

   ip_configuration {
-    name = "ip0"
+    name    = "ipv4"
+    primary = true
     subnet_id                     = azurerm_subnet.controller.id
     private_ip_address_allocation = "Dynamic"
-    # instance public IPv4
-    public_ip_address_id = azurerm_public_ip.controllers.*.id[count.index]
+    private_ip_address_version    = "IPv4"
+    public_ip_address_id          = azurerm_public_ip.controllers-ipv4[count.index].id
+  }
+  ip_configuration {
+    name                          = "ipv6"
+    subnet_id                     = azurerm_subnet.controller.id
+    private_ip_address_allocation = "Dynamic"
+    private_ip_address_version    = "IPv6"
+    public_ip_address_id          = azurerm_public_ip.controllers-ipv6[count.index].id
   }
 }
@@ -115,7 +139,7 @@ resource "azurerm_network_interface_backend_address_pool_association" "controllers" {
   count = var.controller_count

   network_interface_id    = azurerm_network_interface.controllers[count.index].id
-  ip_configuration_name   = "ip0"
+  ip_configuration_name   = "ipv4"
   backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
 }
```
```diff
@@ -15,31 +15,39 @@ resource "azurerm_dns_a_record" "apiserver" {
 # Static IPv4 address for the apiserver frontend
 resource "azurerm_public_ip" "apiserver-ipv4" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-apiserver-ipv4"
-  location          = var.region
-  sku               = "Standard"
-  allocation_method = "Static"
+  name                = "${var.cluster_name}-apiserver-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }

 # Static IPv4 address for the ingress frontend
 resource "azurerm_public_ip" "ingress-ipv4" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-ingress-ipv4"
-  location          = var.region
-  sku               = "Standard"
-  allocation_method = "Static"
+  name                = "${var.cluster_name}-ingress-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  ip_version          = "IPv4"
+  sku                 = "Standard"
+  allocation_method   = "Static"
+}
+
+# Static IPv6 address for the ingress frontend
+resource "azurerm_public_ip" "ingress-ipv6" {
+  name                = "${var.cluster_name}-ingress-ipv6"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  ip_version          = "IPv6"
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }

 # Network Load Balancer for apiservers and ingress
 resource "azurerm_lb" "cluster" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name     = var.cluster_name
-  location = var.region
-  sku      = "Standard"
+  name                = var.cluster_name
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  sku                 = "Standard"

   frontend_ip_configuration {
     name = "apiserver"
@@ -47,15 +55,21 @@ resource "azurerm_lb" "cluster" {
   }

   frontend_ip_configuration {
-    name                 = "ingress"
+    name                 = "ingress-ipv4"
     public_ip_address_id = azurerm_public_ip.ingress-ipv4.id
   }
+
+  frontend_ip_configuration {
+    name                 = "ingress-ipv6"
+    public_ip_address_id = azurerm_public_ip.ingress-ipv6.id
+  }
 }

 resource "azurerm_lb_rule" "apiserver" {
   name                           = "apiserver"
   loadbalancer_id                = azurerm_lb.cluster.id
   frontend_ip_configuration_name = "apiserver"
+  disable_outbound_snat          = true

   protocol      = "Tcp"
   frontend_port = 6443
@@ -64,44 +78,60 @@ resource "azurerm_lb_rule" "apiserver" {
   probe_id = azurerm_lb_probe.apiserver.id
 }

-resource "azurerm_lb_rule" "ingress-http" {
-  name                           = "ingress-http"
+resource "azurerm_lb_rule" "ingress-http-ipv4" {
+  name                           = "ingress-http-ipv4"
   loadbalancer_id                = azurerm_lb.cluster.id
-  frontend_ip_configuration_name = "ingress"
+  frontend_ip_configuration_name = "ingress-ipv4"
   disable_outbound_snat          = true

   protocol      = "Tcp"
   frontend_port = 80
   backend_port  = 80
-  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
+  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv4.id]
   probe_id                 = azurerm_lb_probe.ingress.id
 }

-resource "azurerm_lb_rule" "ingress-https" {
-  name                           = "ingress-https"
+resource "azurerm_lb_rule" "ingress-https-ipv4" {
+  name                           = "ingress-https-ipv4"
   loadbalancer_id                = azurerm_lb.cluster.id
-  frontend_ip_configuration_name = "ingress"
+  frontend_ip_configuration_name = "ingress-ipv4"
   disable_outbound_snat          = true

   protocol      = "Tcp"
   frontend_port = 443
   backend_port  = 443
-  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
+  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv4.id]
   probe_id                 = azurerm_lb_probe.ingress.id
 }

-# Worker outbound TCP/UDP SNAT
-resource "azurerm_lb_outbound_rule" "worker-outbound" {
-  name            = "worker"
-  loadbalancer_id = azurerm_lb.cluster.id
-  frontend_ip_configuration {
-    name = "ingress"
-  }
-
-  protocol                = "All"
-  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
+resource "azurerm_lb_rule" "ingress-http-ipv6" {
+  name                           = "ingress-http-ipv6"
+  loadbalancer_id                = azurerm_lb.cluster.id
+  frontend_ip_configuration_name = "ingress-ipv6"
+  disable_outbound_snat          = true
+
+  protocol      = "Tcp"
+  frontend_port = 80
+  backend_port  = 80
+  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv6.id]
+  probe_id                 = azurerm_lb_probe.ingress.id
 }

+resource "azurerm_lb_rule" "ingress-https-ipv6" {
+  name                           = "ingress-https-ipv6"
+  loadbalancer_id                = azurerm_lb.cluster.id
+  frontend_ip_configuration_name = "ingress-ipv6"
+  disable_outbound_snat          = true
+
+  protocol      = "Tcp"
+  frontend_port = 443
+  backend_port  = 443
+  backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv6.id]
+  probe_id                 = azurerm_lb_probe.ingress.id
+}
+
+# Backend Address Pools
+
 # Address pool of controllers
 resource "azurerm_lb_backend_address_pool" "controller" {
   name = "controller"
@@ -109,8 +139,13 @@ resource "azurerm_lb_backend_address_pool" "controller" {
 }

 # Address pool of workers
-resource "azurerm_lb_backend_address_pool" "worker" {
-  name            = "worker"
+resource "azurerm_lb_backend_address_pool" "worker-ipv4" {
+  name            = "worker-ipv4"
+  loadbalancer_id = azurerm_lb.cluster.id
+}
+
+resource "azurerm_lb_backend_address_pool" "worker-ipv6" {
+  name            = "worker-ipv6"
   loadbalancer_id = azurerm_lb.cluster.id
 }
@@ -122,10 +157,8 @@ resource "azurerm_lb_probe" "apiserver" {
   loadbalancer_id = azurerm_lb.cluster.id
   protocol        = "Tcp"
   port            = 6443

   # unhealthy threshold
   number_of_probes    = 3
   interval_in_seconds = 5
 }
@@ -136,10 +169,29 @@ resource "azurerm_lb_probe" "ingress" {
   protocol     = "Http"
   port         = 10254
   request_path = "/healthz"

   # unhealthy threshold
   number_of_probes    = 3
   interval_in_seconds = 5
 }
+
+# Outbound SNAT
+
+resource "azurerm_lb_outbound_rule" "outbound-ipv4" {
+  name                    = "outbound-ipv4"
+  protocol                = "All"
+  loadbalancer_id         = azurerm_lb.cluster.id
+  backend_address_pool_id = azurerm_lb_backend_address_pool.worker-ipv4.id
+  frontend_ip_configuration {
+    name = "ingress-ipv4"
+  }
+}
+
+resource "azurerm_lb_outbound_rule" "outbound-ipv6" {
+  name                    = "outbound-ipv6"
+  protocol                = "All"
+  loadbalancer_id         = azurerm_lb.cluster.id
+  backend_address_pool_id = azurerm_lb_backend_address_pool.worker-ipv6.id
+  frontend_ip_configuration {
+    name = "ingress-ipv6"
+  }
+}
```
azure/fedora-coreos/kubernetes/locals.tf (new file, 6 lines)

```diff
@@ -0,0 +1,6 @@
+locals {
+  backend_address_pool_ids = {
+    ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
+    ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
+  }
+}
```
```diff
@@ -1,3 +1,21 @@
+locals {
+  # Subdivide the virtual network into subnets
+  # - controllers use netnum 0
+  # - workers use netnum 1
+  controller_subnets = {
+    ipv4 = [for i, cidr in var.network_cidr.ipv4 : cidrsubnet(cidr, 1, 0)]
+    ipv6 = [for i, cidr in var.network_cidr.ipv6 : cidrsubnet(cidr, 16, 0)]
+  }
+  worker_subnets = {
+    ipv4 = [for i, cidr in var.network_cidr.ipv4 : cidrsubnet(cidr, 1, 1)]
+    ipv6 = [for i, cidr in var.network_cidr.ipv6 : cidrsubnet(cidr, 16, 1)]
+  }
+  cluster_subnets = {
+    ipv4 = concat(local.controller_subnets.ipv4, local.worker_subnets.ipv4)
+    ipv6 = concat(local.controller_subnets.ipv6, local.worker_subnets.ipv6)
+  }
+}
+
 # Organize cluster into a resource group
 resource "azurerm_resource_group" "cluster" {
   name = var.cluster_name
```
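To make the `cidrsubnet()` arithmetic concrete: one new bit splits an IPv4 /16 into two /17 halves, while 16 new bits carve /64 subnets out of an IPv6 /48 (the subnet size Azure expects for IPv6). A worked example with illustrative CIDRs, not the module defaults:

```hcl
locals {
  # cidrsubnet(prefix, newbits, netnum)
  example_controller_ipv4 = cidrsubnet("10.0.0.0/16", 1, 0) # "10.0.0.0/17"
  example_worker_ipv4     = cidrsubnet("10.0.0.0/16", 1, 1) # "10.0.128.0/17"
  example_controller_ipv6 = cidrsubnet("fd00::/48", 16, 0)  # "fd00::/64"
  example_worker_ipv6     = cidrsubnet("fd00::/48", 16, 1)  # "fd00:0:0:1::/64"
}
```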
```diff
@@ -5,23 +23,30 @@ resource "azurerm_resource_group" "cluster" {
 }

 resource "azurerm_virtual_network" "network" {
+  name                = var.cluster_name
   resource_group_name = azurerm_resource_group.cluster.name
-
-  name          = var.cluster_name
-  location      = azurerm_resource_group.cluster.location
-  address_space = [var.host_cidr]
+  location            = azurerm_resource_group.cluster.location
+  address_space = concat(
+    var.network_cidr.ipv4,
+    var.network_cidr.ipv6
+  )
 }

-# Subnets - separate subnets for controller and workers because Azure
-# network security groups are based on IPv4 CIDR rather than instance
-# tags like GCP or security group membership like AWS
+# Subnets - separate subnets for controllers and workers because Azure
+# network security groups are oriented around address prefixes rather
+# than instance tags (GCP) or security group membership (AWS)

 resource "azurerm_subnet" "controller" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                 = "controller"
+  resource_group_name  = azurerm_resource_group.cluster.name
   virtual_network_name = azurerm_virtual_network.network.name
-  address_prefixes     = [cidrsubnet(var.host_cidr, 1, 0)]
+  address_prefixes = concat(
+    local.controller_subnets.ipv4,
+    local.controller_subnets.ipv6,
+  )
+  default_outbound_access_enabled = false
 }

 resource "azurerm_subnet_network_security_group_association" "controller" {
@@ -30,11 +55,14 @@ resource "azurerm_subnet_network_security_group_association" "controller" {
 }

 resource "azurerm_subnet" "worker" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                 = "worker"
+  resource_group_name  = azurerm_resource_group.cluster.name
   virtual_network_name = azurerm_virtual_network.network.name
-  address_prefixes     = [cidrsubnet(var.host_cidr, 1, 1)]
+  address_prefixes = concat(
+    local.worker_subnets.ipv4,
+    local.worker_subnets.ipv6,
+  )
+  default_outbound_access_enabled = false
 }

 resource "azurerm_subnet_network_security_group_association" "worker" {
```
```diff
@@ -10,6 +10,11 @@ output "ingress_static_ipv4" {
   description = "IPv4 address of the load balancer for distributing traffic to Ingress controllers"
 }

+output "ingress_static_ipv6" {
+  value       = azurerm_public_ip.ingress-ipv6.ip_address
+  description = "IPv6 address of the load balancer for distributing traffic to Ingress controllers"
+}
+
 # Outputs for worker pools

 output "region" {
@@ -51,12 +56,12 @@ output "worker_security_group_name" {
 output "controller_address_prefixes" {
   description = "Controller network subnet CIDR addresses (for source/destination)"
-  value       = azurerm_subnet.controller.address_prefixes
+  value       = local.controller_subnets
 }

 output "worker_address_prefixes" {
   description = "Worker network subnet CIDR addresses (for source/destination)"
-  value       = azurerm_subnet.worker.address_prefixes
+  value       = local.worker_subnets
 }

 # Outputs for custom load balancing
```
|
|||||||
value = azurerm_lb.cluster.id
|
value = azurerm_lb.cluster.id
|
||||||
}
|
}
|
||||||
|
|
||||||
output "backend_address_pool_id" {
|
output "backend_address_pool_ids" {
|
||||||
description = "ID of the worker backend address pool"
|
description = "IDs of the worker backend address pools"
|
||||||
value = azurerm_lb_backend_address_pool.worker.id
|
value = {
|
||||||
|
ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
|
||||||
|
ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
# Outputs for debug
|
# Outputs for debug
|
||||||
|
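Downstream, the object-typed output can be passed straight to a worker pool. A minimal sketch, assuming a worker pool module; the module path and remaining inputs are illustrative, not taken from this commit:

```hcl
module "workers" {
  source = "./workers" # illustrative path

  # previously a single string: backend_address_pool_id = module.cluster.backend_address_pool_id
  # now an object with ipv4/ipv6 lists, matching the new output above
  backend_address_pool_ids = module.cluster.backend_address_pool_ids

  # remaining required inputs omitted for brevity
}
```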
```diff
@@ -1,214 +1,223 @@
 # Controller security group
 resource "azurerm_network_security_group" "controller" {
+  name                = "${var.cluster_name}-controller"
   resource_group_name = azurerm_resource_group.cluster.name
-
-  name     = "${var.cluster_name}-controller"
-  location = azurerm_resource_group.cluster.location
+  location            = azurerm_resource_group.cluster.location
 }

 resource "azurerm_network_security_rule" "controller-icmp" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-icmp"
+  name                        = "allow-icmp-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "1995"
+  priority                    = 1995 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Icmp"
   source_port_range           = "*"
   destination_port_range      = "*"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-ssh" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-ssh"
+  name                        = "allow-ssh-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2000"
+  priority                    = 2000 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "22"
   source_address_prefix       = "*"
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-etcd" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-etcd"
+  name                        = "allow-etcd-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2005"
+  priority                    = 2005 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "2379-2380"
-  source_address_prefixes      = azurerm_subnet.controller.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.controller_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Allow Prometheus to scrape etcd metrics
 resource "azurerm_network_security_rule" "controller-etcd-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-etcd-metrics"
+  name                        = "allow-etcd-metrics-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2010"
+  priority                    = 2010 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "2381"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Allow Prometheus to scrape kube-proxy metrics
 resource "azurerm_network_security_rule" "controller-kube-proxy" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-kube-proxy-metrics"
+  name                        = "allow-kube-proxy-metrics-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2011"
+  priority                    = 2012 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "10249"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
 resource "azurerm_network_security_rule" "controller-kube-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-kube-metrics"
+  name                        = "allow-kube-metrics-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2012"
+  priority                    = 2014 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "10257-10259"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-apiserver"
+  name                        = "allow-apiserver-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2015"
+  priority                    = 2016 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "6443"
   source_address_prefix       = "*"
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-cilium-health" {
-  resource_group_name = azurerm_resource_group.cluster.name
-  count               = var.networking == "cilium" ? 1 : 0
+  for_each = var.networking == "cilium" ? local.controller_subnets : {}

-  name                        = "allow-cilium-health"
+  name                        = "allow-cilium-health-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2018"
+  priority                    = 2018 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "4240"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-cilium-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
-  count               = var.networking == "cilium" ? 1 : 0
+  for_each = var.networking == "cilium" ? local.controller_subnets : {}

-  name                        = "allow-cilium-metrics"
+  name                        = "allow-cilium-metrics-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2019"
+  priority                    = 2035 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "9962-9965"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-vxlan" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-vxlan"
+  name                        = "allow-vxlan-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2020"
+  priority                    = 2020 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Udp"
   source_port_range           = "*"
   destination_port_range      = "4789"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 resource "azurerm_network_security_rule" "controller-linux-vxlan" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-linux-vxlan"
+  name                        = "allow-linux-vxlan-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2021"
+  priority                    = 2022 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Udp"
   source_port_range           = "*"
   destination_port_range      = "8472"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-node-exporter"
+  name                        = "allow-node-exporter-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2025"
+  priority                    = 2025 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "9100"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Allow apiserver to access kubelet's for exec, log, port-forward
 resource "azurerm_network_security_rule" "controller-kubelet" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets

-  name                        = "allow-kubelet"
+  name                        = "allow-kubelet-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.controller.name
-  priority                    = "2030"
+  priority                    = 2030 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "10250"

   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }

 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
```
@ -247,182 +256,189 @@ resource "azurerm_network_security_rule" "controller-deny-all" {
|
|||||||
# Worker security group
|
# Worker security group
|
||||||
|
|
||||||
resource "azurerm_network_security_group" "worker" {
|
resource "azurerm_network_security_group" "worker" {
|
||||||
|
name = "${var.cluster_name}-worker"
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
|
location = azurerm_resource_group.cluster.location
|
||||||
name = "${var.cluster_name}-worker"
|
|
||||||
location = azurerm_resource_group.cluster.location
|
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-icmp" {
|
resource "azurerm_network_security_rule" "worker-icmp" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-icmp"
|
name = "allow-icmp-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "1995"
|
priority = 1995 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Icmp"
|
protocol = "Icmp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "*"
|
destination_port_range = "*"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-ssh" {
|
resource "azurerm_network_security_rule" "worker-ssh" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-ssh"
|
name = "allow-ssh-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2000"
|
priority = 2000 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "22"
|
destination_port_range = "22"
|
||||||
source_address_prefixes = azurerm_subnet.controller.address_prefixes
|
source_address_prefixes = local.controller_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-http" {
|
resource "azurerm_network_security_rule" "worker-http" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-http"
|
name = "allow-http-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2005"
|
priority = 2005 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "80"
|
destination_port_range = "80"
|
||||||
source_address_prefix = "*"
|
source_address_prefix = "*"
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-https" {
|
resource "azurerm_network_security_rule" "worker-https" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-https"
|
name = "allow-https-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2010"
|
priority = 2010 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "443"
|
destination_port_range = "443"
|
||||||
source_address_prefix = "*"
|
source_address_prefix = "*"
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-cilium-health" {
|
resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = var.networking == "cilium" ? local.worker_subnets : {}
|
||||||
count = var.networking == "cilium" ? 1 : 0
|
|
||||||
|
|
||||||
name = "allow-cilium-health"
|
name = "allow-cilium-health-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2013"
|
priority = 2012 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "4240"
|
destination_port_range = "4240"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = var.networking == "cilium" ? local.worker_subnets : {}
|
||||||
count = var.networking == "cilium" ? 1 : 0
|
|
||||||
|
|
||||||
name = "allow-cilium-metrics"
|
name = "allow-cilium-metrics-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2014"
|
priority = 2014 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "9962-9965"
|
destination_port_range = "9962-9965"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-vxlan" {
|
resource "azurerm_network_security_rule" "worker-vxlan" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-vxlan"
|
name = "allow-vxlan-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2015"
|
priority = 2016 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Udp"
|
protocol = "Udp"
|
||||||
```diff
   source_port_range            = "*"
   destination_port_range       = "4789"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "worker-linux-vxlan" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                         = "allow-linux-vxlan"
+  name                         = "allow-linux-vxlan-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.worker.name
-  priority                     = "2016"
+  priority                     = 2018 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "8472"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                         = "allow-node-exporter"
+  name                         = "allow-node-exporter-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.worker.name
-  priority                     = "2020"
+  priority                     = 2020 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "9100"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Allow Prometheus to scrape kube-proxy
 resource "azurerm_network_security_rule" "worker-kube-proxy" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                         = "allow-kube-proxy"
+  name                         = "allow-kube-proxy-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.worker.name
-  priority                     = "2024"
+  priority                     = 2024 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10249"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward
 resource "azurerm_network_security_rule" "worker-kubelet" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                         = "allow-kubelet"
+  name                         = "allow-kubelet-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.worker.name
-  priority                     = "2025"
+  priority                     = 2026 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
```
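Every rule in this file follows the same dual-stack scheme: `for_each` over a `{ ipv4, ipv6 }` map of subnet prefixes, a family suffix on the rule name, and a +1 priority offset for the IPv6 instance, since Azure requires rule priorities to be unique (100-4096) within a security group and a single rule cannot carry both IPv4 and IPv6 prefixes. A standalone sketch of the naming and priority arithmetic, with illustrative prefixes:

```hcl
locals {
  families = {
    ipv4 = ["10.0.128.0/17"]
    ipv6 = ["fd9a:d2f:b7dc:1::/64"]
  }

  # Yields: allow-example-ipv4 => 2040, allow-example-ipv6 => 2041
  example_rules = {
    for family, prefixes in local.families : "allow-example-${family}" => {
      priority = 2040 + (family == "ipv4" ? 0 : 1)
      prefixes = prefixes
    }
  }
}

output "example_rules" {
  value = local.example_rules
}
```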
```diff
@@ -18,7 +18,7 @@ resource "null_resource" "copy-controller-secrets" {
 
   connection {
     type    = "ssh"
-    host    = azurerm_public_ip.controllers.*.ip_address[count.index]
+    host    = azurerm_public_ip.controllers-ipv4[count.index].ip_address
     user    = "core"
     timeout = "15m"
   }
@@ -45,7 +45,7 @@ resource "null_resource" "bootstrap" {
 
   connection {
     type    = "ssh"
-    host    = azurerm_public_ip.controllers.*.ip_address[0]
+    host    = azurerm_public_ip.controllers-ipv4[0].ip_address
     user    = "core"
     timeout = "15m"
   }
```
```diff
@@ -94,10 +94,15 @@ variable "networking" {
   default = "cilium"
 }
 
-variable "host_cidr" {
-  type        = string
-  description = "CIDR IPv4 range to assign to instances"
-  default     = "10.0.0.0/16"
+variable "network_cidr" {
+  type = object({
+    ipv4 = list(string)
+    ipv6 = optional(list(string), ["fd9a:0d2f:b7dc::/48"])
+  })
+  description = "Virtual network CIDR ranges"
+  default = {
+    ipv4 = ["10.0.0.0/16"]
+  }
 }
 
 variable "pod_cidr" {
```
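Because the `ipv6` field uses `optional()` with a default (a Terraform 1.3+ feature), a caller that sets only `ipv4` still gets a unique-local IPv6 range filled in. A sketch of the effective value under the defaults above:

```hcl
# Hypothetical caller: only ipv4 is set.
module "cluster" {
  source = "./azure-cluster" # illustrative path

  # ...required cluster arguments elided...
  network_cidr = {
    ipv4 = ["10.0.0.0/16"]
  }
}

# Inside the module, Terraform fills the optional field, so
# var.network_cidr evaluates to:
#   {
#     ipv4 = ["10.0.0.0/16"]
#     ipv6 = ["fd9a:0d2f:b7dc::/48"]
#   }
```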
```diff
@@ -3,11 +3,11 @@ module "workers" {
   name = var.cluster_name
 
   # Azure
   resource_group_name      = azurerm_resource_group.cluster.name
   region                   = azurerm_resource_group.cluster.location
   subnet_id                = azurerm_subnet.worker.id
   security_group_id        = azurerm_network_security_group.worker.id
-  backend_address_pool_id  = azurerm_lb_backend_address_pool.worker.id
+  backend_address_pool_ids = local.backend_address_pool_ids
 
   worker_count = var.worker_count
   vm_type      = var.worker_type
@@ -25,9 +25,12 @@ variable "security_group_id" {
   description = "Must be set to the `worker_security_group_id` output by cluster"
 }
 
-variable "backend_address_pool_id" {
-  type        = string
-  description = "Must be set to the `worker_backend_address_pool_id` output by cluster"
+variable "backend_address_pool_ids" {
+  type = object({
+    ipv4 = list(string)
+    ipv6 = list(string)
+  })
+  description = "Must be set to the `backend_address_pool_ids` output by cluster"
 }
 
 # instances
```
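With the input now an object of lists, the cluster module can hand both pools to the internal workers module in one expression; the map it passes is defined in the new locals.tf later in this diff. A sketch of the wiring:

```hcl
module "workers" {
  source = "./workers" # illustrative path

  # ...other worker arguments elided...
  backend_address_pool_ids = {
    ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
    ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
  }
}
```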
```diff
@@ -4,16 +4,14 @@ locals {
 
 # Workers scale set
 resource "azurerm_linux_virtual_machine_scale_set" "workers" {
+  name                = "${var.name}-worker"
   resource_group_name = var.resource_group_name
-  name                = "${var.name}-worker"
-  location            = var.region
+  location            = var.region
   sku                 = var.vm_type
   instances           = var.worker_count
   # instance name prefix for instances in the set
   computer_name_prefix   = "${var.name}-worker"
   single_placement_group = false
-  custom_data            = base64encode(data.ct_config.worker.rendered)
 
   # storage
   source_image_id = var.os_image
@@ -22,13 +20,6 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
     caching              = "ReadWrite"
   }
 
-  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
-  admin_username = "core"
-  admin_ssh_key {
-    username   = "core"
-    public_key = var.azure_authorized_key
-  }
-
   # network
   network_interface {
     name = "nic0"
@@ -36,13 +27,33 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
     network_security_group_id = var.security_group_id
 
     ip_configuration {
-      name      = "ip0"
+      name      = "ipv4"
+      version   = "IPv4"
       primary   = true
       subnet_id = var.subnet_id
 
       # backend address pool to which the NIC should be added
-      load_balancer_backend_address_pool_ids = [var.backend_address_pool_id]
+      load_balancer_backend_address_pool_ids = var.backend_address_pool_ids.ipv4
     }
+    ip_configuration {
+      name      = "ipv6"
+      version   = "IPv6"
+      subnet_id = var.subnet_id
+
+      # backend address pool to which the NIC should be added
+      load_balancer_backend_address_pool_ids = var.backend_address_pool_ids.ipv6
+    }
   }
 
+  # boot
+  custom_data = base64encode(data.ct_config.worker.rendered)
+  boot_diagnostics {
+    # defaults to a managed storage account
+  }
+
+  # Azure requires an RSA admin_ssh_key
+  admin_username = "core"
+  admin_ssh_key {
+    username   = "core"
+    public_key = local.azure_authorized_key
+  }
 
   # lifecycle
```
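The scale set keeps an `admin_ssh_key` only because the Azure API demands one, and the new `local.azure_authorized_key` falls back to `var.ssh_authorized_key` when no Azure-specific key is given (see the locals hunk below). A hypothetical caller that keeps an ed25519 key for Ignition while satisfying Azure's RSA requirement:

```hcl
# Hypothetical module arguments; the key material shown is illustrative.
module "cluster" {
  source = "./azure-cluster" # illustrative path

  # ...required cluster arguments elided...
  ssh_authorized_key   = "ssh-ed25519 AAAA..." # provisioned onto nodes via Ignition
  azure_authorized_key = "ssh-rsa AAAA..."     # only to satisfy the Azure API
}
```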
```diff
@@ -50,22 +61,22 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
   # eviction policy may only be set when priority is Spot
   priority        = var.priority
   eviction_policy = var.priority == "Spot" ? "Delete" : null
+  termination_notification {
+    enabled = true
+  }
 }
 
 # Scale up or down to maintain desired number, tolerating deallocations.
 resource "azurerm_monitor_autoscale_setting" "workers" {
+  name                = "${var.name}-maintain-desired"
   resource_group_name = var.resource_group_name
-  name                = "${var.name}-maintain-desired"
-  location            = var.region
+  location            = var.region
 
   # autoscale
   enabled            = true
   target_resource_id = azurerm_linux_virtual_machine_scale_set.workers.id
 
   profile {
     name = "default"
 
     capacity {
       minimum = var.worker_count
       default = var.worker_count
```
```diff
@@ -1,19 +1,3 @@
-# Discrete DNS records for each controller's private IPv4 for etcd usage
-resource "azurerm_dns_a_record" "etcds" {
-  count               = var.controller_count
-  resource_group_name = var.dns_zone_group
-
-  # DNS Zone name where record should be created
-  zone_name = var.dns_zone
-
-  # DNS record
-  name = format("%s-etcd%d", var.cluster_name, count.index)
-  ttl  = 300
-
-  # private IPv4 address for etcd
-  records = [azurerm_network_interface.controllers.*.private_ip_address[count.index]]
-}
-
 locals {
   # Container Linux derivative
   # flatcar-stable -> Flatcar Linux Stable
@@ -28,11 +12,26 @@ locals {
   azure_authorized_key = var.azure_authorized_key == "" ? var.ssh_authorized_key : var.azure_authorized_key
 }
 
+# Discrete DNS records for each controller's private IPv4 for etcd usage
+resource "azurerm_dns_a_record" "etcds" {
+  count               = var.controller_count
+  resource_group_name = var.dns_zone_group
+
+  # DNS Zone name where record should be created
+  zone_name = var.dns_zone
+
+  # DNS record
+  name = format("%s-etcd%d", var.cluster_name, count.index)
+  ttl  = 300
+
+  # private IPv4 address for etcd
+  records = [azurerm_network_interface.controllers[count.index].private_ip_address]
+}
```
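Besides moving below the locals, the restored record swaps the legacy splat-then-index expression for direct indexing; both evaluate to the same controller's private IPv4 address, but the indexed form avoids materializing a list of every controller's address first:

```hcl
# Equivalent expressions for controller N's private IPv4 address:
#   azurerm_network_interface.controllers.*.private_ip_address[count.index]  # legacy splat: build a list, then index it
#   azurerm_network_interface.controllers[count.index].private_ip_address    # direct: index the resource, read the attribute
```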
```diff
 
 # Controller availability set to spread controllers
 resource "azurerm_availability_set" "controllers" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                         = "${var.cluster_name}-controllers"
+  resource_group_name          = azurerm_resource_group.cluster.name
   location                     = var.region
   platform_fault_domain_count  = 2
   platform_update_domain_count = 4
@@ -41,18 +40,13 @@ resource "azurerm_availability_set" "controllers" {
 
 # Controller instances
 resource "azurerm_linux_virtual_machine" "controllers" {
   count               = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                = "${var.cluster_name}-controller-${count.index}"
+  resource_group_name = azurerm_resource_group.cluster.name
   location            = var.region
   availability_set_id = azurerm_availability_set.controllers.id
   size                = var.controller_type
-  custom_data         = base64encode(data.ct_config.controllers.*.rendered[count.index])
-  boot_diagnostics {
-    # defaults to a managed storage account
-  }
 
   # storage
   os_disk {
@@ -84,7 +78,13 @@ resource "azurerm_linux_virtual_machine" "controllers" {
     azurerm_network_interface.controllers[count.index].id
   ]
 
-  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
+  # boot
+  custom_data = base64encode(data.ct_config.controllers[count.index].rendered)
+  boot_diagnostics {
+    # defaults to a managed storage account
+  }
+
+  # Azure requires an RSA admin_ssh_key
   admin_username = "core"
   admin_ssh_key {
     username = "core"
@@ -99,31 +99,52 @@ resource "azurerm_linux_virtual_machine" "controllers" {
   }
 }
 
-# Controller public IPv4 addresses
-resource "azurerm_public_ip" "controllers" {
+# Controller node public IPv4 addresses
+resource "azurerm_public_ip" "controllers-ipv4" {
   count               = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-controller-${count.index}"
-  location          = azurerm_resource_group.cluster.location
-  sku               = "Standard"
-  allocation_method = "Static"
+  name                = "${var.cluster_name}-controller-${count.index}-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location
+  ip_version          = "IPv4"
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }
 
-# Controller NICs with public and private IPv4
-resource "azurerm_network_interface" "controllers" {
+# Controller node public IPv6 addresses
+resource "azurerm_public_ip" "controllers-ipv6" {
   count               = var.controller_count
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name     = "${var.cluster_name}-controller-${count.index}"
-  location = azurerm_resource_group.cluster.location
+  name                = "${var.cluster_name}-controller-${count.index}-ipv6"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location
+  ip_version          = "IPv6"
+  sku                 = "Standard"
+  allocation_method   = "Static"
+}
+
+# Controllers' network interfaces
+resource "azurerm_network_interface" "controllers" {
+  count               = var.controller_count
+  name                = "${var.cluster_name}-controller-${count.index}"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = azurerm_resource_group.cluster.location
 
   ip_configuration {
-    name                          = "ip0"
+    name                          = "ipv4"
+    primary                       = true
     subnet_id                     = azurerm_subnet.controller.id
     private_ip_address_allocation = "Dynamic"
-    # instance public IPv4
-    public_ip_address_id = azurerm_public_ip.controllers.*.id[count.index]
+    private_ip_address_version    = "IPv4"
+    public_ip_address_id          = azurerm_public_ip.controllers-ipv4[count.index].id
+  }
+  ip_configuration {
+    name                          = "ipv6"
+    subnet_id                     = azurerm_subnet.controller.id
+    private_ip_address_allocation = "Dynamic"
+    private_ip_address_version    = "IPv6"
+    public_ip_address_id          = azurerm_public_ip.controllers-ipv6[count.index].id
   }
 }
 
@@ -140,7 +161,7 @@ resource "azurerm_network_interface_backend_address_pool_association" "controlle
   count = var.controller_count
 
   network_interface_id    = azurerm_network_interface.controllers[count.index].id
-  ip_configuration_name   = "ip0"
+  ip_configuration_name   = "ipv4"
   backend_address_pool_id = azurerm_lb_backend_address_pool.controller.id
 }
 
```
```diff
@@ -15,31 +15,39 @@ resource "azurerm_dns_a_record" "apiserver" {
 
 # Static IPv4 address for the apiserver frontend
 resource "azurerm_public_ip" "apiserver-ipv4" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-apiserver-ipv4"
-  location          = var.region
-  sku               = "Standard"
-  allocation_method = "Static"
+  name                = "${var.cluster_name}-apiserver-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }
 
 # Static IPv4 address for the ingress frontend
 resource "azurerm_public_ip" "ingress-ipv4" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name              = "${var.cluster_name}-ingress-ipv4"
-  location          = var.region
-  sku               = "Standard"
-  allocation_method = "Static"
+  name                = "${var.cluster_name}-ingress-ipv4"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  ip_version          = "IPv4"
+  sku                 = "Standard"
+  allocation_method   = "Static"
+}
+
+# Static IPv6 address for the ingress frontend
+resource "azurerm_public_ip" "ingress-ipv6" {
+  name                = "${var.cluster_name}-ingress-ipv6"
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  ip_version          = "IPv6"
+  sku                 = "Standard"
+  allocation_method   = "Static"
 }
 
 # Network Load Balancer for apiservers and ingress
 resource "azurerm_lb" "cluster" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
-  name     = var.cluster_name
-  location = var.region
-  sku      = "Standard"
+  name                = var.cluster_name
+  resource_group_name = azurerm_resource_group.cluster.name
+  location            = var.region
+  sku                 = "Standard"
 
   frontend_ip_configuration {
     name = "apiserver"
@@ -47,15 +55,21 @@ resource "azurerm_lb" "cluster" {
   }
 
   frontend_ip_configuration {
-    name                 = "ingress"
+    name                 = "ingress-ipv4"
     public_ip_address_id = azurerm_public_ip.ingress-ipv4.id
   }
 
+  frontend_ip_configuration {
+    name                 = "ingress-ipv6"
+    public_ip_address_id = azurerm_public_ip.ingress-ipv6.id
+  }
 }
 
```
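With a public IPv6 frontend on the load balancer, serving dual-stack Ingress becomes mostly a DNS exercise. A sketch, assuming a hypothetical zone, using `azurerm_dns_aaaa_record`, the provider's AAAA analog of the A record used for the apiserver:

```hcl
# Hypothetical AAAA record pointing a wildcard apps name at the IPv6 frontend.
resource "azurerm_dns_aaaa_record" "ingress" {
  resource_group_name = azurerm_resource_group.cluster.name
  zone_name           = "example.com" # illustrative zone
  name                = "*.apps"
  ttl                 = 300
  records             = [azurerm_public_ip.ingress-ipv6.ip_address]
}
```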
resource "azurerm_lb_rule" "apiserver" {
|
resource "azurerm_lb_rule" "apiserver" {
|
||||||
name = "apiserver"
|
name = "apiserver"
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
frontend_ip_configuration_name = "apiserver"
|
frontend_ip_configuration_name = "apiserver"
|
||||||
|
disable_outbound_snat = true
|
||||||
|
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
frontend_port = 6443
|
frontend_port = 6443
|
||||||
@ -64,53 +78,74 @@ resource "azurerm_lb_rule" "apiserver" {
|
|||||||
probe_id = azurerm_lb_probe.apiserver.id
|
probe_id = azurerm_lb_probe.apiserver.id
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_lb_rule" "ingress-http" {
|
resource "azurerm_lb_rule" "ingress-http-ipv4" {
|
||||||
name = "ingress-http"
|
name = "ingress-http-ipv4"
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
frontend_ip_configuration_name = "ingress"
|
frontend_ip_configuration_name = "ingress-ipv4"
|
||||||
disable_outbound_snat = true
|
disable_outbound_snat = true
|
||||||
|
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
frontend_port = 80
|
frontend_port = 80
|
||||||
backend_port = 80
|
backend_port = 80
|
||||||
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
|
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv4.id]
|
||||||
probe_id = azurerm_lb_probe.ingress.id
|
probe_id = azurerm_lb_probe.ingress.id
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_lb_rule" "ingress-https" {
|
resource "azurerm_lb_rule" "ingress-https-ipv4" {
|
||||||
name = "ingress-https"
|
name = "ingress-https-ipv4"
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
frontend_ip_configuration_name = "ingress"
|
frontend_ip_configuration_name = "ingress-ipv4"
|
||||||
disable_outbound_snat = true
|
disable_outbound_snat = true
|
||||||
|
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
frontend_port = 443
|
frontend_port = 443
|
||||||
backend_port = 443
|
backend_port = 443
|
||||||
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker.id]
|
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv4.id]
|
||||||
probe_id = azurerm_lb_probe.ingress.id
|
probe_id = azurerm_lb_probe.ingress.id
|
||||||
}
|
}
|
||||||
|
|
||||||
# Worker outbound TCP/UDP SNAT
|
resource "azurerm_lb_rule" "ingress-http-ipv6" {
|
||||||
resource "azurerm_lb_outbound_rule" "worker-outbound" {
|
name = "ingress-http-ipv6"
|
||||||
name = "worker"
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
frontend_ip_configuration_name = "ingress-ipv6"
|
||||||
frontend_ip_configuration {
|
disable_outbound_snat = true
|
||||||
name = "ingress"
|
|
||||||
}
|
|
||||||
|
|
||||||
protocol = "All"
|
protocol = "Tcp"
|
||||||
backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
|
frontend_port = 80
|
||||||
|
backend_port = 80
|
||||||
|
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv6.id]
|
||||||
|
probe_id = azurerm_lb_probe.ingress.id
|
||||||
}
|
}
|
||||||
|
|
||||||
|
resource "azurerm_lb_rule" "ingress-https-ipv6" {
|
||||||
|
name = "ingress-https-ipv6"
|
||||||
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
|
frontend_ip_configuration_name = "ingress-ipv6"
|
||||||
|
disable_outbound_snat = true
|
||||||
|
|
||||||
|
protocol = "Tcp"
|
||||||
|
frontend_port = 443
|
||||||
|
backend_port = 443
|
||||||
|
backend_address_pool_ids = [azurerm_lb_backend_address_pool.worker-ipv6.id]
|
||||||
|
probe_id = azurerm_lb_probe.ingress.id
|
||||||
|
}
|
||||||
|
|
||||||
|
# Backend Address Pools
|
||||||
|
|
||||||
# Address pool of controllers
|
# Address pool of controllers
|
||||||
resource "azurerm_lb_backend_address_pool" "controller" {
|
resource "azurerm_lb_backend_address_pool" "controller" {
|
||||||
name = "controller"
|
name = "controller"
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
}
|
}
|
||||||
|
|
||||||
# Address pool of workers
|
# Address pools for workers
|
||||||
resource "azurerm_lb_backend_address_pool" "worker" {
|
resource "azurerm_lb_backend_address_pool" "worker-ipv4" {
|
||||||
name = "worker"
|
name = "worker-ipv4"
|
||||||
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
|
}
|
||||||
|
|
||||||
|
resource "azurerm_lb_backend_address_pool" "worker-ipv6" {
|
||||||
|
name = "worker-ipv6"
|
||||||
loadbalancer_id = azurerm_lb.cluster.id
|
loadbalancer_id = azurerm_lb.cluster.id
|
||||||
}
|
}
|
||||||
|
|
||||||
```diff
@@ -122,10 +157,8 @@ resource "azurerm_lb_probe" "apiserver" {
   loadbalancer_id = azurerm_lb.cluster.id
   protocol        = "Tcp"
   port            = 6443
-
   # unhealthy threshold
   number_of_probes = 3
-
   interval_in_seconds = 5
 }
 
@@ -136,10 +169,29 @@ resource "azurerm_lb_probe" "ingress" {
   protocol     = "Http"
   port         = 10254
   request_path = "/healthz"
-
   # unhealthy threshold
   number_of_probes = 3
-
   interval_in_seconds = 5
 }
 
+# Outbound SNAT
+
+resource "azurerm_lb_outbound_rule" "outbound-ipv4" {
+  name                    = "outbound-ipv4"
+  protocol                = "All"
+  loadbalancer_id         = azurerm_lb.cluster.id
+  backend_address_pool_id = azurerm_lb_backend_address_pool.worker-ipv4.id
+  frontend_ip_configuration {
+    name = "ingress-ipv4"
+  }
+}
+
+resource "azurerm_lb_outbound_rule" "outbound-ipv6" {
+  name                    = "outbound-ipv6"
+  protocol                = "All"
+  loadbalancer_id         = azurerm_lb.cluster.id
+  backend_address_pool_id = azurerm_lb_backend_address_pool.worker-ipv6.id
+  frontend_ip_configuration {
+    name = "ingress-ipv6"
+  }
+}
```
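These outbound rules are the counterpart of the `disable_outbound_snat = true` flags on the load-balancing rules above: once a frontend IP is claimed by an explicit outbound rule, the implicit SNAT that LB rules would otherwise provide has to be switched off, leaving SNAT port allocation per address family to the outbound rules. If tuning were ever needed, the resource also accepts an explicit port allocation (not part of this commit):

```hcl
# Illustrative only: pin SNAT ports per backend instance instead of
# letting Azure size the allocation automatically.
resource "azurerm_lb_outbound_rule" "outbound-ipv4-tuned" {
  name                     = "outbound-ipv4"
  protocol                 = "All"
  loadbalancer_id          = azurerm_lb.cluster.id
  backend_address_pool_id  = azurerm_lb_backend_address_pool.worker-ipv4.id
  allocated_outbound_ports = 1024 # hypothetical value

  frontend_ip_configuration {
    name = "ingress-ipv4"
  }
}
```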
azure/flatcar-linux/kubernetes/locals.tf (new file, 6 lines)

```diff
@@ -0,0 +1,6 @@
+locals {
+  backend_address_pool_ids = {
+    ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
+    ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
+  }
+}
```
```diff
@@ -1,3 +1,21 @@
+locals {
+  # Subdivide the virtual network into subnets
+  # - controllers use netnum 0
+  # - workers use netnum 1
+  controller_subnets = {
+    ipv4 = [for i, cidr in var.network_cidr.ipv4 : cidrsubnet(cidr, 1, 0)]
+    ipv6 = [for i, cidr in var.network_cidr.ipv6 : cidrsubnet(cidr, 16, 0)]
+  }
+  worker_subnets = {
+    ipv4 = [for i, cidr in var.network_cidr.ipv4 : cidrsubnet(cidr, 1, 1)]
+    ipv6 = [for i, cidr in var.network_cidr.ipv6 : cidrsubnet(cidr, 16, 1)]
+  }
+  cluster_subnets = {
+    ipv4 = concat(local.controller_subnets.ipv4, local.worker_subnets.ipv4)
+    ipv6 = concat(local.controller_subnets.ipv6, local.worker_subnets.ipv6)
+  }
+}
+
 # Organize cluster into a resource group
 resource "azurerm_resource_group" "cluster" {
   name = var.cluster_name
```
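`cidrsubnet(prefix, newbits, netnum)` extends the prefix by `newbits` bits and selects the `netnum`-th subnet, so with the default CIDRs the split works out to halves of the IPv4 range and /64s carved out of the IPv6 /48. Evaluating the defaults (addresses shown in the normalized form Terraform prints):

```hcl
# cidrsubnet(prefix, newbits, netnum) with the module defaults:
#
#   cidrsubnet("10.0.0.0/16", 1, 0)          = "10.0.0.0/17"          # controllers
#   cidrsubnet("10.0.0.0/16", 1, 1)          = "10.0.128.0/17"        # workers
#   cidrsubnet("fd9a:0d2f:b7dc::/48", 16, 0) = "fd9a:d2f:b7dc::/64"   # controllers
#   cidrsubnet("fd9a:0d2f:b7dc::/48", 16, 1) = "fd9a:d2f:b7dc:1::/64" # workers
locals {
  controller_ipv6 = cidrsubnet("fd9a:0d2f:b7dc::/48", 16, 0)
  worker_ipv6     = cidrsubnet("fd9a:0d2f:b7dc::/48", 16, 1)
}
```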
```diff
@@ -5,23 +23,28 @@ resource "azurerm_resource_group" "cluster" {
 }
 
 resource "azurerm_virtual_network" "network" {
+  name                = var.cluster_name
   resource_group_name = azurerm_resource_group.cluster.name
-  name                = var.cluster_name
-  location            = azurerm_resource_group.cluster.location
-  address_space       = [var.host_cidr]
+  location            = azurerm_resource_group.cluster.location
+  address_space = concat(
+    var.network_cidr.ipv4,
+    var.network_cidr.ipv6
+  )
 }
 
-# Subnets - separate subnets for controller and workers because Azure
-# network security groups are based on IPv4 CIDR rather than instance
-# tags like GCP or security group membership like AWS
+# Subnets - separate subnets for controllers and workers because Azure
+# network security groups are oriented around address prefixes rather
+# than instance tags (GCP) or security group membership (AWS)
 
 resource "azurerm_subnet" "controller" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                 = "controller"
+  resource_group_name  = azurerm_resource_group.cluster.name
   virtual_network_name = azurerm_virtual_network.network.name
-  address_prefixes     = [cidrsubnet(var.host_cidr, 1, 0)]
+  address_prefixes = concat(
+    local.controller_subnets.ipv4,
+    local.controller_subnets.ipv6,
+  )
+  default_outbound_access_enabled = false
 }
 
 resource "azurerm_subnet_network_security_group_association" "controller" {
@@ -30,11 +53,14 @@ resource "azurerm_subnet_network_security_group_association" "controller" {
 }
 
 resource "azurerm_subnet" "worker" {
-  resource_group_name = azurerm_resource_group.cluster.name
-
   name                 = "worker"
+  resource_group_name  = azurerm_resource_group.cluster.name
   virtual_network_name = azurerm_virtual_network.network.name
-  address_prefixes     = [cidrsubnet(var.host_cidr, 1, 1)]
+  address_prefixes = concat(
+    local.worker_subnets.ipv4,
+    local.worker_subnets.ipv6,
+  )
+  default_outbound_access_enabled = false
 }
 
 resource "azurerm_subnet_network_security_group_association" "worker" {
```
```diff
@@ -10,6 +10,11 @@ output "ingress_static_ipv4" {
   description = "IPv4 address of the load balancer for distributing traffic to Ingress controllers"
 }
 
+output "ingress_static_ipv6" {
+  value       = azurerm_public_ip.ingress-ipv6.ip_address
+  description = "IPv6 address of the load balancer for distributing traffic to Ingress controllers"
+}
+
 # Outputs for worker pools
 
 output "region" {
@@ -51,12 +56,12 @@ output "worker_security_group_name" {
 
 output "controller_address_prefixes" {
   description = "Controller network subnet CIDR addresses (for source/destination)"
-  value       = azurerm_subnet.controller.address_prefixes
+  value       = local.controller_subnets
 }
 
 output "worker_address_prefixes" {
   description = "Worker network subnet CIDR addresses (for source/destination)"
-  value       = azurerm_subnet.worker.address_prefixes
+  value       = local.worker_subnets
 }
 
 # Outputs for custom load balancing
@@ -66,9 +71,12 @@ output "loadbalancer_id" {
   value = azurerm_lb.cluster.id
 }
 
-output "backend_address_pool_id" {
-  description = "ID of the worker backend address pool"
-  value       = azurerm_lb_backend_address_pool.worker.id
+output "backend_address_pool_ids" {
+  description = "IDs of the worker backend address pools"
+  value = {
+    ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
+    ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
+  }
 }
 
 # Outputs for debug
```
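For out-of-tree worker pools, the renamed output is shaped to pass straight through to the workers module's matching input. A sketch, assuming a cluster module instance named `cluster`:

```hcl
# Hypothetical worker pool consuming the cluster's new object-shaped output.
module "worker-pool" {
  source = "./workers" # illustrative path

  # ...name, subnet, security group, and instance arguments elided...
  backend_address_pool_ids = module.cluster.backend_address_pool_ids
}
```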
```diff
@@ -1,214 +1,223 @@
 # Controller security group
 
 resource "azurerm_network_security_group" "controller" {
+  name                = "${var.cluster_name}-controller"
   resource_group_name = azurerm_resource_group.cluster.name
-
-  name     = "${var.cluster_name}-controller"
-  location = azurerm_resource_group.cluster.location
+  location            = azurerm_resource_group.cluster.location
 }
 
 resource "azurerm_network_security_rule" "controller-icmp" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-icmp"
+  name                         = "allow-icmp-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "1995"
+  priority                     = 1995 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Icmp"
   source_port_range            = "*"
   destination_port_range       = "*"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-ssh" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-ssh"
+  name                         = "allow-ssh-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2000"
+  priority                     = 2000 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "22"
   source_address_prefix        = "*"
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-etcd" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-etcd"
+  name                         = "allow-etcd-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2005"
+  priority                     = 2005 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "2379-2380"
-  source_address_prefixes      = azurerm_subnet.controller.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.controller_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Allow Prometheus to scrape etcd metrics
 resource "azurerm_network_security_rule" "controller-etcd-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-etcd-metrics"
+  name                         = "allow-etcd-metrics-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2010"
+  priority                     = 2010 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "2381"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Allow Prometheus to scrape kube-proxy metrics
 resource "azurerm_network_security_rule" "controller-kube-proxy" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-kube-proxy-metrics"
+  name                         = "allow-kube-proxy-metrics-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2011"
+  priority                     = 2012 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10249"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Allow Prometheus to scrape kube-scheduler and kube-controller-manager metrics
 resource "azurerm_network_security_rule" "controller-kube-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-kube-metrics"
+  name                         = "allow-kube-metrics-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2012"
+  priority                     = 2014 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10257-10259"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-apiserver" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-apiserver"
+  name                         = "allow-apiserver-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2015"
+  priority                     = 2016 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "6443"
   source_address_prefix        = "*"
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-cilium-health" {
-  resource_group_name = azurerm_resource_group.cluster.name
-  count               = var.networking == "cilium" ? 1 : 0
+  for_each = var.networking == "cilium" ? local.controller_subnets : {}
 
-  name                         = "allow-cilium-health"
+  name                         = "allow-cilium-health-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2018"
+  priority                     = 2018 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "4240"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-cilium-metrics" {
-  resource_group_name = azurerm_resource_group.cluster.name
-  count               = var.networking == "cilium" ? 1 : 0
+  for_each = var.networking == "cilium" ? local.controller_subnets : {}
 
-  name                         = "allow-cilium-metrics"
+  name                         = "allow-cilium-metrics-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2019"
+  priority                     = 2035 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "9962-9965"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-vxlan" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-vxlan"
+  name                         = "allow-vxlan-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2020"
+  priority                     = 2020 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "4789"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 resource "azurerm_network_security_rule" "controller-linux-vxlan" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-linux-vxlan"
+  name                         = "allow-linux-vxlan-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2021"
+  priority                     = 2022 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Udp"
   source_port_range            = "*"
   destination_port_range       = "8472"
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "controller-node-exporter" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-node-exporter"
+  name                         = "allow-node-exporter-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2025"
+  priority                     = 2025 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "9100"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward
 resource "azurerm_network_security_rule" "controller-kubelet" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.controller_subnets
 
-  name                         = "allow-kubelet"
+  name                         = "allow-kubelet-${each.key}"
+  resource_group_name          = azurerm_resource_group.cluster.name
   network_security_group_name  = azurerm_network_security_group.controller.name
-  priority                     = "2030"
+  priority                     = 2030 + (each.key == "ipv4" ? 0 : 1)
   access                       = "Allow"
   direction                    = "Inbound"
   protocol                     = "Tcp"
   source_port_range            = "*"
   destination_port_range       = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.controller.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.controller_subnets[each.key]
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
```
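The two Cilium-only rules also switch from a `count` toggle to a conditional `for_each` over the subnet map: an empty map when another CNI is selected means zero rule instances, the same effect as `count = 0`, but one instance per address family when Cilium is enabled. A standalone sketch of the toggle:

```hcl
variable "networking" {
  type    = string
  default = "cilium"
}

locals {
  controller_subnets = {
    ipv4 = ["10.0.0.0/17"]
    ipv6 = ["fd9a:d2f:b7dc::/64"]
  }
  # {} when another CNI is selected -> dependent resources get no instances;
  # otherwise one instance each for keys "ipv4" and "ipv6".
  cilium_rule_subnets = var.networking == "cilium" ? local.controller_subnets : {}
}
```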
@ -247,182 +256,189 @@ resource "azurerm_network_security_rule" "controller-deny-all" {
|
|||||||
# Worker security group
|
# Worker security group
|
||||||
|
|
||||||
resource "azurerm_network_security_group" "worker" {
|
resource "azurerm_network_security_group" "worker" {
|
||||||
|
name = "${var.cluster_name}-worker"
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
|
location = azurerm_resource_group.cluster.location
|
||||||
name = "${var.cluster_name}-worker"
|
|
||||||
location = azurerm_resource_group.cluster.location
|
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-icmp" {
|
resource "azurerm_network_security_rule" "worker-icmp" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-icmp"
|
name = "allow-icmp-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "1995"
|
priority = 1995 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Icmp"
|
protocol = "Icmp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "*"
|
destination_port_range = "*"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-ssh" {
|
resource "azurerm_network_security_rule" "worker-ssh" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-ssh"
|
name = "allow-ssh-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2000"
|
priority = 2000 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "22"
|
destination_port_range = "22"
|
||||||
source_address_prefixes = azurerm_subnet.controller.address_prefixes
|
source_address_prefixes = local.controller_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-http" {
|
resource "azurerm_network_security_rule" "worker-http" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-http"
|
name = "allow-http-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2005"
|
priority = 2005 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "80"
|
destination_port_range = "80"
|
||||||
source_address_prefix = "*"
|
source_address_prefix = "*"
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-https" {
|
resource "azurerm_network_security_rule" "worker-https" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-https"
|
name = "allow-https-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2010"
|
priority = 2010 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "443"
|
destination_port_range = "443"
|
||||||
source_address_prefix = "*"
|
source_address_prefix = "*"
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-cilium-health" {
|
resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = var.networking == "cilium" ? local.worker_subnets : {}
|
||||||
count = var.networking == "cilium" ? 1 : 0
|
|
||||||
|
|
||||||
name = "allow-cilium-health"
|
name = "allow-cilium-health-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2013"
|
priority = 2012 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "4240"
|
destination_port_range = "4240"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = var.networking == "cilium" ? local.worker_subnets : {}
|
||||||
count = var.networking == "cilium" ? 1 : 0
|
|
||||||
|
|
||||||
name = "allow-cilium-metrics"
|
name = "allow-cilium-metrics-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2014"
|
priority = 2014 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Tcp"
|
protocol = "Tcp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "9962-9965"
|
destination_port_range = "9962-9965"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-vxlan" {
|
resource "azurerm_network_security_rule" "worker-vxlan" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-vxlan"
|
name = "allow-vxlan-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2015"
|
priority = 2016 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Udp"
|
protocol = "Udp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "4789"
|
destination_port_range = "4789"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
resource "azurerm_network_security_rule" "worker-linux-vxlan" {
|
resource "azurerm_network_security_rule" "worker-linux-vxlan" {
|
||||||
resource_group_name = azurerm_resource_group.cluster.name
|
for_each = local.worker_subnets
|
||||||
|
|
||||||
name = "allow-linux-vxlan"
|
name = "allow-linux-vxlan-${each.key}"
|
||||||
|
resource_group_name = azurerm_resource_group.cluster.name
|
||||||
network_security_group_name = azurerm_network_security_group.worker.name
|
network_security_group_name = azurerm_network_security_group.worker.name
|
||||||
priority = "2016"
|
priority = 2018 + (each.key == "ipv4" ? 0 : 1)
|
||||||
access = "Allow"
|
access = "Allow"
|
||||||
direction = "Inbound"
|
direction = "Inbound"
|
||||||
protocol = "Udp"
|
protocol = "Udp"
|
||||||
source_port_range = "*"
|
source_port_range = "*"
|
||||||
destination_port_range = "8472"
|
destination_port_range = "8472"
|
||||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
source_address_prefixes = local.cluster_subnets[each.key]
|
||||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
destination_address_prefixes = local.worker_subnets[each.key]
|
||||||
}
|
}
|
||||||
|
|
||||||
 # Allow Prometheus to scrape node-exporter daemonset
 resource "azurerm_network_security_rule" "worker-node-exporter" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                        = "allow-node-exporter"
+  name                        = "allow-node-exporter-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.worker.name
-  priority                    = "2020"
+  priority                    = 2020 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "9100"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Allow Prometheus to scrape kube-proxy
 resource "azurerm_network_security_rule" "worker-kube-proxy" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                        = "allow-kube-proxy"
+  name                        = "allow-kube-proxy-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.worker.name
-  priority                    = "2024"
+  priority                    = 2024 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "10249"
-  source_address_prefixes      = azurerm_subnet.worker.address_prefixes
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.worker_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Allow apiserver to access kubelet's for exec, log, port-forward
 resource "azurerm_network_security_rule" "worker-kubelet" {
-  resource_group_name = azurerm_resource_group.cluster.name
+  for_each = local.worker_subnets
 
-  name                        = "allow-kubelet"
+  name                        = "allow-kubelet-${each.key}"
+  resource_group_name         = azurerm_resource_group.cluster.name
   network_security_group_name = azurerm_network_security_group.worker.name
-  priority                    = "2025"
+  priority                    = 2026 + (each.key == "ipv4" ? 0 : 1)
   access                      = "Allow"
   direction                   = "Inbound"
   protocol                    = "Tcp"
   source_port_range           = "*"
   destination_port_range      = "10250"
 
   # allow Prometheus to scrape kubelet metrics too
-  source_address_prefixes      = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
-  destination_address_prefixes = azurerm_subnet.worker.address_prefixes
+  source_address_prefixes      = local.cluster_subnets[each.key]
+  destination_address_prefixes = local.worker_subnets[each.key]
 }
 
 # Override Azure AllowVNetInBound and AllowAzureLoadBalancerInBound
 
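The rules above iterate over `local.controller_subnets`, `local.worker_subnets`, and `local.cluster_subnets`, staggering priorities so each address family's rule gets a unique value. Those locals are defined outside these hunks; what follows is only a plausible sketch, assuming each map splits subnet prefixes by family (the names match the references above, but the exact definitions are not part of this diff):

```tf
locals {
  # Split dual-stack subnet prefixes into per-family lists keyed by
  # "ipv4" / "ipv6", so the security rules can for_each over families.
  controller_subnets = {
    ipv4 = [for p in azurerm_subnet.controller.address_prefixes : p if !strcontains(p, ":")]
    ipv6 = [for p in azurerm_subnet.controller.address_prefixes : p if strcontains(p, ":")]
  }
  worker_subnets = {
    ipv4 = [for p in azurerm_subnet.worker.address_prefixes : p if !strcontains(p, ":")]
    ipv6 = [for p in azurerm_subnet.worker.address_prefixes : p if strcontains(p, ":")]
  }
  # Union of controller and worker prefixes, per family.
  cluster_subnets = {
    for family, prefixes in local.controller_subnets :
    family => concat(prefixes, local.worker_subnets[family])
  }
}
```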
@@ -18,7 +18,7 @@ resource "null_resource" "copy-controller-secrets" {
 
   connection {
     type    = "ssh"
-    host    = azurerm_public_ip.controllers.*.ip_address[count.index]
+    host    = azurerm_public_ip.controllers-ipv4[count.index].ip_address
     user    = "core"
     timeout = "15m"
   }

@@ -45,7 +45,7 @@ resource "null_resource" "bootstrap" {
 
   connection {
     type    = "ssh"
-    host    = azurerm_public_ip.controllers.*.ip_address[0]
+    host    = azurerm_public_ip.controllers-ipv4[0].ip_address
     user    = "core"
     timeout = "15m"
   }
@@ -100,10 +100,15 @@ variable "networking" {
   default     = "cilium"
 }
 
-variable "host_cidr" {
-  type        = string
-  description = "CIDR IPv4 range to assign to instances"
-  default     = "10.0.0.0/16"
+variable "network_cidr" {
+  type = object({
+    ipv4 = list(string)
+    ipv6 = optional(list(string), ["fd9a:0d2f:b7dc::/48"])
+  })
+  description = "Virtual network CIDR ranges"
+  default = {
+    ipv4 = ["10.0.0.0/16"]
+  }
 }
 
 variable "pod_cidr" {
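Because `ipv6` is declared with `optional()`, callers may set only the `ipv4` field and keep the ULA default. A minimal usage sketch (values illustrative):

```tf
module "cluster" {
  # ...

  # optional: override the IPv4 range; ipv6 may be omitted to keep the
  # module's default ULA range ["fd9a:0d2f:b7dc::/48"]
  network_cidr = {
    ipv4 = ["10.0.0.0/20"]
    ipv6 = ["fd9a:0d2f:b7dc::/48"]
  }
}
```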
@@ -3,11 +3,11 @@ module "workers" {
   name = var.cluster_name
 
   # Azure
   resource_group_name     = azurerm_resource_group.cluster.name
   region                  = azurerm_resource_group.cluster.location
   subnet_id               = azurerm_subnet.worker.id
   security_group_id       = azurerm_network_security_group.worker.id
-  backend_address_pool_id = azurerm_lb_backend_address_pool.worker.id
+  backend_address_pool_ids = local.backend_address_pool_ids
 
   worker_count = var.worker_count
   vm_type      = var.worker_type
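The `local.backend_address_pool_ids` object is defined elsewhere in the cluster module. A sketch of its likely shape, assuming one backend pool resource per address family (the `worker-ipv4` and `worker-ipv6` resource names here are hypothetical):

```tf
locals {
  # Collect each family's load balancer backend pool ids into the object
  # shape the workers module's backend_address_pool_ids variable expects.
  backend_address_pool_ids = {
    ipv4 = [azurerm_lb_backend_address_pool.worker-ipv4.id]
    ipv6 = [azurerm_lb_backend_address_pool.worker-ipv6.id]
  }
}
```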
@@ -25,9 +25,12 @@ variable "security_group_id" {
   description = "Must be set to the `worker_security_group_id` output by cluster"
 }
 
-variable "backend_address_pool_id" {
-  type        = string
-  description = "Must be set to the `worker_backend_address_pool_id` output by cluster"
+variable "backend_address_pool_ids" {
+  type = object({
+    ipv4 = list(string)
+    ipv6 = list(string)
+  })
+  description = "Must be set to the `backend_address_pool_ids` output by cluster"
 }
 
 # instances
@@ -9,19 +9,14 @@ locals {
 
 # Workers scale set
 resource "azurerm_linux_virtual_machine_scale_set" "workers" {
+  name                = "${var.name}-worker"
   resource_group_name = var.resource_group_name
+  location            = var.region
+  sku                 = var.vm_type
+  instances           = var.worker_count
-  name                = "${var.name}-worker"
-  location            = var.region
-  sku                 = var.vm_type
-  instances           = var.worker_count
 
   # instance name prefix for instances in the set
   computer_name_prefix   = "${var.name}-worker"
   single_placement_group = false
-  custom_data            = base64encode(data.ct_config.worker.rendered)
-  boot_diagnostics {
-    # defaults to a managed storage account
-  }
 
   # storage
   os_disk {

@@ -46,13 +41,6 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
     }
   }
 
-  # Azure requires setting admin_ssh_key, though Ignition custom_data handles it too
-  admin_username = "core"
-  admin_ssh_key {
-    username   = "core"
-    public_key = local.azure_authorized_key
-  }
 
   # network
   network_interface {
     name = "nic0"

@@ -60,13 +48,33 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
     network_security_group_id = var.security_group_id
 
     ip_configuration {
-      name      = "ip0"
+      name      = "ipv4"
+      version   = "IPv4"
       primary   = true
       subnet_id = var.subnet_id
 
       # backend address pool to which the NIC should be added
-      load_balancer_backend_address_pool_ids = [var.backend_address_pool_id]
+      load_balancer_backend_address_pool_ids = var.backend_address_pool_ids.ipv4
     }
+    ip_configuration {
+      name      = "ipv6"
+      version   = "IPv6"
+      subnet_id = var.subnet_id
+      # backend address pool to which the NIC should be added
+      load_balancer_backend_address_pool_ids = var.backend_address_pool_ids.ipv6
+    }
+  }
+
+  # boot
+  custom_data = base64encode(data.ct_config.worker.rendered)
+  boot_diagnostics {
+    # defaults to a managed storage account
+  }
+
+  # Azure requires an RSA admin_ssh_key
+  admin_username = "core"
+  admin_ssh_key {
+    username   = "core"
+    public_key = local.azure_authorized_key
+  }
 }
 
 # lifecycle
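Note that only the IPv4 `ip_configuration` carries `primary = true`: Azure expects a dual-stack NIC's primary IP configuration to be IPv4, so the IPv6 configuration is attached alongside it as a secondary.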
@@ -81,18 +89,15 @@ resource "azurerm_linux_virtual_machine_scale_set" "workers" {
 
 # Scale up or down to maintain desired number, tolerating deallocations.
 resource "azurerm_monitor_autoscale_setting" "workers" {
+  name                = "${var.name}-maintain-desired"
   resource_group_name = var.resource_group_name
+  location            = var.region
-  name                = "${var.name}-maintain-desired"
-  location            = var.region
 
   # autoscale
   enabled            = true
   target_resource_id = azurerm_linux_virtual_machine_scale_set.workers.id
 
   profile {
     name = "default"
 
     capacity {
       minimum = var.worker_count
       default = var.worker_count
@@ -37,7 +37,7 @@ resource "google_dns_record_set" "some-application" {
 
 ## Azure
 
-On Azure, a load balancer distributes traffic across a backend address pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
+On Azure, an Azure Load Balancer distributes IPv4/IPv6 traffic across backend address pools of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
 
 Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and namespace.
 
@@ -53,10 +53,10 @@ app2.example.com -> 11.22.33.44
 app3.example.com -> 11.22.33.44
 ```
 
-Find the load balancer's IPv4 address with the Azure console or use the Typhoon module's output `ingress_static_ipv4`. For example, you might use Terraform to manage a Google Cloud DNS record:
+Find the load balancer's addresses with the Azure console or use the Typhoon module's outputs `ingress_static_ipv4` or `ingress_static_ipv6`. For example, you might use Terraform to manage a Google Cloud DNS record:
 
 ```tf
-resource "google_dns_record_set" "some-application" {
+resource "google_dns_record_set" "app-record-a" {
   # DNS zone name
   managed_zone = "example-zone"
 
@@ -66,6 +66,17 @@ resource "google_dns_record_set" "some-application" {
   ttl     = 300
   rrdatas = [module.ramius.ingress_static_ipv4]
 }
+
+resource "google_dns_record_set" "app-record-aaaa" {
+  # DNS zone name
+  managed_zone = "example-zone"
+
+  # DNS record
+  name    = "app.example.com."
+  type    = "AAAA"
+  ttl     = 300
+  rrdatas = [module.ramius.ingress_static_ipv6]
+}
 ```
 
 ## Bare-Metal
@@ -114,11 +114,11 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
   source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.30.2"
 
   # Azure
   region                  = module.ramius.region
   resource_group_name     = module.ramius.resource_group_name
   subnet_id               = module.ramius.subnet_id
   security_group_id       = module.ramius.security_group_id
-  backend_address_pool_id = module.ramius.backend_address_pool_id
+  backend_address_pool_ids = module.ramius.backend_address_pool_ids
 
   # configuration
   name = "ramius-spot"

@@ -127,7 +127,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
 
   # optional
   worker_count = 2
-  vm_type      = "Standard_F4"
+  vm_type      = "Standard_D2as_v5"
   priority     = "Spot"
   os_image     = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
 }

@@ -140,11 +140,11 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
   source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.30.2"
 
   # Azure
   region                  = module.ramius.region
   resource_group_name     = module.ramius.resource_group_name
   subnet_id               = module.ramius.subnet_id
   security_group_id       = module.ramius.security_group_id
-  backend_address_pool_id = module.ramius.backend_address_pool_id
+  backend_address_pool_ids = module.ramius.backend_address_pool_ids
 
   # configuration
   name = "ramius-spot"

@@ -153,7 +153,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste
 
   # optional
   worker_count = 2
-  vm_type      = "Standard_F4"
+  vm_type      = "Standard_D2as_v5"
   priority     = "Spot"
   os_image     = "flatcar-beta"
 }

@@ -180,7 +180,7 @@ The Azure internal `workers` module supports a number of [variables](https://git
 | resource_group_name | Must be set to `resource_group_name` output by cluster | module.cluster.resource_group_name |
 | subnet_id | Must be set to `subnet_id` output by cluster | module.cluster.subnet_id |
 | security_group_id | Must be set to `security_group_id` output by cluster | module.cluster.security_group_id |
-| backend_address_pool_id | Must be set to `backend_address_pool_id` output by cluster | module.cluster.backend_address_pool_id |
+| backend_address_pool_ids | Must be set to `backend_address_pool_ids` output by cluster | module.cluster.backend_address_pool_ids |
 | kubeconfig | Must be set to `kubeconfig` output by cluster | module.cluster.kubeconfig |
 | ssh_authorized_key | SSH public key for user 'core' | "ssh-ed25519 AAAAB3NZ..." |
@@ -10,9 +10,9 @@ A load balancer distributes IPv4 TCP/6443 traffic across a backend address pool
 
 ### HTTP/HTTPS Ingress
 
-A load balancer distributes IPv4 TCP/80 and TCP/443 traffic across a backend address pool of workers with a healthy Ingress controller.
+An Azure Load Balancer distributes IPv4/IPv6 TCP/80 and TCP/443 traffic across backend address pools of workers with a healthy Ingress controller.
 
-The Azure LB IPv4 address is output as `ingress_static_ipv4` for use in DNS A records. See [Ingress on Azure](/addons/ingress/#azure).
+The load balancer addresses are output as `ingress_static_ipv4` and `ingress_static_ipv6` for use in DNS A and AAAA records. See [Ingress on Azure](/addons/ingress/#azure).
 
 ### TCP/UDP Services
 
@@ -21,27 +21,25 @@ Load balance TCP/UDP applications by adding rules to the Azure LB (output). A ru
 ```tf
 # Forward traffic to the worker backend address pool
 resource "azurerm_lb_rule" "some-app-tcp" {
-  resource_group_name = module.ramius.resource_group_name
-
   name                           = "some-app-tcp"
+  resource_group_name            = module.ramius.resource_group_name
   loadbalancer_id                = module.ramius.loadbalancer_id
-  frontend_ip_configuration_name = "ingress"
+  frontend_ip_configuration_name = "ingress-ipv4"
 
   protocol                 = "Tcp"
   frontend_port            = 3333
   backend_port             = 30333
-  backend_address_pool_id  = module.ramius.backend_address_pool_id
+  backend_address_pool_ids = module.ramius.backend_address_pool_ids.ipv4
   probe_id                 = azurerm_lb_probe.some-app.id
 }
 
 # Health check some-app
 resource "azurerm_lb_probe" "some-app" {
-  resource_group_name = module.ramius.resource_group_name
-
-  name            = "some-app"
-  loadbalancer_id = module.ramius.loadbalancer_id
-  protocol        = "Tcp"
-  port            = 30333
+  name                = "some-app"
+  resource_group_name = module.ramius.resource_group_name
+  loadbalancer_id     = module.ramius.loadbalancer_id
+  protocol            = "Tcp"
+  port                = 30333
 }
 ```

@@ -51,9 +49,8 @@ Add firewall rules to the worker security group.
 
 ```tf
 resource "azurerm_network_security_rule" "some-app" {
-  resource_group_name = module.ramius.resource_group_name
-
   name                        = "some-app"
+  resource_group_name         = module.ramius.resource_group_name
   network_security_group_name = module.ramius.worker_security_group_name
   priority                    = "3001"
   access                      = "Allow"

@@ -62,7 +59,7 @@ resource "azurerm_network_security_rule" "some-app" {
   source_port_range           = "*"
   destination_port_range      = "30333"
   source_address_prefix       = "*"
-  destination_address_prefixes = module.ramius.worker_address_prefixes
+  destination_address_prefixes = module.ramius.worker_address_prefixes.ipv4
 }
 ```

@@ -72,6 +69,6 @@ Azure does not provide public IPv6 addresses at the standard SKU.
 
 | IPv6 Feature            | Supported |
 |-------------------------|-----------|
-| Node IPv6 address       | No        |
-| Node Outbound IPv6      | No        |
-| Kubernetes Ingress IPv6 | No        |
+| Node IPv6 address       | Yes       |
+| Node Outbound IPv6      | Yes       |
+| Kubernetes Ingress IPv6 | Yes       |
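For reference, hedged IPv6 counterparts of the load balancing examples above. Both are sketches: the `ingress-ipv6` frontend name mirrors the `ingress-ipv4` naming shown earlier but is assumed, and the rule names are illustrative.

```tf
# Hypothetical IPv6 twin of the LB rule above, forwarding to the IPv6 pool.
resource "azurerm_lb_rule" "some-app-tcp-ipv6" {
  name                           = "some-app-tcp-ipv6"
  resource_group_name            = module.ramius.resource_group_name
  loadbalancer_id                = module.ramius.loadbalancer_id
  frontend_ip_configuration_name = "ingress-ipv6"

  protocol                 = "Tcp"
  frontend_port            = 3333
  backend_port             = 30333
  backend_address_pool_ids = module.ramius.backend_address_pool_ids.ipv6
  probe_id                 = azurerm_lb_probe.some-app.id
}
```

A security rule's address prefixes stick to one family (the module's own rules are split per family for this reason), so the IPv6 prefixes get their own rule at a neighboring priority:

```tf
# Hypothetical IPv6 twin of the worker security group rule above.
resource "azurerm_network_security_rule" "some-app-ipv6" {
  name                        = "some-app-ipv6"
  resource_group_name         = module.ramius.resource_group_name
  network_security_group_name = module.ramius.worker_security_group_name
  priority                    = "3002"
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "30333"
  source_address_prefix       = "*"
  destination_address_prefixes = module.ramius.worker_address_prefixes.ipv6
}
```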
@@ -67,15 +67,15 @@ Fedora CoreOS publishes images for Azure, but does not yet upload them. Azure al
 [Download](https://getfedora.org/en/coreos/download?tab=cloud_operators&stream=stable) a Fedora CoreOS Azure VHD image, decompress it, and upload it to an Azure storage account container (i.e. bucket) via the UI (quite slow).
 
 ```
-xz -d fedora-coreos-36.20220716.3.1-azure.x86_64.vhd.xz
+xz -d fedora-coreos-40.20240616.3.0-azure.x86_64.vhd.xz
 ```
 
 Create an Azure disk (note disk ID) and create an Azure image from it (note image ID).
 
 ```
-az disk create --name fedora-coreos-36.20220716.3.1 -g GROUP --source https://BUCKET.blob.core.windows.net/fedora-coreos/fedora-coreos-36.20220716.3.1-azure.x86_64.vhd
+az disk create --name fedora-coreos-40.20240616.3.0 -g GROUP --source https://BUCKET.blob.core.windows.net/images/fedora-coreos-40.20240616.3.0-azure.x86_64.vhd
 
-az image create --name fedora-coreos-36.20220716.3.1 -g GROUP --os-type=linux --source /subscriptions/some/path/providers/Microsoft.Compute/disks/fedora-coreos-36.20220716.3.1
+az image create --name fedora-coreos-40.20240616.3.0 -g GROUP --os-type linux --source /subscriptions/some/path/Microsoft.Compute/disks/fedora-coreos-40.20240616.3.0
 ```
 
 Set the [os_image](#variables) in the next step.

@@ -100,7 +100,9 @@ module "ramius" {
 
   # optional
   worker_count = 2
-  host_cidr    = "10.0.0.0/20"
+  network_cidr = {
+    ipv4 = ["10.0.0.0/20"]
+  }
 }
 ```

@@ -246,7 +248,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
 | controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |
 | worker_snippets | Worker Butane snippets | [] | [example](/advanced/customization/#usage) |
 | networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
-| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
+| network_cidr | Virtual network CIDR ranges | { ipv4 = ["10.0.0.0/16"], ipv6 = [ULA, ...] } | { ipv4 = ["10.0.0.0/20"] } |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
 | worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |

@@ -88,7 +88,9 @@ module "ramius" {
 
   # optional
   worker_count = 2
-  host_cidr    = "10.0.0.0/20"
+  network_cidr = {
+    ipv4 = ["10.0.0.0/20"]
+  }
 }
 ```

@@ -234,7 +236,7 @@ Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource gr
 | controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
 | worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
 | networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
-| host_cidr | CIDR IPv4 range to assign to instances | "10.0.0.0/16" | "10.0.0.0/20" |
+| network_cidr | Virtual network CIDR ranges | { ipv4 = ["10.0.0.0/16"], ipv6 = [ULA, ...] } | { ipv4 = ["10.0.0.0/20"] } |
 | pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
 | worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |