Use global HTTP/TCP proxy load balancing for Ingress on GCP

* Switch Ingress from regional network load balancers to global HTTP/TCP proxy load balancing
* Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
* Worker pools no longer include an extraneous load balancer. Remove the worker module's `ingress_static_ip` output.
* Add `ingress_static_ipv4` output variable
* Add `worker_instance_group` output to allow custom global load balancing
* Deprecate `controllers_ipv4_public` module output
* Deprecate `ingress_static_ip` module output. Use `ingress_static_ipv4` instead.
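
With the global address, a DNS record can point directly at the Ingress IP. A minimal sketch, assuming a cluster module named `google-cloud-yavin` and an existing managed zone (both names are illustrative, not part of this commit):

```hcl
# Hypothetical consumer of the new ingress_static_ipv4 output.
# "google-cloud-yavin" and "example-zone" are placeholder names.
resource "google_dns_record_set" "app" {
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["${module.google-cloud-yavin.ingress_static_ipv4}"]
}
```

Unlike the old regional IP, this record resolves to a single anycast IPv4 address regardless of region.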
Dalton Hubble 2018-05-06 23:21:53 -07:00
parent 2eaf04c68b
commit 0c4d59db87
13 changed files with 322 additions and 134 deletions

View File

@@ -11,12 +11,10 @@ Notable changes between versions.
 * Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
 * Combine apiserver and ingress NLBs ([#249](https://github.com/poseidon/typhoon/pull/249))
-* Simplify clusters to come with one NLB. Reduce cost by ~$18/month per cluster.
+* Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer
 * Users may keep using CNAME records to `ingress_dns_name` and the `nginx-ingress` addon for Ingress (up to a few million RPS)
 * Users with heavy traffic (many million RPS) should create a separate NLB(s) for Ingress instead
-* Listen for apiserver traffic on port 6443 and forward to controllers (with healthy apiserver)
-* Listen for ingress traffic on ports 80/443 and forward to workers (with healthy ingress controller)
-* Worker pools (advanced) no longer include an extraneous load balancer
+* Worker pools no longer include an extraneous load balancer
 * Disable detailed (paid) monitoring on worker nodes ([#251](https://github.com/poseidon/typhoon/pull/251))
 * Favor Prometheus for cloud-agnostic metrics, aggregation, alerting, and visualization
@@ -31,6 +29,18 @@ Notable changes between versions.
 * Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
 * Update firewall rules and generated kubeconfig's
+
+#### Google Cloud
+
+* Use global HTTP and TCP proxy load balancing for Kubernetes Ingress ([#252](https://github.com/poseidon/typhoon/pull/252))
+* Switch Ingress from regional network load balancers to global HTTP/TCP Proxy load balancing
+* Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
+* Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_static_ip` output
+* Allow using nginx-ingress addon on Typhoon for Fedora Atomic ([#200](https://github.com/poseidon/typhoon/issues/200))
+* Add `ingress_static_ipv4` module output
+* Add `worker_instance_group` module output to allow custom global load balancing
+* Deprecate `controllers_ipv4_public` module output
+* Deprecate `ingress_static_ip` module output. Use `ingress_static_ipv4`
+
 #### Addons
 * Update CLUO from v0.6.0 to v0.7.0 ([#242](https://github.com/poseidon/typhoon/pull/242))

View File

@@ -0,0 +1,96 @@
# Static IPv4 address for the TCP Proxy Load Balancer
resource "google_compute_global_address" "ingress-ipv4" {
name = "${var.cluster_name}-ingress-ip"
ip_version = "IPV4"
}
# Forward IPv4 TCP traffic to the HTTP proxy load balancer
# Google Cloud does not allow TCP proxies for port 80. Must use HTTP proxy.
resource "google_compute_global_forwarding_rule" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
ip_address = "${google_compute_global_address.ingress-ipv4.address}"
ip_protocol = "TCP"
port_range = "80"
target = "${google_compute_target_http_proxy.ingress-http.self_link}"
}
# Forward IPv4 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
ip_address = "${google_compute_global_address.ingress-ipv4.address}"
ip_protocol = "TCP"
port_range = "443"
target = "${google_compute_target_tcp_proxy.ingress-https.self_link}"
}
# HTTP proxy load balancer for ingress controllers
resource "google_compute_target_http_proxy" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "Distribute HTTP load across ${var.cluster_name} workers"
url_map = "${google_compute_url_map.ingress-http.self_link}"
}
# TCP proxy load balancer for ingress controllers
resource "google_compute_target_tcp_proxy" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "Distribute HTTPS load across ${var.cluster_name} workers"
backend_service = "${google_compute_backend_service.ingress-https.self_link}"
}
# HTTP URL Map (required)
resource "google_compute_url_map" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
# Do not add host/path rules for applications here. Use Ingress resources.
default_service = "${google_compute_backend_service.ingress-http.self_link}"
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "${var.cluster_name} ingress service"
protocol = "HTTP"
port_name = "http"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = "${module.workers.instance_group}"
}
health_checks = ["${google_compute_health_check.ingress.self_link}"]
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "${var.cluster_name} ingress service"
protocol = "TCP"
port_name = "https"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = "${module.workers.instance_group}"
}
health_checks = ["${google_compute_health_check.ingress.self_link}"]
}
# Ingress HTTP Health Check
resource "google_compute_health_check" "ingress" {
name = "${var.cluster_name}-ingress-health"
description = "Health check for Ingress controller"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 4
http_health_check {
port = 10254
request_path = "/healthz"
}
}

View File

@@ -161,3 +161,17 @@ resource "google_compute_firewall" "internal-kubelet-readonly" {
   source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
   target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
 }
+
+resource "google_compute_firewall" "google-health-checks" {
+  name    = "${var.cluster_name}-google-health-checks"
+  network = "${google_compute_network.network.name}"
+
+  allow {
+    protocol = "tcp"
+    ports    = [10254]
+  }
+
+  # https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy#health-checking
+  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
+  target_tags   = ["${var.cluster_name}-worker"]
+}

View File

@@ -3,12 +3,17 @@ output "controllers_ipv4_public" {
   value = ["${google_compute_instance.controllers.*.network_interface.0.access_config.0.assigned_nat_ip}"]
 }
 
-output "ingress_static_ip" {
-  value = "${module.workers.ingress_static_ip}"
+# Outputs for Kubernetes Ingress
+
+output "ingress_static_ipv4" {
+  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
+  value = "${google_compute_global_address.ingress-ipv4.address}"
 }
 
-output "network_self_link" {
-  value = "${google_compute_network.network.self_link}"
+# Deprecated, use ingress_static_ipv4
+output "ingress_static_ip" {
+  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
+  value = "${google_compute_global_address.ingress-ipv4.address}"
 }
 
 # Outputs for worker pools
@@ -20,3 +25,16 @@ output "network_name" {
 output "kubeconfig" {
   value = "${module.bootkube.kubeconfig}"
 }
+
+# Outputs for custom firewalling
+
+output "network_self_link" {
+  value = "${google_compute_network.network.self_link}"
+}
+
+# Outputs for custom load balancing
+
+output "worker_instance_group" {
+  description = "Full URL of the worker managed instance group"
+  value = "${module.workers.instance_group}"
+}

View File

@@ -1,45 +0,0 @@
# Static IPv4 address for the Network Load Balancer
resource "google_compute_address" "ingress-ip" {
name = "${var.name}-ingress-ip"
}
# Network Load Balancer (i.e. forwarding rules)
resource "google_compute_forwarding_rule" "worker-http-lb" {
name = "${var.name}-worker-http-rule"
ip_address = "${google_compute_address.ingress-ip.address}"
port_range = "80"
target = "${google_compute_target_pool.workers.self_link}"
}
resource "google_compute_forwarding_rule" "worker-https-lb" {
name = "${var.name}-worker-https-rule"
ip_address = "${google_compute_address.ingress-ip.address}"
port_range = "443"
target = "${google_compute_target_pool.workers.self_link}"
}
# Network Load Balancer target pool of instances.
resource "google_compute_target_pool" "workers" {
name = "${var.name}-worker-pool"
health_checks = [
"${google_compute_http_health_check.ingress.name}",
]
session_affinity = "NONE"
}
# Ingress HTTP Health Check
resource "google_compute_http_health_check" "ingress" {
name = "${var.name}-ingress-health"
description = "Health check Ingress controller health host port"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 4
port = 10254
request_path = "/healthz"
}

View File

@@ -1,3 +1,4 @@
-output "ingress_static_ip" {
-  value = "${google_compute_address.ingress-ip.address}"
+output "instance_group" {
+  description = "Full URL of the worker managed instance group"
+  value = "${google_compute_region_instance_group_manager.workers.instance_group}"
 }

View File

@@ -1,5 +1,4 @@
-# Regional managed instance group maintains a homogeneous set of workers that
-# span the zones in the region.
+# Regional managed instance group of workers
 resource "google_compute_region_instance_group_manager" "workers" {
   name        = "${var.name}-worker-group"
   description = "Compute instance group of ${var.name} workers"
@@ -11,30 +10,18 @@ resource "google_compute_region_instance_group_manager" "workers" {
   target_size = "${var.count}"
 
-  # target pool to which instances in the group should be added
-  target_pools = [
-    "${google_compute_target_pool.workers.self_link}",
-  ]
+  named_port {
+    name = "http"
+    port = "80"
+  }
+
+  named_port {
+    name = "https"
+    port = "443"
+  }
 }
 
-# Worker Container Linux Config
-data "template_file" "worker_config" {
-  template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
-
-  vars = {
-    kubeconfig            = "${indent(10, var.kubeconfig)}"
-    ssh_authorized_key    = "${var.ssh_authorized_key}"
-    k8s_dns_service_ip    = "${cidrhost(var.service_cidr, 10)}"
-    cluster_domain_suffix = "${var.cluster_domain_suffix}"
-  }
-}
-
-data "ct_config" "worker_ign" {
-  content      = "${data.template_file.worker_config.rendered}"
-  pretty_print = false
-  snippets     = ["${var.clc_snippets}"]
-}
-
+# Worker instance template
 resource "google_compute_instance_template" "worker" {
   name_prefix = "${var.name}-worker-"
   description = "Worker Instance template"
@@ -76,3 +63,21 @@ resource "google_compute_instance_template" "worker" {
     create_before_destroy = true
   }
 }
+
+# Worker Container Linux Config
+data "template_file" "worker_config" {
+  template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
+
+  vars = {
+    kubeconfig            = "${indent(10, var.kubeconfig)}"
+    ssh_authorized_key    = "${var.ssh_authorized_key}"
+    k8s_dns_service_ip    = "${cidrhost(var.service_cidr, 10)}"
+    cluster_domain_suffix = "${var.cluster_domain_suffix}"
+  }
+}
+
+data "ct_config" "worker_ign" {
+  content      = "${data.template_file.worker_config.rendered}"
+  pretty_print = false
+  snippets     = ["${var.clc_snippets}"]
+}

View File

@@ -0,0 +1,96 @@
# Static IPv4 address for the TCP Proxy Load Balancer
resource "google_compute_global_address" "ingress-ipv4" {
name = "${var.cluster_name}-ingress-ip"
ip_version = "IPV4"
}
# Forward IPv4 TCP traffic to the HTTP proxy load balancer
# Google Cloud does not allow TCP proxies for port 80. Must use HTTP proxy.
resource "google_compute_global_forwarding_rule" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
ip_address = "${google_compute_global_address.ingress-ipv4.address}"
ip_protocol = "TCP"
port_range = "80"
target = "${google_compute_target_http_proxy.ingress-http.self_link}"
}
# Forward IPv4 TCP traffic to the TCP proxy load balancer
resource "google_compute_global_forwarding_rule" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
ip_address = "${google_compute_global_address.ingress-ipv4.address}"
ip_protocol = "TCP"
port_range = "443"
target = "${google_compute_target_tcp_proxy.ingress-https.self_link}"
}
# HTTP proxy load balancer for ingress controllers
resource "google_compute_target_http_proxy" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "Distribute HTTP load across ${var.cluster_name} workers"
url_map = "${google_compute_url_map.ingress-http.self_link}"
}
# TCP proxy load balancer for ingress controllers
resource "google_compute_target_tcp_proxy" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "Distribute HTTPS load across ${var.cluster_name} workers"
backend_service = "${google_compute_backend_service.ingress-https.self_link}"
}
# HTTP URL Map (required)
resource "google_compute_url_map" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
# Do not add host/path rules for applications here. Use Ingress resources.
default_service = "${google_compute_backend_service.ingress-http.self_link}"
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-http" {
name = "${var.cluster_name}-ingress-http"
description = "${var.cluster_name} ingress service"
protocol = "HTTP"
port_name = "http"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = "${module.workers.instance_group}"
}
health_checks = ["${google_compute_health_check.ingress.self_link}"]
}
# Backend service backed by managed instance group of workers
resource "google_compute_backend_service" "ingress-https" {
name = "${var.cluster_name}-ingress-https"
description = "${var.cluster_name} ingress service"
protocol = "TCP"
port_name = "https"
session_affinity = "NONE"
timeout_sec = "60"
backend {
group = "${module.workers.instance_group}"
}
health_checks = ["${google_compute_health_check.ingress.self_link}"]
}
# Ingress HTTP Health Check
resource "google_compute_health_check" "ingress" {
name = "${var.cluster_name}-ingress-health"
description = "Health check for Ingress controller"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 4
http_health_check {
port = 10254
request_path = "/healthz"
}
}

View File

@@ -161,3 +161,17 @@ resource "google_compute_firewall" "internal-kubelet-readonly" {
   source_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
   target_tags = ["${var.cluster_name}-controller", "${var.cluster_name}-worker"]
 }
+
+resource "google_compute_firewall" "google-health-checks" {
+  name    = "${var.cluster_name}-google-health-checks"
+  network = "${google_compute_network.network.name}"
+
+  allow {
+    protocol = "tcp"
+    ports    = [10254]
+  }
+
+  # https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy#health-checking
+  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
+  target_tags   = ["${var.cluster_name}-worker"]
+}

View File

@@ -1,9 +1,14 @@
-output "ingress_static_ip" {
-  value = "${module.workers.ingress_static_ip}"
+# Outputs for Kubernetes Ingress
+
+output "ingress_static_ipv4" {
+  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
+  value = "${google_compute_global_address.ingress-ipv4.address}"
 }
 
-output "network_self_link" {
-  value = "${google_compute_network.network.self_link}"
+# Deprecated, use ingress_static_ipv4
+output "ingress_static_ip" {
+  description = "Global IPv4 address for proxy load balancing to the nearest Ingress controller"
+  value = "${google_compute_global_address.ingress-ipv4.address}"
 }
 
 # Outputs for worker pools
@@ -15,3 +20,16 @@ output "network_name" {
 output "kubeconfig" {
   value = "${module.bootkube.kubeconfig}"
 }
+
+# Outputs for custom firewalling
+
+output "network_self_link" {
+  value = "${google_compute_network.network.self_link}"
+}
+
+# Outputs for custom load balancing
+
+output "worker_instance_group" {
+  description = "Full URL of the worker managed instance group"
+  value = "${module.workers.instance_group}"
+}

View File

@@ -1,45 +0,0 @@
# Static IPv4 address for the Network Load Balancer
resource "google_compute_address" "ingress-ip" {
name = "${var.name}-ingress-ip"
}
# Network Load Balancer (i.e. forwarding rules)
resource "google_compute_forwarding_rule" "worker-http-lb" {
name = "${var.name}-worker-http-rule"
ip_address = "${google_compute_address.ingress-ip.address}"
port_range = "80"
target = "${google_compute_target_pool.workers.self_link}"
}
resource "google_compute_forwarding_rule" "worker-https-lb" {
name = "${var.name}-worker-https-rule"
ip_address = "${google_compute_address.ingress-ip.address}"
port_range = "443"
target = "${google_compute_target_pool.workers.self_link}"
}
# Network Load Balancer target pool of instances.
resource "google_compute_target_pool" "workers" {
name = "${var.name}-worker-pool"
health_checks = [
"${google_compute_http_health_check.ingress.name}",
]
session_affinity = "NONE"
}
# Ingress HTTP Health Check
resource "google_compute_http_health_check" "ingress" {
name = "${var.name}-ingress-health"
description = "Health check Ingress controller health host port"
timeout_sec = 5
check_interval_sec = 5
healthy_threshold = 2
unhealthy_threshold = 4
port = 10254
request_path = "/healthz"
}

View File

@@ -1,3 +1,4 @@
-output "ingress_static_ip" {
-  value = "${google_compute_address.ingress-ip.address}"
+output "instance_group" {
+  description = "Full URL of the worker managed instance group"
+  value = "${google_compute_region_instance_group_manager.workers.instance_group}"
 }

View File

@@ -1,5 +1,4 @@
-# Regional managed instance group maintains a homogeneous set of workers that
-# span the zones in the region.
+# Regional managed instance group of workers
 resource "google_compute_region_instance_group_manager" "workers" {
   name        = "${var.name}-worker-group"
   description = "Compute instance group of ${var.name} workers"
@@ -11,12 +10,18 @@ resource "google_compute_region_instance_group_manager" "workers" {
   target_size = "${var.count}"
 
-  # target pool to which instances in the group should be added
-  target_pools = [
-    "${google_compute_target_pool.workers.self_link}",
-  ]
+  named_port {
+    name = "http"
+    port = "80"
+  }
+
+  named_port {
+    name = "https"
+    port = "443"
+  }
 }
 
+# Worker instance template
 resource "google_compute_instance_template" "worker" {
   name_prefix = "${var.name}-worker-"
   description = "Worker Instance template"