Improve apiserver backend service zone spanning

* google_compute_backend_service uses nested blocks to define backends (the zonal instance groups containing controllers)
* Use Terraform v0.12.x dynamic blocks so the apiserver backend service can refer to (up to zone-many) controller instance groups
* Previously, with Terraform v0.11.x, the apiserver backend service had to list a fixed set of backends to span controller nodes across zones in multi-controller setups. 3 backends were used because each GCP region offered at least 3 zones. Single-controller clusters had the cosmetic ugliness of unused instance groups
* Allow controllers to span more than 3 zones if available in a region (e.g. currently only us-central1, with 4 zones)

Related:

* https://www.terraform.io/docs/providers/google/r/compute_backend_service.html
* https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks
This commit is contained in:
parent 8d373b5850
commit 3fcb04f68c
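The commit relies on the Terraform v0.12 `dynamic` block construct, which generates a nested configuration block per element of a collection. A minimal sketch of the syntax, using a hypothetical resource type and attribute names:

```hcl
variable "ports" {
  type    = list(number)
  default = [80, 443]
}

resource "example_service" "demo" {
  # "dynamic" generates one "listener" block per element of var.ports.
  # Inside "content", the iterator (named after the block, "listener")
  # exposes .key (the index) and .value (the current element).
  dynamic "listener" {
    for_each = var.ports
    content {
      port = listener.value
    }
  }
}
```

This expands to the same configuration as writing two literal `listener { port = ... }` blocks, but the count follows the collection's length rather than being fixed in source.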
@@ -6,6 +6,11 @@ Notable changes between versions.
 
 * Update Calico from v3.7.3 to [v3.7.4](https://docs.projectcalico.org/v3.7/release-notes/)
 
+#### Google Cloud
+
+* Allow controller nodes to span more than 3 zones if available in a region ([#504](https://github.com/poseidon/typhoon/pull/504))
+* Eliminate extraneous controller instance groups in single-controller clusters ([#504](https://github.com/poseidon/typhoon/pull/504))
+
 #### Addons
 
 * Update Grafana from v6.2.4 to v6.2.5
@@ -45,16 +45,11 @@ resource "google_compute_backend_service" "apiserver" {
   timeout_sec = "300"
 
   # controller(s) spread across zonal instance groups
-  backend {
-    group = google_compute_instance_group.controllers[0].self_link
-  }
-
-  backend {
-    group = google_compute_instance_group.controllers[1].self_link
-  }
-
-  backend {
-    group = google_compute_instance_group.controllers[2].self_link
+  dynamic "backend" {
+    for_each = google_compute_instance_group.controllers
+    content {
+      group = backend.value.self_link
+    }
   }
 
   health_checks = [google_compute_health_check.apiserver.self_link]
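This works because in Terraform v0.12 a resource declared with `count` can be referenced as a whole list of objects, making it directly usable as a `for_each` collection. A sketch (not part of the commit) of referencing the counted instance groups as a list:

```hcl
# With count set on google_compute_instance_group.controllers, the bare
# resource reference is a list: one object per zonal instance group.
output "controller_group_links" {
  value = google_compute_instance_group.controllers.*.self_link
}

# The list's length tracks the number of zones, so no index is hard-coded.
output "controller_group_count" {
  value = length(google_compute_instance_group.controllers)
}
```

The `dynamic "backend"` block therefore emits exactly one `backend` per instance group: one in single-controller clusters, four in a 4-zone region, with no unused groups either way.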
@@ -64,11 +59,7 @@ resource "google_compute_backend_service" "apiserver" {
 resource "google_compute_instance_group" "controllers" {
   count = length(local.zones)
 
-  name = format(
-    "%s-controllers-%s",
-    var.cluster_name,
-    element(local.zones, count.index),
-  )
+  name = format("%s-controllers-%s", var.cluster_name, element(local.zones, count.index))
 
   zone = element(local.zones, count.index)
 
   named_port {
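The collapsed `format()` call builds a per-zone group name from the cluster name and zone. For illustration, with hypothetical values:

```hcl
locals {
  # format("%s-controllers-%s", cluster_name, zone) substitutes each %s
  # in order, e.g. "example-controllers-us-central1-a" for the values below.
  example_group_name = format("%s-controllers-%s", "example", "us-central1-a")
}
```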
@@ -20,9 +20,7 @@ data "google_compute_zones" "all" {
 }
 
 locals {
-  # TCP proxy load balancers require a fixed number of zonal backends. Spread
-  # controllers over up to 3 zones, since all GCP regions have at least 3.
-  zones = slice(data.google_compute_zones.all.names, 0, 3)
+  zones = data.google_compute_zones.all.names
 
   controllers_ipv4_public = google_compute_instance.controllers.*.network_interface.0.access_config.0.nat_ip
}
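The removed `slice(list, start, end)` call (end index exclusive) capped `local.zones` at the first three zone names; the new assignment keeps every zone the region offers. A sketch with hypothetical zone names:

```hcl
locals {
  all_zones = ["us-central1-a", "us-central1-b", "us-central1-c", "us-central1-f"]

  # old behavior: at most 3 zones, e.g. a, b, c above
  capped_zones = slice(local.all_zones, 0, 3)

  # new behavior: controllers may span all 4 zones
  zones = local.all_zones
}
```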