Clarify AWS module output names and changes

Dalton Hubble 2018-06-23 15:15:57 -07:00
parent 0c4d59db87
commit 855aec5af3
9 changed files with 50 additions and 38 deletions


@@ -4,6 +4,8 @@ Notable changes between versions.
## Latest
+## v1.10.5
* Kubernetes [v1.10.5](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1105)
* Update etcd from v3.3.6 to v3.3.8 ([#243](https://github.com/poseidon/typhoon/pull/243), [#247](https://github.com/poseidon/typhoon/pull/247))
@@ -11,12 +13,14 @@ Notable changes between versions.
* Switch `kube-apiserver` port from 443 to 6443 ([#248](https://github.com/poseidon/typhoon/pull/248))
* Combine apiserver and ingress NLBs ([#249](https://github.com/poseidon/typhoon/pull/249))
-  * Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer
-  * Users may keep using CNAME records to `ingress_dns_name` and the `nginx-ingress` addon for Ingress (up to a few million RPS)
-  * Users with heavy traffic (many million RPS) should create a separate NLB(s) for Ingress instead
-  * Worker pools no longer include an extraneous load balancer
+  * Reduce cost by ~$18/month per cluster. Typhoon AWS clusters now use one network load balancer.
+  * Ingress addon users may keep using CNAME records to the `ingress_dns_name` module output (few million RPS)
+  * Ingress users with heavy traffic (many million RPS) should create a separate NLB(s)
+  * Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_dns_name` output
* Disable detailed (paid) monitoring on worker nodes ([#251](https://github.com/poseidon/typhoon/pull/251))
-  * Favor Prometheus for cloud-agnostic metrics, aggregation, alerting, and visualization
+  * Favor Prometheus for cloud-agnostic metrics, aggregation, and alerting
+* Add `worker_target_group_http` and `worker_target_group_https` module outputs to allow custom load balancing
+* Add `target_group_http` and `target_group_https` worker module outputs to allow custom load balancing
#### Bare-Metal
@@ -35,11 +39,11 @@ Notable changes between versions.
* Switch Ingress from regional network load balancers to global HTTP/TCP Proxy load balancing
  * Reduce cost by ~$19/month per cluster. Google bills the first 5 global and regional forwarding rules separately. Typhoon clusters now use 3 global and 0 regional forwarding rules.
  * Worker pools no longer include an extraneous load balancer. Remove worker module's `ingress_static_ip` output
-* Allow using nginx-ingress addon on Typhoon for Fedora Atomic ([#200](https://github.com/poseidon/typhoon/issues/200))
-* Add `ingress_static_ipv4` module output
+* Allow using nginx-ingress addon on Fedora Atomic clusters ([#200](https://github.com/poseidon/typhoon/issues/200))
* Add `worker_instance_group` module output to allow custom global load balancing
+* Add `instance_group` worker module output to allow custom global load balancing
+* Deprecate `ingress_static_ip` module output. Add `ingress_static_ipv4` module output instead.
* Deprecate `controllers_ipv4_public` module output
-* Deprecate `ingress_static_ip` module output. Use `ingress_static_ipv4`

#### Addons
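
As context for the AWS entries above: the retained `ingress_dns_name` output is consumed with a DNS record. A minimal sketch, assuming a Route53 zone managed in the same Terraform configuration and a cluster module named "tempest" (both names are hypothetical):

# Sketch: CNAME an app's domain to the cluster's combined NLB (illustrative only)
resource "aws_route53_record" "app" {
  zone_id = "${aws_route53_zone.zone.zone_id}"  # assumed pre-existing zone resource
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["${module.tempest.ingress_dns_name}"]
}

Heavy-traffic users who outgrow the shared NLB would instead manage their own `aws_lb` and point records at it, per the note above.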


@@ -44,7 +44,7 @@ resource "aws_lb_listener" "ingress-http" {
  default_action {
    type = "forward"
-    target_group_arn = "${module.workers.target_group_http_arn}"
+    target_group_arn = "${module.workers.target_group_http}"
  }
}

@@ -56,7 +56,7 @@ resource "aws_lb_listener" "ingress-https" {
  default_action {
    type = "forward"
-    target_group_arn = "${module.workers.target_group_https_arn}"
+    target_group_arn = "${module.workers.target_group_https}"
  }
}


@@ -1,18 +1,10 @@
# Outputs for Kubernetes Ingress

output "ingress_dns_name" {
  value = "${aws_lb.nlb.dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}

-output "target_group_http_arn" {
-  description = "ARN of a target group of workers for HTTP traffic"
-  value = "${module.workers.target_group_http_arn}"
-}
-output "target_group_https_arn" {
-  description = "ARN of a target group of workers for HTTPS traffic"
-  value = "${module.workers.target_group_https_arn}"
-}

# Outputs for worker pools

output "vpc_id" {

@@ -33,3 +25,15 @@ output "worker_security_groups" {
output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}

+# Outputs for custom load balancing
+
+output "worker_target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value = "${module.workers.target_group_http}"
+}
+
+output "worker_target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value = "${module.workers.target_group_https}"
+}
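
The new `worker_target_group_http` and `worker_target_group_https` outputs above exist so users can attach their own listeners. A sketch, assuming a separately managed NLB named `aws_lb.custom` and a cluster module named "tempest" (hypothetical names):

# Sketch: forward a custom port to the cluster's worker target group
resource "aws_lb_listener" "custom-http" {
  load_balancer_arn = "${aws_lb.custom.arn}"
  protocol          = "TCP"
  port              = 8080

  default_action {
    type             = "forward"
    target_group_arn = "${module.tempest.worker_target_group_http}"
  }
}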


@@ -1,9 +1,9 @@
-output "target_group_http_arn" {
+output "target_group_http" {
  description = "ARN of a target group of workers for HTTP traffic"
  value = "${aws_lb_target_group.workers-http.arn}"
}

-output "target_group_https_arn" {
+output "target_group_https" {
  description = "ARN of a target group of workers for HTTPS traffic"
  value = "${aws_lb_target_group.workers-https.arn}"
}
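
Worker pools instantiate this workers module directly, so the renamed outputs are referenced through the pool's own module name. A brief sketch, assuming a worker pool module named "tempest-pool" and an existing NLB (hypothetical names):

# Sketch: route HTTPS traffic on an existing NLB to a worker pool's target group
resource "aws_lb_listener" "pool-https" {
  load_balancer_arn = "${aws_lb.nlb.arn}"
  protocol          = "TCP"
  port              = 443

  default_action {
    type             = "forward"
    target_group_arn = "${module.tempest-pool.target_group_https}"
  }
}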


@@ -44,7 +44,7 @@ resource "aws_lb_listener" "ingress-http" {
  default_action {
    type = "forward"
-    target_group_arn = "${module.workers.target_group_http_arn}"
+    target_group_arn = "${module.workers.target_group_http}"
  }
}

@@ -56,7 +56,7 @@ resource "aws_lb_listener" "ingress-https" {
  default_action {
    type = "forward"
-    target_group_arn = "${module.workers.target_group_https_arn}"
+    target_group_arn = "${module.workers.target_group_https}"
  }
}


@@ -1,18 +1,10 @@
# Outputs for Kubernetes Ingress

output "ingress_dns_name" {
  value = "${aws_lb.nlb.dns_name}"
  description = "DNS name of the network load balancer for distributing traffic to Ingress controllers"
}

-output "target_group_http_arn" {
-  description = "ARN of a target group of workers for HTTP traffic"
-  value = "${module.workers.target_group_http_arn}"
-}
-output "target_group_https_arn" {
-  description = "ARN of a target group of workers for HTTPS traffic"
-  value = "${module.workers.target_group_https_arn}"
-}

# Outputs for worker pools

output "vpc_id" {

@@ -33,3 +25,15 @@ output "worker_security_groups" {
output "kubeconfig" {
  value = "${module.bootkube.kubeconfig}"
}

+# Outputs for custom load balancing
+
+output "worker_target_group_http" {
+  description = "ARN of a target group of workers for HTTP traffic"
+  value = "${module.workers.target_group_http}"
+}
+
+output "worker_target_group_https" {
+  description = "ARN of a target group of workers for HTTPS traffic"
+  value = "${module.workers.target_group_https}"
+}


@@ -1,9 +1,9 @@
-output "target_group_http_arn" {
+output "target_group_http" {
  description = "ARN of a target group of workers for HTTP traffic"
  value = "${aws_lb_target_group.workers-http.arn}"
}

-output "target_group_https_arn" {
+output "target_group_https" {
  description = "ARN of a target group of workers for HTTPS traffic"
  value = "${aws_lb_target_group.workers-https.arn}"
}


@@ -5,7 +5,7 @@
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Fedora Atomic.

-We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
+We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init.

Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
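
For orientation, declaring such a cluster is a single module block. A condensed sketch (the tutorial's full example defines more variables; the values here are placeholders):

module "aws-tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=<release>"

  cluster_name       = "tempest"
  dns_zone           = "aws.example.com"
  dns_zone_id        = "Z3PAABBCFAKE0V"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/tempest"
}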


@@ -2,7 +2,7 @@
In this tutorial, we'll create a Kubernetes v1.10.5 cluster on AWS with Container Linux.

-We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets.
+We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
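
The generated `kubeconfig` mentioned here is also a module output (see outputs.tf above), so Terraform itself can write it to disk. A sketch, assuming the `local` provider and a cluster module named "aws-tempest" (hypothetical names):

# Sketch: materialize the generated kubeconfig for kubectl
resource "local_file" "kubeconfig" {
  content  = "${module.aws-tempest.kubeconfig}"
  filename = "/home/user/.kube/config"
}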