Add support for worker pools on google-cloud
* Set defaults for internal worker module's count, machine_type, and os_image
* Allow "pools" of homogeneous workers to be created using the google-cloud/kubernetes/workers module
parent 06d40c5b44
commit 160ae34e71
```diff
@@ -24,6 +24,7 @@ Notable changes between versions.

 #### Google Cloud

+* Add support for "worker pools" - groups of homogeneous workers joined to an existing cluster ([#148](https://github.com/poseidon/typhoon/pull/148))
 * Add kubelet `--volume-plugin-dir` flag to allow flexvolume plugins ([#142](https://github.com/poseidon/typhoon/pull/142))
 * Add `kubeconfig` variable to `controllers` and `workers` submodules ([#147](https://github.com/poseidon/typhoon/pull/147))
 * Remove `kubeconfig_*` variables from `controllers` and `workers` submodules ([#147](https://github.com/poseidon/typhoon/pull/147))
```
@@ -0,0 +1,6 @@
# Advanced

Typhoon clusters offer several advanced features for skilled users.

* [Customization](customization.md)
* [Worker Pools](worker-pools.md)
@@ -0,0 +1,71 @@
# Worker Pools

Typhoon can create "worker pools", groups of homogeneous workers that are part of an existing cluster. For example, you may wish to augment a Kubernetes cluster with groups of workers with a different machine type, larger disks, or preemptibility.

## Google Cloud

Create a cluster following the Google Cloud [tutorial](../google-cloud.md#cluster). Then define a worker pool using the internal `workers` Terraform module.

```tf
module "yavin-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.9.4"

  # Google Cloud
  network      = "${module.google-cloud-yavin.network_name}"
  region       = "us-central1"
  count        = 2
  machine_type = "n1-standard-16"
  preemptible  = true

  cluster_name       = "yavin-16x"
  ssh_authorized_key = "${var.ssh_authorized_key}"

  kubeconfig = "${module.google-cloud-yavin.kubeconfig}"
}
```
Apply the change.

```
terraform apply
```

Verify a managed instance group of workers joins the cluster within a few minutes.

```
$ kubectl get nodes
NAME                                           STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal      Ready    6m    v1.9.3
yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.9.3
yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.9.3
yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.9.3
yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.9.3
```
### Variables

The Google Cloud internal `workers` module supports a number of [variables](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/workers/variables.tf).

#### Required

| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique name | "yavin-worker-pool" |
| region | Must match region of cluster | "us-central1" |
| network | Must match network name output by cluster | "${module.cluster.network_name}" |
| ssh_authorized_key | SSH public key for ~/.ssh/authorized_keys | "ssh-rsa AAAAB3NZ..." |

#### Optional

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| count | Number of workers | 1 | 3 |
| machine_type | Machine type for compute instances | "n1-standard-1" | See below |
| os_image | OS image for compute instances | "coreos-stable" | "coreos-alpha" |
| disk_size | Size of the disk in GB | 40 | 100 |
| preemptible | If enabled, Compute Engine will terminate instances randomly within 24 hours | false | true |
| service_cidr | Must match service_cidr of cluster | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | Must match domain suffix of cluster | "cluster.local" | "k8s.example.com" |

Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
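To make the optional variables concrete, here is a sketch of a pool that overrides several of them; the module source and cluster module name (`google-cloud-yavin`) follow the example earlier on this page, and the values are illustrative, not recommendations.

```tf
module "yavin-preemptible-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.9.4"

  # Google Cloud
  network = "${module.google-cloud-yavin.network_name}"
  region  = "us-central1"

  # optional overrides; defaults are listed in the table above
  count       = 3
  os_image    = "coreos-alpha"
  disk_size   = 100
  preemptible = true

  cluster_name       = "yavin-preemptible"
  ssh_authorized_key = "${var.ssh_authorized_key}"
  kubeconfig         = "${module.google-cloud-yavin.kubeconfig}"
}
```

Variables not set (e.g. `machine_type`, `service_cidr`) fall back to the defaults in the table.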
```diff
@@ -257,7 +257,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
 | machine_type | Machine type for compute instances | "n1-standard-1" | See below |
 | controller_count | Number of controllers (i.e. masters) | 1 | 1 |
 | worker_count | Number of workers | 1 | 3 |
-| worker_preemptible | If enabled, Compute Engine will terminate controllers randomly within 24 hours | false | true |
+| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
 | networking | Choice of networking provider | "calico" | "calico" or "flannel" |
 | pod_cidr | CIDR range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
 | service_cidr | CIDR range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
```
```diff
@@ -1,7 +1,6 @@
 module "controllers" {
   source = "controllers"
   cluster_name = "${var.cluster_name}"
-  ssh_authorized_key = "${var.ssh_authorized_key}"

   # GCE
   network = "${google_compute_network.network.name}"
@@ -14,15 +13,15 @@ module "controllers" {

   # configuration
   networking = "${var.networking}"
+  kubeconfig = "${module.bootkube.kubeconfig}"
+  ssh_authorized_key = "${var.ssh_authorized_key}"
   service_cidr = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
-  kubeconfig = "${module.bootkube.kubeconfig}"
 }

 module "workers" {
   source = "workers"
   cluster_name = "${var.cluster_name}"
-  ssh_authorized_key = "${var.ssh_authorized_key}"

   # GCE
   network = "${google_compute_network.network.name}"
@@ -33,7 +32,8 @@ module "workers" {
   preemptible = "${var.worker_preemptible}"

   # configuration
+  kubeconfig = "${module.bootkube.kubeconfig}"
+  ssh_authorized_key = "${var.ssh_authorized_key}"
   service_cidr = "${var.service_cidr}"
   cluster_domain_suffix = "${var.cluster_domain_suffix}"
-  kubeconfig = "${module.bootkube.kubeconfig}"
 }
```
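The effect of this change is that `kubeconfig` and `ssh_authorized_key` flow into the `controllers` and `workers` submodules as ordinary Terraform input variables, which is what lets an external worker pool pass in a cluster's kubeconfig. As a sketch of what the matching declaration in the submodule would look like (this declaration is assumed, not shown in this diff):

```tf
# assumed shape of the submodule's input variable (not part of this diff)
variable "kubeconfig" {
  type        = "string"
  description = "Generated Kubernetes kubeconfig used to join workers to the cluster"
}
```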
```diff
@@ -13,3 +13,7 @@ output "network_name" {
 output "network_self_link" {
   value = "${google_compute_network.network.self_link}"
 }
+
+output "kubeconfig" {
+  value = "${module.bootkube.kubeconfig}"
+}
```
```diff
@@ -17,6 +17,7 @@ variable "network" {
 variable "count" {
   type = "string"
+  default = "1"
   description = "Number of worker compute instances the instance group should manage"
 }
@@ -27,11 +28,13 @@ variable "region" {
 variable "machine_type" {
   type = "string"
+  default = "n1-standard-1"
   description = "Machine type for compute instances (e.g. gcloud compute machine-types list)"
 }

 variable "os_image" {
   type = "string"
+  default = "coreos-stable"
   description = "OS image from which to initialize the disk (e.g. gcloud compute images list)"
 }
```
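With these defaults in place, a worker pool definition can omit `count`, `machine_type`, and `os_image` entirely. A minimal sketch, assuming a cluster module named `module.cluster` that exposes the `network_name` and `kubeconfig` outputs:

```tf
module "worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes/workers?ref=v1.9.4"

  # required variables only; count, machine_type, and os_image
  # now fall back to "1", "n1-standard-1", and "coreos-stable"
  cluster_name       = "yavin-pool"
  region             = "us-central1"
  network            = "${module.cluster.network_name}"
  ssh_authorized_key = "${var.ssh_authorized_key}"
  kubeconfig         = "${module.cluster.kubeconfig}"
}
```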
```diff
@@ -58,4 +58,6 @@ pages:
     - 'Performance': 'topics/performance.md'
 - 'FAQ': 'faq.md'
 - 'Advanced':
+  - 'Overview': 'advanced/overview.md'
   - 'Customization': 'advanced/customization.md'
+  - 'Worker Pools': 'advanced/worker-pools.md'
```