Organize docs by operating system

Dalton Hubble 2018-04-23 19:55:28 -07:00
parent 7198b9016c
commit af54efec28
14 changed files with 1266 additions and 46 deletions

View File

@ -24,21 +24,21 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | stable |
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | stable |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | alpha |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | very alpha |
## Usage
## Documentation
* [Docs](https://typhoon.psdn.io)
* [Concepts](https://typhoon.psdn.io/concepts/)
* Tutorials
* [AWS](https://typhoon.psdn.io/aws/)
* [Bare-Metal](https://typhoon.psdn.io/bare-metal/)
* [Digital Ocean](https://typhoon.psdn.io/digital-ocean/)
* [Google-Cloud](https://typhoon.psdn.io/google-cloud/)
* Tutorials for [AWS](https://typhoon.psdn.io/cl/aws/), [Bare-Metal](https://typhoon.psdn.io/cl/bare-metal/), [Digital Ocean](https://typhoon.psdn.io/cl/digital-ocean/), and [Google-Cloud](https://typhoon.psdn.io/cl/google-cloud/)
## Example
## Usage
Define a Kubernetes cluster by using the Terraform module for your chosen platform and operating system. Here's a minimal example:
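As an illustration (placeholder names and values, mirroring the Google Cloud tutorial configuration shown later in this diff):

```tf
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"

  providers = {
    google   = "google.default"
    local    = "local.default"
    null     = "null.default"
    template = "template.default"
    tls      = "tls.default"
  }

  # Google Cloud
  cluster_name  = "yavin"
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/yavin"

  # optional
  worker_count = 2
}
```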

View File

@ -9,7 +9,7 @@ Internal Terraform Modules:
## AWS
Create a cluster following the AWS [tutorial](../aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
Create a cluster following the AWS [tutorial](../cl/aws.md#cluster). Define a worker pool using the AWS internal `workers` module.
```tf
module "tempest-worker-pool" {
@ -73,7 +73,7 @@ Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-typ
## Google Cloud
Create a cluster following the Google Cloud [tutorial](../google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
Create a cluster following the Google Cloud [tutorial](../cl/google-cloud.md#cluster). Define a worker pool using the Google Cloud internal `workers` module.
```tf
module "yavin-worker-pool" {

View File

@ -37,7 +37,7 @@ providers {
}
```
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
@ -209,7 +209,7 @@ kube-system pod-checkpointer-4kxtl-ip-10-0-12-221 1/1 Running 0
## Going Further
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.

View File

@ -129,7 +129,7 @@ providers {
}
```
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
@ -350,7 +350,7 @@ kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0
## Going Further
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.

View File

@ -37,7 +37,7 @@ providers {
}
```
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
@ -203,7 +203,7 @@ kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0
## Going Further
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.

View File

@ -37,7 +37,7 @@ providers {
}
```
Read [concepts](concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
@ -211,7 +211,7 @@ kube-system pod-checkpointer-l6lrt 1/1 Running 0
## Going Further
Learn about [maintenance](topics/maintenance.md) and [addons](addons/overview.md).
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.

270
docs/cl/aws.md Normal file
View File

@ -0,0 +1,270 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS.
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a VPC, gateway, subnets, auto-scaling groups of controllers and workers, network load balancers for controllers and workers, and security groups will be created.
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
* AWS Account and IAM credentials
* AWS Route53 DNS Zone (registered Domain Name or delegated subdomain)
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
```sh
$ terraform version
Terraform v0.11.1
```
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```
Add the plugin to your `~/.terraformrc`.
```
providers {
ct = "/usr/local/bin/terraform-provider-ct"
}
```
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
```
## Provider
Log in to your AWS IAM dashboard and find your IAM user. Select "Security Credentials" and create an access key. Save the access key ID and secret to a file that can be referenced in configs.
```
[default]
aws_access_key_id = xxx
aws_secret_access_key = yyy
```
Configure the AWS provider to use your access key credentials in a `providers.tf` file.
```tf
provider "aws" {
version = "~> 1.11.0"
alias = "default"
region = "eu-central-1"
shared_credentials_file = "/home/user/.config/aws/credentials"
}
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
Additional configuration options are described in the `aws` provider [docs](https://www.terraform.io/docs/providers/aws/).
!!! tip
Regions are listed in [docs](http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region) or with `aws ec2 describe-regions`.
## Cluster
Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
```tf
module "aws-tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.1"
providers = {
aws = "aws.default"
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
# AWS
cluster_name = "tempest"
dns_zone = "aws.example.com"
dns_zone_id = "Z3PAABBCFAKEC0"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/tempest"
# optional
worker_count = 2
worker_type = "t2.medium"
}
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/container-linux/kubernetes/variables.tf) source.
## ssh-agent
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
!!! warning
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
## Apply
Initialize the config directory if this is the first use with Terraform.
```sh
terraform init
```
Get or update Terraform modules.
```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```
Plan the resources to be created.
```sh
$ terraform plan
Plan: 98 to add, 0 to change, 0 to destroy.
```
Apply the changes to create the cluster.
```sh
$ terraform apply
...
module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapsed)
module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed)
module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143)
Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
```
In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.10.1
ip-10-0-19-112 Ready 34m v1.10.1
ip-10-0-4-22 Ready 34m v1.10.1
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1m5bf 2/2 Running 0 34m
kube-system calico-node-7jmr1 2/2 Running 0 34m
kube-system calico-node-bknc8 2/2 Running 0 34m
kube-system kube-apiserver-4mjbk 1/1 Running 0 34m
kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m
kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m
kube-system kube-dns-1187388186-wx1lg 3/3 Running 0 34m
kube-system kube-proxy-14wxv 1/1 Running 0 34m
kube-system kube-proxy-9vxh2 1/1 Running 0 34m
kube-system kube-proxy-sbbsh 1/1 Running 0 34m
kube-system kube-scheduler-3359497473-5plhf 1/1 Running 0 34m
kube-system kube-scheduler-3359497473-r7zg7 1/1 Running 1 34m
kube-system pod-checkpointer-4kxtl 1/1 Running 0 34m
kube-system pod-checkpointer-4kxtl-ip-10-0-12-221 1/1 Running 0 33m
```
## Going Further
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
## Variables
### Required
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" |
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
| ssh_authorized_key | SSH public key for ~/.ssh/authorized_keys | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
#### DNS Zone
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
You'll need a registered domain name or a delegated subdomain in an AWS Route53 DNS zone. You can set this up once and create many clusters with unique names.
```tf
resource "aws_route53_zone" "zone-for-clusters" {
name = "aws.example.com."
}
```
Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`.
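As a sketch (a fragment to merge into the cluster definition from the Cluster section, with placeholder values):

```tf
module "aws-tempest" {
  # ... arguments from the Cluster section above ...

  dns_zone    = "aws.example.com"
  dns_zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}"
}
```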
!!! tip ""
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | EC2 instance type for controllers | "t2.small" | See below |
| worker_type | EC2 instance type for workers | "t2.small" | See below |
| os_channel | Container Linux AMI channel | stable | stable, beta, alpha |
| disk_size | Size of the EBS volume in GB | "40" | "100" |
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).
!!! tip "MTU"
If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
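For illustration, a fragment (to merge into the cluster definition above, not a complete config) enabling the larger MTU:

```tf
module "aws-tempest" {
  # ... arguments from the Cluster section above ...

  # optional: larger Calico MTU for Jumbo frame capable instance types
  network_mtu = 8981
}
```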

391
docs/cl/bare-metal.md Normal file
View File

@ -0,0 +1,391 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers.
Controllers are provisioned as etcd peers and run `etcd-member` (etcd3) and `kubelet`. Workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
* Machines with 2GB RAM, 30GB disk, PXE-enabled NIC, IPMI
* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment
* Matchbox v0.6+ deployment with API enabled
* Matchbox credentials `client.crt`, `client.key`, `ca.crt`
* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally
## Machines
Collect a MAC address from each machine. For machines with multiple PXE-enabled NICs, pick one of the MAC addresses. MAC addresses will be used to match machines to profiles during network boot.
* 52:54:00:a1:9c:ae (node1)
* 52:54:00:b2:2f:86 (node2)
* 52:54:00:c3:61:77 (node3)
Configure each machine to boot from the disk through IPMI or the BIOS menu.
```
ipmitool -H node1 -U USER -P PASS chassis bootdev disk options=persistent
```
During provisioning, you'll explicitly set the boot device to `pxe` for the next boot only. Machines will install (overwrite) the operating system to disk on PXE boot and reboot into the disk install.
!!! tip ""
Ask your hardware vendor to provide MACs and preconfigure IPMI, if possible. With that information, you can rack new servers, `terraform apply` with the new machine details, and power on machines that network boot and provision themselves into clusters.
## DNS
Create a DNS A (or AAAA) record for each node's default interface. Create a record that resolves to each controller node (or re-use the node record if there's one controller).
* node1.example.com (node1)
* node2.example.com (node2)
* node3.example.com (node3)
* myk8s.example.com (node1)
Cluster nodes will be configured to refer to the control plane and themselves by these fully qualified names and they'll be used in generated TLS certificates.
## Matchbox
Matchbox is an open-source app that matches network-booted bare-metal machines (based on labels like MAC, UUID, etc.) to profiles to automate cluster provisioning.
Install Matchbox on a Kubernetes cluster or dedicated server.
* Installing on [Kubernetes](https://coreos.com/matchbox/docs/latest/deployment.html#kubernetes) (recommended)
* Installing on a [server](https://coreos.com/matchbox/docs/latest/deployment.html#download)
!!! tip
Deploy Matchbox as a service that can be accessed by all of your bare-metal machines globally. This provides a single endpoint for using Terraform to manage bare-metal clusters at different sites. Typhoon will never include secrets in provisioning user-data, so you may even deploy Matchbox publicly.
Matchbox provides a TLS client-authenticated API that clients, like Terraform, can use to manage machine matching and profiles. Think of it like a cloud provider API, but for creating bare-metal instances.
[Generate TLS](https://coreos.com/matchbox/docs/latest/deployment.html#generate-tls-certificates) client credentials. Save the `ca.crt`, `client.crt`, and `client.key` where they can be referenced in Terraform configs.
```sh
mv ca.crt client.crt client.key ~/.config/matchbox/
```
Verify the matchbox read-only HTTP endpoints are accessible (port is configurable).
```sh
$ curl http://matchbox.example.com:8080
matchbox
```
Verify your TLS client certificate and key can be used to access the Matchbox API (port is configurable).
```sh
$ openssl s_client -connect matchbox.example.com:8081 \
-CAfile ~/.config/matchbox/ca.crt \
-cert ~/.config/matchbox/client.crt \
-key ~/.config/matchbox/client.key
```
## PXE Environment
Create an iPXE-enabled network boot environment. Configure PXE clients to chainload [iPXE](http://ipxe.org/cmd) and instruct iPXE clients to chainload from your Matchbox service's `/boot.ipxe` endpoint.
For networks already supporting iPXE clients, you can add a `default.ipxe` config.
```ini
# /var/www/html/ipxe/default.ipxe
chain http://matchbox.foo:8080/boot.ipxe
```
For networks with Ubiquiti Routers, you can [configure the router](../topics/hardware.md#ubiquiti) itself to chainload machines to iPXE and Matchbox.
For a small lab, you may wish to check out the [quay.io/coreos/dnsmasq](https://quay.io/repository/coreos/dnsmasq) container image and [copy-paste examples](https://github.com/coreos/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq).
Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup.html) to set up a compliant iPXE-enabled network. There is quite a bit of flexibility:
* Continue using existing DHCP, TFTP, or DNS services
* Configure specific machines, subnets, or architectures to chainload from Matchbox
* Place Matchbox behind a menu entry (timeout and default to Matchbox)
!!! note ""
TFTP chainloading to modern boot firmware, like iPXE, avoids issues with old NICs and allows faster transfer protocols like HTTP to be used.
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
```sh
$ terraform version
Terraform v0.11.1
```
Add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system.
```sh
wget https://github.com/coreos/terraform-provider-matchbox/releases/download/v0.2.2/terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
tar xzf terraform-provider-matchbox-v0.2.2-linux-amd64.tar.gz
sudo mv terraform-provider-matchbox-v0.2.2-linux-amd64/terraform-provider-matchbox /usr/local/bin/
```
Add the plugin to your `~/.terraformrc`.
```
providers {
matchbox = "/usr/local/bin/terraform-provider-matchbox"
}
```
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
```
## Provider
Configure the Matchbox provider to use your Matchbox API endpoint and client certificate in a `providers.tf` file.
```tf
provider "matchbox" {
endpoint = "matchbox.example.com:8081"
client_cert = "${file("~/.config/matchbox/client.crt")}"
client_key = "${file("~/.config/matchbox/client.key")}"
ca = "${file("~/.config/matchbox/ca.crt")}"
}
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
## Cluster
Define a Kubernetes cluster using the module `bare-metal/container-linux/kubernetes`.
```tf
module "bare-metal-mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.1"
providers = {
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
# bare-metal
cluster_name = "mercury"
matchbox_http_endpoint = "http://matchbox.example.com"
container_linux_channel = "stable"
container_linux_version = "1632.3.0"
# configuration
k8s_domain_name = "node1.example.com"
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/mercury"
# machines
controller_names = ["node1"]
controller_macs = ["52:54:00:a1:9c:ae"]
controller_domains = ["node1.example.com"]
worker_names = [
"node2",
"node3",
]
worker_macs = [
"52:54:00:b2:2f:86",
"52:54:00:c3:61:77",
]
worker_domains = [
"node2.example.com",
"node3.example.com",
]
}
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/container-linux/kubernetes/variables.tf) source.
## ssh-agent
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
!!! warning
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
## Apply
Initialize the config directory if this is the first use with Terraform.
```sh
terraform init
```
Get or update Terraform modules.
```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```
Plan the resources to be created.
```sh
$ terraform plan
Plan: 55 to add, 0 to change, 0 to destroy.
```
Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
```sh
$ terraform apply
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'...
module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed)
module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed)
...
```
Apply will then loop until it can successfully copy credentials to each machine and start the one-time Kubernetes bootstrap service. Proceed to the next step while this loops.
### Power
Power on each machine with the boot device set to `pxe` for the next boot only.
```sh
ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
ipmitool -H node1.example.com -U USER -P PASS power on
```
Machines will network boot, install Container Linux to disk, reboot into the disk install, and provision themselves as controllers or workers.
!!! tip ""
If this is the first test of your PXE-enabled network boot environment, watch the SOL console of a machine to spot any misconfigurations.
### Bootstrap
Wait for the `bootkube-start` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network.
```
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m10s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m20s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed)
module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024)
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
```
# before v1.10.1
$ ssh debug@node1.example.com
# after v1.10.1
$ ssh -p 2222 core@node1.example.com
```
To watch the bootstrap process in detail, SSH to the first controller and journal the logs.
```
$ ssh core@node1.example.com
$ journalctl -f -u bootkube
bootkube[5]: Pod Status: pod-checkpointer Running
bootkube[5]: Pod Status: kube-apiserver Running
bootkube[5]: Pod Status: kube-scheduler Running
bootkube[5]: Pod Status: kube-controller-manager Running
bootkube[5]: All self-hosted control plane components successfully started
bootkube[5]: Tearing down temporary bootstrap control plane...
```
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.10.1
node2.example.com Ready 11m v1.10.1
node3.example.com Ready 11m v1.10.1
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6qp7f 2/2 Running 1 11m
kube-system calico-node-gnjrm 2/2 Running 0 11m
kube-system calico-node-llbgt 2/2 Running 0 11m
kube-system kube-apiserver-7336w 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-b9chx 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-v30js 1/1 Running 1 11m
kube-system kube-dns-1187388186-mx9rt 3/3 Running 0 11m
kube-system kube-proxy-50sd4 1/1 Running 0 11m
kube-system kube-proxy-bczhp 1/1 Running 0 11m
kube-system kube-proxy-mp2fw 1/1 Running 0 11m
kube-system kube-scheduler-3895335239-fd3l7 1/1 Running 1 11m
kube-system kube-scheduler-3895335239-hfjv0 1/1 Running 0 11m
kube-system pod-checkpointer-wf65d 1/1 Running 0 11m
kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0 11m
```
## Going Further
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
## Variables
### Required
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name | mercury |
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:8080 |
| container_linux_channel | Container Linux channel | stable, beta, alpha |
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
| k8s_domain_name | FQDN resolving to the controller node(s). Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
| controller_names | Ordered list of controller short names | ["node1"] |
| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] |
| controller_domains | Ordered list of controller FQDNs | ["node1.example.com"] |
| worker_names | Ordered list of worker short names | ["node2", "node3"] |
| worker_macs | Ordered list of worker identifying MAC addresses | ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] |
| worker_domains | Ordered list of worker FQDNs | ["node2.example.com", "node3.example.com"] |
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this | false | true |
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
| container_linux_oem | Specify alternative OEM image ids for the disk install | "" | "vmware_raw", "xen" |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
| kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" |
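Note that `kernel_args` is a list. As a hedged example (a fragment to merge into the cluster definition above), passing an extra kernel argument for PXE boots:

```tf
module "bare-metal-mercury" {
  # ... arguments from the Cluster section above ...

  # optional: extra kernel arguments provided at PXE boot
  kernel_args = ["kvm-intel.nested=1"]
}
```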

276
docs/cl/digital-ocean.md Normal file
View File

@ -0,0 +1,276 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Digital Ocean.
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, firewall rules, DNS records, tags, and droplets for Kubernetes controllers and workers will be created.
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
* Digital Ocean Account and Token
* Digital Ocean Domain (registered Domain Name or delegated subdomain)
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
```sh
$ terraform version
Terraform v0.11.1
```
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```
Add the plugin to your `~/.terraformrc`.
```
providers {
ct = "/usr/local/bin/terraform-provider-ct"
}
```
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
```
## Provider
Log in to [DigitalOcean](https://cloud.digitalocean.com) or create an [account](https://cloud.digitalocean.com/registrations/new) if you don't have one.
Generate a Personal Access Token with read/write scope from the [API tab](https://cloud.digitalocean.com/settings/api/tokens). Write the token to a file that can be referenced in configs.
```sh
mkdir -p ~/.config/digital-ocean
echo "TOKEN" > ~/.config/digital-ocean/token
```
Configure the DigitalOcean provider to use your token in a `providers.tf` file.
```tf
provider "digitalocean" {
version = "0.1.3"
token = "${chomp(file("~/.config/digital-ocean/token"))}"
alias = "default"
}
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
## Cluster
Define a Kubernetes cluster using the module `digital-ocean/container-linux/kubernetes`.
```tf
module "digital-ocean-nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.1"
providers = {
digitalocean = "digitalocean.default"
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
# Digital Ocean
cluster_name = "nemo"
region = "nyc3"
dns_zone = "digital-ocean.example.com"
# configuration
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
asset_dir = "/home/user/.secrets/clusters/nemo"
# optional
worker_count = 2
worker_type = "s-1vcpu-1gb"
}
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/container-linux/kubernetes/variables.tf) source.
## ssh-agent
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
!!! warning
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
## Apply
Initialize the config directory if this is the first use with Terraform.
```sh
terraform init
```
Get or update Terraform modules.
```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```
Plan the resources to be created.
```sh
$ terraform plan
Plan: 54 to add, 0 to change, 0 to destroy.
```
Apply the changes to create the cluster.
```sh
$ terraform apply
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (30s elapsed)
module.digital-ocean-nemo.null_resource.bootkube-start: Provisioning with 'remote-exec'...
...
module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (6m20s elapsed)
module.digital-ocean-nemo.null_resource.bootkube-start: Creation complete (ID: 7599298447329218468)
Apply complete! Resources: 54 added, 0 changed, 0 destroyed.
```
In 3-6 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.10.1
10.132.115.81 Ready 10m v1.10.1
10.132.124.107 Ready 10m v1.10.1
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-apiserver-n10qr 1/1 Running 0 11m
kube-system kube-controller-manager-3271970485-37gtw 1/1 Running 1 11m
kube-system kube-controller-manager-3271970485-p52t5 1/1 Running 0 11m
kube-system kube-dns-1187388186-ld1j7 3/3 Running 0 11m
kube-system kube-flannel-1cq1v 2/2 Running 0 11m
kube-system kube-flannel-hq9t0 2/2 Running 1 11m
kube-system kube-flannel-v0g9w 2/2 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m
kube-system kube-proxy-fh3td 1/1 Running 0 11m
kube-system kube-proxy-k35rc 1/1 Running 0 11m
kube-system kube-scheduler-3895335239-2bc4c 1/1 Running 0 11m
kube-system kube-scheduler-3895335239-b7q47 1/1 Running 1 11m
kube-system pod-checkpointer-pr1lq 1/1 Running 0 11m
kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0 10m
```
## Going Further
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
## Variables
### Required
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | nemo |
| region | Digital Ocean region | nyc1, sfo2, fra1, tor1 |
| dns_zone | Digital Ocean domain (i.e. DNS zone) | do.example.com |
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | /home/user/.secrets/nemo |
#### DNS Zone
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
You'll need a registered domain name or a delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
```tf
resource "digitalocean_domain" "zone-for-clusters" {
name = "do.example.com"
# Digital Ocean oddly requires an IP here. You may have to delete the A record it makes. :(
ip_address = "8.8.8.8"
}
```
!!! tip ""
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
#### SSH Fingerprints
DigitalOcean droplets are created with your SSH public key "fingerprint" (i.e. MD5 hash) to allow access. If your SSH public key is at `~/.ssh/id_rsa.pub`, find the fingerprint with,
```bash
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'
MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7
```
If you use `ssh-agent` (e.g. Yubikey for SSH), find the fingerprint with,
```
ssh-add -l -E md5
2048 MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 cardno:000603633110 (RSA)
```
Digital Ocean requires the SSH public key be uploaded to your account, so you may also find the fingerprint under Settings -> Security. Finally, if you don't have an SSH key, [create one now](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/).
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Droplet type for controllers | s-2vcpu-2gb | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
| image | Container Linux image for instances | "coreos-stable" | coreos-stable, coreos-beta, coreos-alpha |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`.
!!! warning
Do not choose a `controller_type` with less than 2GB of memory. Smaller droplets are not sufficient for running a controller, and bootstrapping will fail.

274
docs/cl/google-cloud.md Normal file
View File

@ -0,0 +1,274 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine (not GKE).
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a network, firewall rules, managed instance groups of Kubernetes controllers and workers, network load balancers for controllers and workers, and health checks will be created.
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements
* Google Cloud Account and Service Account
* Google Cloud DNS Zone (registered Domain Name or delegated subdomain)
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
## Terraform Setup
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
```sh
$ terraform version
Terraform v0.11.1
```
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
```sh
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
```
Add the plugin to your `~/.terraformrc`.
```
providers {
ct = "/usr/local/bin/terraform-provider-ct"
}
```
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
```
cd infra/clusters
```
## Provider
Log in to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [sign up](https://cloud.google.com/free/) if you don't have an account.
Select "Credentials", and create service account key credentials. Choose the "Compute Engine default service account" and save the JSON private key to a file that can be referenced in configs.
```sh
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
```
Configure the Google Cloud provider to use your service account key, project-id, and region in a `providers.tf` file.
```tf
provider "google" {
version = "1.6"
alias = "default"
credentials = "${file("~/.config/google-cloud/terraform.json")}"
project = "project-id"
region = "us-central1"
}
provider "local" {
version = "~> 1.0"
alias = "default"
}
provider "null" {
version = "~> 1.0"
alias = "default"
}
provider "template" {
version = "~> 1.0"
alias = "default"
}
provider "tls" {
version = "~> 1.0"
alias = "default"
}
```
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
!!! tip
A project may contain multiple clusters if you wish. Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
## Cluster
Define a Kubernetes cluster using the module `google-cloud/container-linux/kubernetes`.
```tf
module "google-cloud-yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"
providers = {
google = "google.default"
local = "local.default"
null = "null.default"
template = "template.default"
tls = "tls.default"
}
# Google Cloud
cluster_name = "yavin"
region = "us-central1"
dns_zone = "example.com"
dns_zone_name = "example-zone"
# configuration
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
asset_dir = "/home/user/.secrets/clusters/yavin"
# optional
worker_count = 2
}
```
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
## ssh-agent
Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
```sh
ssh-add ~/.ssh/id_rsa
ssh-add -L
```
!!! warning
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
## Apply
Initialize the config directory if this is the first use with Terraform.
```sh
terraform init
```
Get or update Terraform modules.
```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```
Plan the resources to be created.
```sh
$ terraform plan
Plan: 64 to add, 0 to change, 0 to destroy.
```
Apply the changes to create the cluster.
```sh
$ terraform apply
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (10s elapsed)
...
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m30s elapsed)
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m40s elapsed)
module.google-cloud-yavin.null_resource.bootkube-start: Creation complete (ID: 5768638456220583358)
Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
```
In 4-8 minutes, the Kubernetes cluster will be ready.
## Verify
[Install kubectl](https://coreos.com/kubernetes/docs/latest/configure-kubectl.html) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes.
```
$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-0.c.example-com.internal Ready 6m v1.10.1
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.10.1
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.10.1
```
List the pods.
```
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-1cs8z 2/2 Running 0 6m
kube-system calico-node-d1l5b 2/2 Running 0 6m
kube-system calico-node-sp9ps 2/2 Running 0 6m
kube-system kube-apiserver-zppls 1/1 Running 0 6m
kube-system kube-controller-manager-3271970485-gh9kt 1/1 Running 0 6m
kube-system kube-controller-manager-3271970485-h90v8 1/1 Running 1 6m
kube-system kube-dns-1187388186-zj5dl 3/3 Running 0 6m
kube-system kube-proxy-117v6 1/1 Running 0 6m
kube-system kube-proxy-9886n 1/1 Running 0 6m
kube-system kube-proxy-njn47 1/1 Running 0 6m
kube-system kube-scheduler-3895335239-5x87r 1/1 Running 0 6m
kube-system kube-scheduler-3895335239-bzrrt 1/1 Running 1 6m
kube-system pod-checkpointer-l6lrt 1/1 Running 0 6m
```
## Going Further
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
!!! note
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
## Variables
### Required
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "yavin" |
| region | Google Cloud region | "us-central1" |
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Container Linux [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep coreos`.
#### DNS Zone
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
You'll need a registered domain name or a delegated subdomain in a Google Cloud DNS zone. You can set this up once and create many clusters with unique names.
```tf
resource "google_dns_managed_zone" "zone-for-clusters" {
dns_name = "google-cloud.example.com."
name = "example-zone"
description = "Production DNS zone"
}
```
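As with the AWS tutorial, the zone name can be interpolated rather than hard-coded. A sketch (this wiring is illustrative; hard-coding "example-zone" as in the Cluster section works just as well):

```tf
module "google-cloud-yavin" {
  # ... arguments from the Cluster section above ...

  dns_zone      = "google-cloud.example.com"
  dns_zone_name = "${google_dns_managed_zone.zone-for-clusters.name}"
}
```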
!!! tip ""
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
### Optional
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| os_image | Container Linux image for compute instances | "coreos-stable" | "coreos-stable-1632-3-0-v20180215" |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).
#### Preemption
Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (pods are rescheduled, but nodes cannot be drained first), and preemption provides a nice reward for running fault-tolerant cluster systems.
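As a sketch (a fragment to merge into the cluster definition above), enabling preemptible workers:

```tf
module "google-cloud-yavin" {
  # ... arguments from the Cluster section above ...

  # optional: run workers on cheaper preemptible instances
  worker_preemptible = "true"
}
```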

View File

@ -24,19 +24,19 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| Platform | Operating System | Terraform Module | Status |
|---------------|------------------|------------------|--------|
| AWS | Container Linux | [aws/container-linux/kubernetes](aws.md) | stable |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal.md) | stable |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean.md) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud.md) | beta |
| AWS | Container Linux | [aws/container-linux/kubernetes](cl/aws.md) | stable |
| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](atomic/aws.md) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](cl/bare-metal.md) | stable |
| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | alpha |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta |
| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | alpha |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | beta |
| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | very alpha |
## Usage
## Documentation
* [Concepts](concepts.md)
* Tutorials
* [AWS](aws.md)
* [Bare-Metal](bare-metal.md)
* [Digital Ocean](digital-ocean.md)
* [Google-Cloud](google-cloud.md)
* Tutorials for [AWS](cl/aws.md), [Bare-Metal](cl/bare-metal.md), [Digital Ocean](cl/digital-ocean.md), and [Google-Cloud](cl/google-cloud.md)
## Example

View File

@ -1,6 +1,6 @@
# Hardware
While bare-metal Kubernetes clusters have no special hardware requirements (beyond the [min reqs](/bare-metal.md#requirements)), Typhoon does ensure certain router and server hardware integrates well with Kubernetes.
Typhoon ensures certain router and server hardware integrates well with bare-metal Kubernetes.
## Ubiquiti

View File

@ -78,7 +78,7 @@ $ terraform apply
Apply complete! Resources: 0 added, 0 changed, 55 destroyed.
```
Re-provision a new cluster by following the bare-metal [tutorial](../bare-metal.md#cluster).
Re-provision a new cluster by following the bare-metal [tutorial](../cl/bare-metal.md#cluster).
### Cloud

View File

@ -1,4 +1,8 @@
site_name: Typhoon
site_name: 'Typhoon'
site_description: 'A minimal and free Kubernetes distribution'
site_author: 'Dalton Hubble'
repo_name: 'poseidon/typhoon'
repo_url: 'https://github.com/poseidon/typhoon'
theme:
name: 'material'
palette:
@ -9,13 +13,12 @@ theme:
font:
text: 'Roboto Slab'
code: 'Roboto Mono'
extra:
social:
- type: 'github'
link: 'https://github.com/poseidon'
- type: 'twitter'
link: 'https://twitter.com/typhoon8s'
repo_name: 'poseidon/typhoon'
repo_url: 'https://github.com/poseidon/typhoon'
google_analytics:
- 'UA-38995133-6'
- 'auto'
@ -40,10 +43,26 @@ markdown_extensions:
pages:
- Home: 'index.md'
- 'Concepts': 'concepts.md'
- 'AWS': 'aws.md'
- 'Bare-Metal': 'bare-metal.md'
- 'Digital Ocean': 'digital-ocean.md'
- 'Google Cloud': 'google-cloud.md'
- 'Container Linux':
- 'AWS': 'cl/aws.md'
- 'Bare-Metal': 'cl/bare-metal.md'
- 'Digital Ocean': 'cl/digital-ocean.md'
- 'Google Cloud': 'cl/google-cloud.md'
- 'Fedora Atomic':
- 'AWS': 'atomic/aws.md'
- 'Bare-Metal': 'atomic/bare-metal.md'
- 'Digital Ocean': 'atomic/digital-ocean.md'
- 'Google Cloud': 'atomic/google-cloud.md'
- 'Topics':
- 'Maintenance': 'topics/maintenance.md'
- 'Hardware': 'topics/hardware.md'
- 'Security': 'topics/security.md'
- 'Performance': 'topics/performance.md'
- 'FAQ': 'faq.md'
- 'Advanced':
- 'Overview': 'advanced/overview.md'
- 'Customization': 'advanced/customization.md'
- 'Worker Pools': 'advanced/worker-pools.md'
- 'Addons':
- 'Overview': 'addons/overview.md'
- 'CLUO': 'addons/cluo.md'
@ -51,13 +70,3 @@ pages:
- 'Nginx Ingress': 'addons/ingress.md'
- 'Prometheus': 'addons/prometheus.md'
- 'Grafana': 'addons/grafana.md'
- 'Topics':
- 'Maintenance': 'topics/maintenance.md'
- 'Hardware': 'topics/hardware.md'
- 'Security': 'topics/security.md'
- 'Performance': 'topics/performance.md'
- 'FAQ': 'faq.md'
- 'Advanced':
- 'Overview': 'advanced/overview.md'
- 'Customization': 'advanced/customization.md'
- 'Worker Pools': 'advanced/worker-pools.md'