diff --git a/README.md b/README.md index 69b3ef8a..e6ab04f0 100644 --- a/README.md +++ b/README.md @@ -29,15 +29,6 @@ Typhoon provides a Terraform Module for each supported operating system and plat | Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta | | Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | stable | -Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS. - -| Platform | Operating System | Terraform Module | Status | -|---------------|------------------|------------------|--------| -| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](aws/fedora-atomic/kubernetes) | deprecated | -| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](bare-metal/fedora-atomic/kubernetes) | deprecated | -| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](digital-ocean/fedora-atomic/kubernetes) | deprecated | -| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](google-cloud/fedora-atomic/kubernetes) | deprecated | - ## Documentation * [Docs](https://typhoon.psdn.io) diff --git a/azure/container-linux/kubernetes/README.md b/azure/container-linux/kubernetes/README.md index ab7e0663..3ca37356 100644 --- a/azure/container-linux/kubernetes/README.md +++ b/azure/container-linux/kubernetes/README.md @@ -12,8 +12,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster ## Features * Kubernetes v1.14.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube)) -* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking -* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled +* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking +* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/cl/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Ready for Ingress, Prometheus, Grafana, and other optional [addons](https://typhoon.psdn.io/addons/overview/) diff --git a/digital-ocean/container-linux/kubernetes/README.md b/digital-ocean/container-linux/kubernetes/README.md index 26a10f39..5e07e4d9 100644 --- a/digital-ocean/container-linux/kubernetes/README.md +++ b/digital-ocean/container-linux/kubernetes/README.md @@ -12,8 +12,8 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster ## Features * Kubernetes v1.14.3 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube)) -* Single or multi-master, [flannel](https://github.com/coreos/flannel) networking -* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled +* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking +* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) * Advanced features 
like [snippets](https://typhoon.psdn.io/advanced/customization/#container-linux) customization * Ready for Ingress, Prometheus, Grafana, CSI, and other [addons](https://typhoon.psdn.io/addons/overview/) diff --git a/docs/advanced/customization.md b/docs/advanced/customization.md index b653d9fb..e4ff1a6a 100644 --- a/docs/advanced/customization.md +++ b/docs/advanced/customization.md @@ -136,10 +136,6 @@ Container Linux Configs (and the CoreOS Ignition system) create immutable infras !!! danger Destroying and recreating controller instances is destructive! etcd runs on controller instances and stores data there. Do not modify controller snippets. See [blue/green](/topics/maintenance/#upgrades) clusters. -### Fedora Atomic - -Cloud-Init and kickstart (bare-metal only) declare how a Fedora Atomic instance should be provisioned. Customizing these declarations in ways beyond the provided Terraform variables is unsupported. - ## Architecture Typhoon chooses variables to expose with purpose. If you must customize clusters in ways that aren't supported by input variables, fork Typhoon and maintain a repository with customizations. Reference the repository by changing the username. diff --git a/docs/advanced/worker-pools.md b/docs/advanced/worker-pools.md index 13d1c39e..7a8170b1 100644 --- a/docs/advanced/worker-pools.md +++ b/docs/advanced/worker-pools.md @@ -5,10 +5,8 @@ Typhoon AWS, Azure, and Google Cloud allow additional groups of workers to be de Internal Terraform Modules: * `aws/container-linux/kubernetes/workers` -* `aws/fedora-atomic/kubernetes/workers` * `azure/container-linux/kubernetes/workers` * `google-cloud/container-linux/kubernetes/workers` -* `google-cloud/fedora-atomic/kubernetes/workers` ## AWS diff --git a/docs/atomic/aws.md b/docs/atomic/aws.md deleted file mode 100644 index 334a26ca..00000000 --- a/docs/atomic/aws.md +++ /dev/null @@ -1,243 +0,0 @@ -# AWS - -!!! danger - Typhoon for Fedora Atomic will not be updated much beyond Kubernetes v1.13. - -In this tutorial, we'll create a Kubernetes v1.14.3 cluster on AWS with Fedora Atomic. - -We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets. Instances are provisioned on first boot with cloud-init. - -Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster. - -## Requirements - -* AWS Account and IAM credentials -* AWS Route53 DNS Zone (registered Domain Name or delegated subdomain) -* Terraform v0.11.x installed locally - -## Terraform Setup - -Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system. - -```sh -$ terraform version -Terraform v0.11.12 -``` - -Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`). - -``` -cd infra/clusters -``` - -## Provider - -Login to your AWS IAM dashboard and find your IAM user. Select "Security Credentials" and create an access key. Save the id and secret to a file that can be referenced in configs. 
- -``` -[default] -aws_access_key_id = xxx -aws_secret_access_key = yyy -``` - -Configure the AWS provider to use your access key credentials in a `providers.tf` file. - -```tf -provider "aws" { - version = "~> 2.3.0" - alias = "default" - - region = "eu-central-1" - shared_credentials_file = "/home/user/.config/aws/credentials" -} - -provider "local" { - version = "~> 1.0" - alias = "default" -} - -provider "null" { - version = "~> 1.0" - alias = "default" -} - -provider "template" { - version = "~> 1.0" - alias = "default" -} - -provider "tls" { - version = "~> 1.0" - alias = "default" -} -``` - -Additional configuration options are described in the `aws` provider [docs](https://www.terraform.io/docs/providers/aws/). - -!!! tip - Regions are listed in [docs](http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region) or with `aws ec2 describe-regions`. - -## Cluster - -Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`. - -```tf -module "aws-tempest" { - source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.14.3" - - providers = { - aws = "aws.default" - local = "local.default" - null = "null.default" - template = "template.default" - tls = "tls.default" - } - - # AWS - cluster_name = "tempest" - dns_zone = "aws.example.com" - dns_zone_id = "Z3PAABBCFAKEC0" - - # configuration - ssh_authorized_key = "ssh-rsa AAAAB3Nz..." - asset_dir = "/home/user/.secrets/clusters/tempest" - - # optional - worker_count = 2 - worker_type = "t2.medium" -} -``` - -Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source. - -## ssh-agent - -Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`. - -```sh -ssh-add ~/.ssh/id_rsa -ssh-add -L -``` - -## Apply - -Initialize the config directory if this is the first use with Terraform. - -```sh -terraform init -``` - -Plan the resources to be created. - -```sh -$ terraform plan -Plan: 106 to add, 0 to change, 0 to destroy. -``` - -Apply the changes to create the cluster. - -```sh -$ terraform apply -... -module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapsed) -module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed) -module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143) - -Apply complete! Resources: 106 added, 0 changed, 0 destroyed. -``` - -In 5-10 minutes, the Kubernetes cluster will be ready. - -## Verify - -[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes. - -``` -$ export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -ip-10-0-3-155 Ready controller,master 10m v1.14.3 -ip-10-0-26-65 Ready node 10m v1.14.3 -ip-10-0-41-21 Ready node 10m v1.14.3 -``` - -List the pods. 
- -``` -$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-node-1m5bf 2/2 Running 0 34m -kube-system calico-node-7jmr1 2/2 Running 0 34m -kube-system calico-node-bknc8 2/2 Running 0 34m -kube-system coredns-1187388186-wx1lg 1/1 Running 0 34m -kube-system coredns-1187388186-qjnvp 1/1 Running 0 34m -kube-system kube-apiserver-4mjbk 1/1 Running 0 34m -kube-system kube-controller-manager-3597210155-j2jbt 1/1 Running 1 34m -kube-system kube-controller-manager-3597210155-j7g7x 1/1 Running 0 34m -kube-system kube-proxy-14wxv 1/1 Running 0 34m -kube-system kube-proxy-9vxh2 1/1 Running 0 34m -kube-system kube-proxy-sbbsh 1/1 Running 0 34m -kube-system kube-scheduler-3359497473-5plhf 1/1 Running 0 34m -kube-system kube-scheduler-3359497473-r7zg7 1/1 Running 1 34m -kube-system pod-checkpointer-4kxtl 1/1 Running 0 34m -kube-system pod-checkpointer-4kxtl-ip-10-0-3-155 1/1 Running 0 33m -``` - -## Going Further - -Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/). - -## Variables - -Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source. - -### Required - -| Name | Description | Example | -|:-----|:------------|:--------| -| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" | -| dns_zone | AWS Route53 DNS zone | "aws.example.com" | -| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" | -| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." | -| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" | - -#### DNS Zone - -Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`. - -You'll need a registered domain name or delegated subdomain on AWS Route53. You can set this up once and create many clusters with unique names. - -```tf -resource "aws_route53_zone" "zone-for-clusters" { - name = "aws.example.com." -} -``` - -Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`. - -!!! tip "" - If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html). - -### Optional - -| Name | Description | Default | Example | -|:-----|:------------|:--------|:--------| -| controller_count | Number of controllers (i.e. masters) | 1 | 1 | -| worker_count | Number of workers | 1 | 3 | -| controller_type | EC2 instance type for controllers | "t3.small" | See below | -| worker_type | EC2 instance type for workers | "t3.small" | See below | -| disk_size | Size of the EBS volume in GB | "40" | "100" | -| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 | -| disk_iops | IOPS of the EBS volume | "0" (i.e. auto) | "400" | -| worker_price | Spot price in USD for workers. 
Leave as default empty string for regular on-demand instances | "" | "0.10" | -| networking | Choice of networking provider | "calico" | "calico" or "flannel" | -| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 | -| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" | -| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" | -| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" | -| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" | - -Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/). - -!!! warning - Do not choose a `controller_type` smaller than `t2.small`. Smaller instances are not sufficient for running a controller. diff --git a/docs/atomic/bare-metal.md b/docs/atomic/bare-metal.md deleted file mode 100644 index d7b9424a..00000000 --- a/docs/atomic/bare-metal.md +++ /dev/null @@ -1,419 +0,0 @@ -# Bare-Metal - -!!! danger - Typhoon for Fedora Atomic will not be updated much beyond Kubernetes v1.13. - -In this tutorial, we'll network boot and provision a Kubernetes v1.14.3 cluster on bare-metal with Fedora Atomic. - -First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init. - -Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster. - -## Requirements - -* Machines with 2GB RAM, 30GB disk, PXE-enabled NIC, IPMI -* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment -* Matchbox v0.7+ deployment with API enabled -* HTTP server for Fedora install assets and ostree repo -* Matchbox credentials `client.crt`, `client.key`, `ca.crt` -* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) installed locally - -## Machines - -Collect a MAC address from each machine. For machines with multiple PXE-enabled NICs, pick one of the MAC addresses. MAC addresses will be used to match machines to profiles during network boot. - -* 52:54:00:a1:9c:ae (node1) -* 52:54:00:b2:2f:86 (node2) -* 52:54:00:c3:61:77 (node3) - -Configure each machine to boot from the disk through IPMI or the BIOS menu. - -``` -ipmitool -H node1 -U USER -P PASS chassis bootdev disk options=persistent -``` - -During provisioning, you'll explicitly set the boot device to `pxe` for the next boot only. Machines will install (overwrite) the operating system to disk on PXE boot and reboot into the disk install. - -!!! tip "" - Ask your hardware vendor to provide MACs and preconfigure IPMI, if possible. With it, you can rack new servers, `terraform apply` with new info, and power on machines that network boot and provision into clusters. 
- -## DNS - -Create a DNS A (or AAAA) record for each node's default interface. Create a record that resolves to each controller node (or re-use the node record if there's one controller). - -* node1.example.com (node1) -* node2.example.com (node2) -* node3.example.com (node3) -* myk8s.example.com (node1) - -Cluster nodes will be configured to refer to the control plane and themselves by these fully qualified names and they'll be used in generated TLS certificates. - -## Matchbox - -Matchbox is an open-source app that matches network-booted bare-metal machines (based on labels like MAC, UUID, etc.) to profiles to automate cluster provisioning. - -Install Matchbox on a Kubernetes cluster or dedicated server. - -* Installing on [Kubernetes](https://coreos.com/matchbox/docs/latest/deployment.html#kubernetes) (recommended) -* Installing on a [server](https://coreos.com/matchbox/docs/latest/deployment.html#download) - -!!! tip - Deploy Matchbox as service that can be accessed by all of your bare-metal machines globally. This provides a single endpoint to use Terraform to manage bare-metal clusters at different sites. Typhoon will never include secrets in provisioning user-data so you may even deploy matchbox publicly. - -Matchbox provides a TLS client-authenticated API that clients, like Terraform, can use to manage machine matching and profiles. Think of it like a cloud provider API, but for creating bare-metal instances. - -[Generate TLS](https://coreos.com/matchbox/docs/latest/deployment.html#generate-tls-certificates) client credentials. Save the `ca.crt`, `client.crt`, and `client.key` where they can be referenced in Terraform configs. - -```sh -mv ca.crt client.crt client.key ~/.config/matchbox/ -``` - -Verify the matchbox read-only HTTP endpoints are accessible (port is configurable). - -```sh -$ curl http://matchbox.example.com:8080 -matchbox -``` - -Verify your TLS client certificate and key can be used to access the Matchbox API (port is configurable). - -```sh -$ openssl s_client -connect matchbox.example.com:8081 \ - -CAfile ~/.config/matchbox/ca.crt \ - -cert ~/.config/matchbox/client.crt \ - -key ~/.config/matchbox/client.key -``` - -## PXE Environment - -Create a iPXE-enabled network boot environment. Configure PXE clients to chainload [iPXE](http://ipxe.org/cmd) and instruct iPXE clients to chainload from your Matchbox service's `/boot.ipxe` endpoint. - -For networks already supporting iPXE clients, you can add a `default.ipxe` config. - -```ini -# /var/www/html/ipxe/default.ipxe -chain http://matchbox.foo:8080/boot.ipxe -``` - -For networks with Ubiquiti Routers, you can [configure the router](/topics/hardware/#ubiquiti) itself to chainload machines to iPXE and Matchbox. - -For a small lab, you may wish to checkout the [quay.io/poseidon/dnsmasq](https://quay.io/repository/poseidon/dnsmasq) container image and [copy-paste examples](https://github.com/poseidon/matchbox/blob/master/Documentation/network-setup.md#coreosdnsmasq). - -Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup.html) to setup a compliant iPXE-enabled network. There is quite a bit of flexibility: - -* Continue using existing DHCP, TFTP, or DNS services -* Configure specific machines, subnets, or architectures to chainload from Matchbox -* Place Matchbox behind a menu entry (timeout and default to Matchbox) - -!!! note "" - TFTP chainloading to modern boot firmware, like iPXE, avoids issues with old NICs and allows faster transfer protocols like HTTP to be used. 
- -## Atomic Assets - -Fedora Atomic network installations require a local mirror of assets. Configure an HTTP server to serve the Atomic install tree and ostree repo. - -``` -sudo dnf install -y httpd -sudo firewall-cmd --permenant --add-port=80/tcp -sudo systemctl enable httpd --now -``` - -Download the [Fedora Atomic](https://getfedora.org/en/atomic/download/) ISO which contains install files and add them to the serve directory. - -``` -sudo mount -o loop,ro Fedora-AtomicHost-ostree-*.iso /mnt -sudo mkdir -p /var/www/html/fedora/28 -sudo cp -av /mnt/* /var/www/html/fedora/28/ -sudo umount /mnt -``` - -Checkout the [fedora-atomic](https://pagure.io/fedora-atomic) ostree manifest repo. - -``` -git clone https://pagure.io/fedora-atomic.git && cd fedora-atomic -git checkout f28 -``` - -Compose an ostree repo from RPM sources. - -``` -mkdir repo -ostree init --repo=repo --mode=archive -sudo dnf install rpm-ostree -sudo rpm-ostree compose tree --repo=repo fedora-atomic-host.json -``` - -Serve the ostree `repo` as well. - -``` -sudo cp -r repo /var/www/html/fedora/28/ -tree /var/www/html/fedora/28/ -├── images -│   ├── pxeboot -│      ├── initrd.img -│      └── vmlinuz -├── isolinux/ -├── repo/ -``` - -Verify `vmlinuz`, `initrd.img`, and `repo` are accessible from the HTTP server (i.e. `atomic_assets_endpoint`). - -``` -curl http://example.com/fedora/28/ -``` - -!!! note - It is possible to use the Matchbox `/assets` [cache](https://github.com/poseidon/matchbox/blob/master/Documentation/matchbox.md#assets) as an HTTP server. - -## Terraform Setup - -Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system. - -```sh -$ terraform version -Terraform v0.11.12 -``` - -Add the [terraform-provider-matchbox](https://github.com/poseidon/terraform-provider-matchbox) plugin binary for your system to `~/.terraform.d/plugins/`, noting the final name. - -```sh -wget https://github.com/poseidon/terraform-provider-matchbox/releases/download/v0.2.3/terraform-provider-matchbox-v0.2.3-linux-amd64.tar.gz -tar xzf terraform-provider-matchbox-v0.2.3-linux-amd64.tar.gz -mv terraform-provider-matchbox-v0.2.3-linux-amd64/terraform-provider-matchbox ~/.terraform.d/plugins/terraform-provider-matchbox_v0.2.3 -``` - -Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`). - -``` -cd infra/clusters -``` - -## Provider - -Configure the Matchbox provider to use your Matchbox API endpoint and client certificate in a `providers.tf` file. - -```tf -provider "matchbox" { - version = "0.2.3" - endpoint = "matchbox.example.com:8081" - client_cert = "${file("~/.config/matchbox/client.crt")}" - client_key = "${file("~/.config/matchbox/client.key")}" - ca = "${file("~/.config/matchbox/ca.crt")}" -} - -provider "local" { - version = "~> 1.0" - alias = "default" -} - -provider "null" { - version = "~> 1.0" - alias = "default" -} - -provider "template" { - version = "~> 1.0" - alias = "default" -} - -provider "tls" { - version = "~> 1.0" - alias = "default" -} -``` - -## Cluster - -Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernetes`. 
- -```tf -module "bare-metal-mercury" { - source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.14.3" - - providers = { - local = "local.default" - null = "null.default" - template = "template.default" - tls = "tls.default" - } - - # bare-metal - cluster_name = "mercury" - matchbox_http_endpoint = "http://matchbox.example.com" - atomic_assets_endpoint = "http://example.com/fedora/28" - - # configuration - k8s_domain_name = "node1.example.com" - ssh_authorized_key = "ssh-rsa AAAAB3Nz..." - asset_dir = "/home/user/.secrets/clusters/mercury" - - # machines - controller_names = ["node1"] - controller_macs = ["52:54:00:a1:9c:ae"] - controller_domains = ["node1.example.com"] - worker_names = [ - "node2", - "node3", - ] - worker_macs = [ - "52:54:00:b2:2f:86", - "52:54:00:c3:61:77", - ] - worker_domains = [ - "node2.example.com", - "node3.example.com", - ] -} -``` - -Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source. - -## ssh-agent - -Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`. - -```sh -ssh-add ~/.ssh/id_rsa -ssh-add -L -``` - -## Apply - -Initialize the config directory if this is the first use with Terraform. - -```sh -terraform init -``` - -Plan the resources to be created. - -```sh -$ terraform plan -Plan: 58 to add, 0 to change, 0 to destroy. -``` - -Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API. - -```sh -$ terraform apply -module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Provisioning with 'file'... -module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Provisioning with 'file'... -module.bare-metal-mercury.null_resource.copy-kubeconfig.0: Still creating... (10s elapsed) -module.bare-metal-mercury.null_resource.copy-etcd-secrets.0: Still creating... (10s elapsed) -... -``` - -Apply will then loop until it can successfully copy credentials to each machine and start the one-time Kubernetes bootstrap service. Proceed to the next step while this loops. - -### Power - -Power on each machine with the boot device set to `pxe` for the next boot only. - -```sh -ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe -ipmitool -H node1.example.com -U USER -P PASS power on -``` - -Machines will network boot, install Fedora Atomic to disk via kickstart, reboot into the disk install, and provision themselves as controllers or workers via cloud-init. - -!!! tip "" - If this is the first test of your PXE-enabled network boot environment, watch the SOL console of a machine to spot any misconfigurations. - -### Bootstrap - -Wait for the `bootkube-start` step to finish bootstrapping the Kubernetes control plane. This may take 5-15 minutes depending on your network. - -``` -module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m10s elapsed) -module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m20s elapsed) -module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s elapsed) -module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed) -module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024) - -Apply complete! 
Resources: 58 added, 0 changed, 0 destroyed. -``` - -To watch the bootstrap process in detail, SSH to the first controller and journal the logs. - -``` -$ ssh fedora@node1.example.com -$ journalctl -f -u bootkube -bootkube[5]: Pod Status: pod-checkpointer Running -bootkube[5]: Pod Status: kube-apiserver Running -bootkube[5]: Pod Status: kube-scheduler Running -bootkube[5]: Pod Status: kube-controller-manager Running -bootkube[5]: All self-hosted control plane components successfully started -bootkube[5]: Tearing down temporary bootstrap control plane... -``` - -## Verify - -[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes. - -``` -$ export KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -node1.example.com Ready controller,master 10m v1.14.3 -node2.example.com Ready node 10m v1.14.3 -node3.example.com Ready node 10m v1.14.3 -``` - -List the pods. - -``` -$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-node-6qp7f 2/2 Running 1 11m -kube-system calico-node-gnjrm 2/2 Running 0 11m -kube-system calico-node-llbgt 2/2 Running 0 11m -kube-system coredns-1187388186-dj3pd 1/1 Running 0 11m -kube-system coredns-1187388186-mx9rt 1/1 Running 0 11m -kube-system kube-apiserver-7336w 1/1 Running 0 11m -kube-system kube-controller-manager-3271970485-b9chx 1/1 Running 0 11m -kube-system kube-controller-manager-3271970485-v30js 1/1 Running 1 11m -kube-system kube-proxy-50sd4 1/1 Running 0 11m -kube-system kube-proxy-bczhp 1/1 Running 0 11m -kube-system kube-proxy-mp2fw 1/1 Running 0 11m -kube-system kube-scheduler-3895335239-fd3l7 1/1 Running 1 11m -kube-system kube-scheduler-3895335239-hfjv0 1/1 Running 0 11m -kube-system pod-checkpointer-wf65d 1/1 Running 0 11m -kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0 11m -``` - -## Going Further - -Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/). - -## Variables - -Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source. - -### Required - -| Name | Description | Example | -|:-----|:------------|:--------| -| cluster_name | Unique cluster name | mercury | -| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" | -| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/28" | -| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" | -| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3Nz..." 
| -| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" | -| controller_names | Ordered list of controller short names | ["node1"] | -| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] | -| controller_domains | Ordered list of controller FQDNs | ["node1.example.com"] | -| worker_names | Ordered list of worker short names | ["node2", "node3"] | -| worker_macs | Ordered list of worker identifying MAC addresses | ["52:54:00:b2:2f:86", "52:54:00:c3:61:77"] | -| worker_domains | Ordered list of worker FQDNs | ["node2.example.com", "node3.example.com"] | - -### Optional - -| Name | Description | Default | Example | -|:-----|:------------|:--------|:--------| -| networking | Choice of networking provider | "calico" | "calico" or "flannel" | -| network_mtu | CNI interface MTU (calico-only) | 1480 | - | -| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" | -| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" | -| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" | -| kernel_args | Additional kernel args to provide at PXE boot | [] | "kvm-intel.nested=1" | - diff --git a/docs/atomic/digital-ocean.md b/docs/atomic/digital-ocean.md deleted file mode 100644 index 0a4111bb..00000000 --- a/docs/atomic/digital-ocean.md +++ /dev/null @@ -1,250 +0,0 @@ -# Digital Ocean - -!!! danger - Typhoon for Fedora Atomic will not be updated much beyond Kubernetes v1.13. - -In this tutorial, we'll create a Kubernetes v1.14.3 cluster on DigitalOcean with Fedora Atomic. - -We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init. - -Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster. - -## Requirements - -* Digital Ocean Account and Token -* Digital Ocean Domain (registered Domain Name or delegated subdomain) -* Terraform v0.11.x installed locally - -## Terraform Setup - -Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system. - -```sh -$ terraform version -Terraform v0.11.12 -``` - -Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`). - -``` -cd infra/clusters -``` - -## Provider - -Login to [DigitalOcean](https://cloud.digitalocean.com) or create an [account](https://cloud.digitalocean.com/registrations/new), if you don't have one. - -Generate a Personal Access Token with read/write scope from the [API tab](https://cloud.digitalocean.com/settings/api/tokens). Write the token to a file that can be referenced in configs. - -```sh -mkdir -p ~/.config/digital-ocean -echo "TOKEN" > ~/.config/digital-ocean/token -``` - -Configure the DigitalOcean provider to use your token in a `providers.tf` file. 
- -```tf -provider "digitalocean" { - version = "~> 1.1.0" - token = "${chomp(file("~/.config/digital-ocean/token"))}" - alias = "default" -} - -provider "local" { - version = "~> 1.0" - alias = "default" -} - -provider "null" { - version = "~> 1.0" - alias = "default" -} - -provider "template" { - version = "~> 1.0" - alias = "default" -} - -provider "tls" { - version = "~> 1.0" - alias = "default" -} -``` - -## Cluster - -Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubernetes`. - -```tf -module "digital-ocean-nemo" { - source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.14.3" - - providers = { - digitalocean = "digitalocean.default" - local = "local.default" - null = "null.default" - template = "template.default" - tls = "tls.default" - } - - # Digital Ocean - cluster_name = "nemo" - region = "nyc3" - dns_zone = "digital-ocean.example.com" - - # configuration - ssh_authorized_key = "ssh-rsa AAAAB3Nz..." - ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"] - asset_dir = "/home/user/.secrets/clusters/nemo" - - # optional - worker_count = 2 - worker_type = "s-1vcpu-1gb" -} -``` - -Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source. - -## ssh-agent - -Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`. - -```sh -ssh-add ~/.ssh/id_rsa -ssh-add -L -``` - -## Apply - -Initialize the config directory if this is the first use with Terraform. - -```sh -terraform init -``` - -Plan the resources to be created. - -```sh -$ terraform plan -Plan: 54 to add, 0 to change, 0 to destroy. -``` - -Apply the changes to create the cluster. - -```sh -$ terraform apply -module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (30s elapsed) -module.digital-ocean-nemo.null_resource.bootkube-start: Provisioning with 'remote-exec'... -... -module.digital-ocean-nemo.null_resource.bootkube-start: Still creating... (6m20s elapsed) -module.digital-ocean-nemo.null_resource.bootkube-start: Creation complete (ID: 7599298447329218468) - -Apply complete! Resources: 54 added, 0 changed, 0 destroyed. -``` - -In 3-6 minutes, the Kubernetes cluster will be ready. - -## Verify - -[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes. - -``` -$ export KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -10.132.110.130 Ready controller,master 10m v1.14.3 -10.132.115.81 Ready node 10m v1.14.3 -10.132.124.107 Ready node 10m v1.14.3 -``` - -List the pods. 
- -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m -kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m -kube-system flannel-1cq1v 2/2 Running 0 11m -kube-system flannel-hq9t0 2/2 Running 1 11m -kube-system flannel-v0g9w 2/2 Running 0 11m -kube-system kube-apiserver-n10qr 1/1 Running 0 11m -kube-system kube-controller-manager-3271970485-37gtw 1/1 Running 1 11m -kube-system kube-controller-manager-3271970485-p52t5 1/1 Running 0 11m -kube-system kube-proxy-6kxjf 1/1 Running 0 11m -kube-system kube-proxy-fh3td 1/1 Running 0 11m -kube-system kube-proxy-k35rc 1/1 Running 0 11m -kube-system kube-scheduler-3895335239-2bc4c 1/1 Running 0 11m -kube-system kube-scheduler-3895335239-b7q47 1/1 Running 1 11m -kube-system pod-checkpointer-pr1lq 1/1 Running 0 11m -kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0 10m -``` - -## Going Further - -Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/). - -## Variables - -Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source. - -### Required - -| Name | Description | Example | -|:-----|:------------|:--------| -| cluster_name | Unique cluster name (prepended to dns_zone) | nemo | -| region | Digital Ocean region | nyc1, sfo2, fra1, tor1 | -| dns_zone | Digital Ocean domain (i.e. DNS zone) | do.example.com | -| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." | -| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] | -| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | /home/user/.secrets/nemo | - -#### DNS Zone - -Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`. - -You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names. - -```tf -# Declare a DigitalOcean record to also create a zone file -resource "digitalocean_domain" "zone-for-clusters" { - name = "do.example.com" - ip_address = "8.8.8.8" -} -``` - -!!! tip "" - If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean). - -#### SSH Fingerprints - -DigitalOcean droplets are created with your SSH public key "fingerprint" (i.e. MD5 hash) to allow access. If your SSH public key is at `~/.ssh/id_rsa`, find the fingerprint with, - -```bash -ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}' -MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 -``` - -If you use `ssh-agent` (e.g. Yubikey for SSH), find the fingerprint with, - -``` -ssh-add -l -E md5 -2048 MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 cardno:000603633110 (RSA) -``` - -Digital Ocean requires the SSH public key be uploaded to your account, so you may also find the fingerprint under Settings -> Security. Finally, if you don't have an SSH key, [create one now](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/). 
- -### Optional - -| Name | Description | Default | Example | -|:-----|:------------|:--------|:--------| -| controller_count | Number of controllers (i.e. masters) | 1 | 1 | -| worker_count | Number of workers | 1 | 3 | -| controller_type | Droplet type for controllers | s-2vcpu-2gb | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... | -| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... | -| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" | -| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" | -| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" | - -Check the list of valid [droplet types](https://developers.digitalocean.com/documentation/changelog/api-v2/new-size-slugs-for-droplet-plan-changes/) or use `doctl compute size list`. - -!!! warning - Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail. diff --git a/docs/atomic/google-cloud.md b/docs/atomic/google-cloud.md deleted file mode 100644 index 1e4ae76c..00000000 --- a/docs/atomic/google-cloud.md +++ /dev/null @@ -1,285 +0,0 @@ -# Google Cloud - -!!! danger - Typhoon for Fedora Atomic will not be updated much beyond Kubernetes v1.13. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Expect rough edges and changes. - -In this tutorial, we'll create a Kubernetes v1.14.3 cluster on Google Compute Engine with Fedora Atomic. - -We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init. - -Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `coredns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster. - -## Requirements - -* Google Cloud Account and Service Account -* Google Cloud DNS Zone (registered main Name or delegated subdomain) -* Terraform v0.11.x installed locally -* `gcloud` and `gsutil` for uploading a disk image to Google Cloud (temporary) - -## Terraform Setup - -Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system. - -```sh -$ terraform version -Terraform v0.11.12 -``` - -Read [concepts](/architecture/concepts/) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`). - -``` -cd infra/clusters -``` - -## Provider - -Login to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [signup](https://cloud.google.com/free/) if you don't have an account. - -Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" and "DNS Administrator" roles and save the JSON private key to a file that can be referenced in configs. 
- -```sh -mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json -``` - -Configure the Google Cloud provider to use your service account key, project-id, and region in a `providers.tf` file. - -```tf -provider "google" { - version = "~> 2.2.0" - alias = "default" - - credentials = "${file("~/.config/google-cloud/terraform.json")}" - project = "project-id" - region = "us-central1" -} - -provider "local" { - version = "~> 1.0" - alias = "default" -} - -provider "null" { - version = "~> 1.0" - alias = "default" -} - -provider "template" { - version = "~> 1.0" - alias = "default" -} - -provider "tls" { - version = "~> 1.0" - alias = "default" -} -``` - -Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html). - -!!! tip - Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may container multiple clusters across different regions. - -## Atomic Image - -Project Atomic does not publish official Fedora Atomic images to Google Cloud. However, Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project. - -Download the Fedora Atomic 28 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file. - -``` -xz -d Fedora-AtomicHost-28-20180528.0.x86_64.raw.xz -``` - -!!! warning - Download the exact dated version shown in docs. Fedora has no official Atomic images for Google Cloud. We've verified specific versions and found others to have problems. - -Rename the image `disk.raw`. Gzip compress and tar the image. - -``` -mv Fedora-AtomicHost-28-20180528.0.x86_64.raw disk.raw -tar cvzf fedora-atomic-28.tar.gz disk.raw -``` - -List available storage buckets and upload the tar.gz. - -``` -gsutil list -gsutil cp fedora-atomic-28.tar.gz gs://BUCKET_NAME -``` - -Create a Google Compute Engine image from the bucket file. - -``` -gcloud compute images list -gcloud compute images create fedora-atomic-28 --source-uri gs://BUCKET/fedora-atomic-28.tar.gz -``` - -Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-28). - -## Cluster - -Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kubernetes`. - -```tf -module "google-cloud-yavin" { - source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.14.3" - - providers = { - google = "google.default" - local = "local.default" - null = "null.default" - template = "template.default" - tls = "tls.default" - } - - # Google Cloud - cluster_name = "yavin" - region = "us-central1" - dns_zone = "example.com" - dns_zone_name = "example-zone" - - # configuration - ssh_authorized_key = "ssh-rsa AAAAB3Nz..." - asset_dir = "/home/user/.secrets/clusters/yavin" - os_image = "MY-PROJECT_ID/fedora-atomic-28" - - # optional - worker_count = 2 -} -``` - -Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source. - -## ssh-agent - -Initial bootstrapping requires `bootkube.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`. - -```sh -ssh-add ~/.ssh/id_rsa -ssh-add -L -``` - -## Apply - -Initialize the config directory if this is the first use with Terraform. 
- -```sh -terraform init -``` - -Plan the resources to be created. - -```sh -$ terraform plan -Plan: 73 to add, 0 to change, 0 to destroy. -``` - -Apply the changes to create the cluster. - -```sh -$ terraform apply -module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (10s elapsed) -... - -module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m30s elapsed) -module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m40s elapsed) -module.google-cloud-yavin.null_resource.bootkube-start: Creation complete (ID: 5768638456220583358) - -Apply complete! Resources: 73 added, 0 changed, 0 destroyed. -``` - -In 5-10 minutes, the Kubernetes cluster will be ready. - -## Verify - -[Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your system. Use the generated `kubeconfig` credentials to access the Kubernetes cluster and list nodes. - -``` -$ export KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig -$ kubectl get nodes -NAME ROLES STATUS AGE VERSION -yavin-controller-0.c.example-com.internal controller,master Ready 6m v1.14.3 -yavin-worker-jrbf.c.example-com.internal node Ready 5m v1.14.3 -yavin-worker-mzdm.c.example-com.internal node Ready 5m v1.14.3 -``` - -List the pods. - -``` -$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-node-1cs8z 2/2 Running 0 6m -kube-system calico-node-d1l5b 2/2 Running 0 6m -kube-system calico-node-sp9ps 2/2 Running 0 6m -kube-system coredns-1187388186-dkh3o 1/1 Running 0 6m -kube-system coredns-1187388186-zj5dl 1/1 Running 0 6m -kube-system kube-apiserver-zppls 1/1 Running 0 6m -kube-system kube-controller-manager-3271970485-gh9kt 1/1 Running 0 6m -kube-system kube-controller-manager-3271970485-h90v8 1/1 Running 1 6m -kube-system kube-proxy-117v6 1/1 Running 0 6m -kube-system kube-proxy-9886n 1/1 Running 0 6m -kube-system kube-proxy-njn47 1/1 Running 0 6m -kube-system kube-scheduler-3895335239-5x87r 1/1 Running 0 6m -kube-system kube-scheduler-3895335239-bzrrt 1/1 Running 1 6m -kube-system pod-checkpointer-l6lrt 1/1 Running 0 6m -``` - -## Going Further - -Learn about [maintenance](/topics/maintenance/) and [addons](/addons/overview/). - -## Variables - -Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source. - -### Required - -| Name | Description | Example | -|:-----|:------------|:--------| -| cluster_name | Unique cluster name (prepended to dns_zone) | "yavin" | -| region | Google Cloud region | "us-central1" | -| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" | -| dns_zone_name | Google Cloud DNS zone name | "example-zone" | -| os_image | Custom uploaded Fedora Atomic image | "PROJECT-ID/fedora-atomic-28" | -| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." | -| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" | - -Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones). - -#### DNS Zone - -Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`. - -You'll need a registered domain name or delegated subdomain on Google Cloud DNS. 
You can set this up once and create many clusters with unique names. - -```tf -resource "google_dns_managed_zone" "zone-for-clusters" { - dns_name = "google-cloud.example.com." - name = "example-zone" - description = "Production DNS zone" -} -``` - -!!! tip "" - If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers). - -### Optional - -| Name | Description | Default | Example | -|:-----|:------------|:--------|:--------| -| controller_count | Number of controllers (i.e. masters) | 1 | 3 | -| worker_count | Number of workers | 1 | 3 | -| controller_type | Machine type for controllers | "n1-standard-1" | See below | -| worker_type | Machine type for workers | "n1-standard-1" | See below | -| disk_size | Size of the disk in GB | 40 | 100 | -| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true | -| networking | Choice of networking provider | "calico" | "calico" or "flannel" | -| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" | -| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" | -| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by coredns. | "cluster.local" | "k8s.example.com" | - -Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types). - -#### Preemption - -Add `worker_preemeptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.` - diff --git a/docs/index.md b/docs/index.md index c7bbe044..593faf28 100644 --- a/docs/index.md +++ b/docs/index.md @@ -29,15 +29,6 @@ Typhoon provides a Terraform Module for each supported operating system and plat | Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](cl/digital-ocean.md) | beta | | Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](cl/google-cloud.md) | stable | -Fedora Atomic support is alpha and will evolve as Fedora Atomic is replaced by Fedora CoreOS. - -| Platform | Operating System | Terraform Module | Status | -|---------------|------------------|------------------|--------| -| AWS | Fedora Atomic | [aws/fedora-atomic/kubernetes](atomic/aws.md) | deprecated | -| Bare-Metal | Fedora Atomic | [bare-metal/fedora-atomic/kubernetes](atomic/bare-metal.md) | deprecated | -| Digital Ocean | Fedora Atomic | [digital-ocean/fedora-atomic/kubernetes](atomic/digital-ocean.md) | deprecated | -| Google Cloud | Fedora Atomic | [google-cloud/fedora-atomic/kubernetes](atomic/google-cloud.md) | deprecated | - ## Documentation * Architecture [concepts](architecture/concepts.md) and [operating-systems](architecture/operating-systems.md) diff --git a/docs/topics/faq.md b/docs/topics/faq.md index 72d99c75..1a8eef26 100644 --- a/docs/topics/faq.md +++ b/docs/topics/faq.md @@ -8,18 +8,13 @@ Formats rise and evolve. Typhoon may choose to adapt the format over time (with ## Operating Systems -Typhoon supports Container Linux and Fedora Atomic 28. 
These two operating systems were chosen because they offer:
+Typhoon supports Container Linux and its Flatcar Linux derivative. These operating systems were chosen because they offer:
 
 * Minimalism and focus on clustered operation
 * Automated and atomic operating system upgrades
 * Declarative and immutable configuration
 * Optimization for containerized applications
 
-Together, they diversify Typhoon to support a range of container technologies.
-
-* Container Linux: Gentoo core, rkt-fly, docker
-* Fedora Atomic: RHEL core, rpm-ostree, system containers (i.e. runc), CRI-O
-
 ## Get Help
 
 Ask questions on the IRC #typhoon channel on [freenode.net](http://freenode.net/).
diff --git a/docs/topics/security.md b/docs/topics/security.md
index 3a3411f3..7eb0cdec 100644
--- a/docs/topics/security.md
+++ b/docs/topics/security.md
@@ -42,9 +42,7 @@ Typhoon limits exposure to many security threats, but it is not a silver bullet.
 
 ## OpenPGP Signing
 
-Typhoon uses upstream container images and binaries. We do not distribute artifacts of our own, except where required for system container images ([etcd](https://quay.io/repository/poseidon/etcd), [kubelet](https://quay.io/repository/poseidon/kubelet), [bootkube](https://quay.io/repository/poseidon/bootkube)) for Fedora Atomic only.
-
-If you find artifacts claiming to be from Typhoon, please send a note.
+Typhoon uses upstream container images and binaries. We do not distribute artifacts of our own. If you find artifacts claiming to be from Typhoon, please send a note.
 
 ## Disclosures