Write documentation for Fedora Atomic
This commit is contained in: parent af54efec28, commit cd913986df
.github/ISSUE_TEMPLATE.md (2 lines changed)
@ -5,7 +5,7 @@
|
||||
### Environment
|
||||
|
||||
* Platform: aws, bare-metal, google-cloud, digital-ocean
|
||||
* OS: container-linux, fedora-cloud
|
||||
* OS: container-linux, fedora-atomic
|
||||
* Terraform: `terraform version`
|
||||
* Plugins: Provider plugin versions
|
||||
* Ref: Git SHA (if applicable)
|
||||
|
@ -1,16 +1,19 @@
|
||||
# AWS
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS.
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a VPC, gateway, subnets, auto-scaling groups of controllers and workers, network load balancers for controllers and workers, and security groups will be created.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS with Fedora Atomic.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
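Once bootstrapping completes, the generated assets under `asset_dir` include a kubeconfig for cluster access. A minimal usage sketch, assuming the default `auth/kubeconfig` layout and the example `asset_dir` used in this tutorial:

```sh
# path assumes asset_dir = "/home/user/.secrets/clusters/tempest" (illustrative)
export KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
kubectl get nodes
```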
|
||||
|
||||
## Requirements
|
||||
|
||||
* AWS Account and IAM credentials
|
||||
* AWS Route53 DNS Zone (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
|
||||
* Terraform v0.11.x installed locally
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
@ -18,23 +21,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your sys
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.1
|
||||
```
|
||||
|
||||
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
|
||||
|
||||
```sh
|
||||
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
|
||||
```
|
||||
|
||||
Add the plugin to your `~/.terraformrc`.
|
||||
|
||||
```
|
||||
providers {
|
||||
ct = "/usr/local/bin/terraform-provider-ct"
|
||||
}
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
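For example, with an infrastructure repository named `infra` (names illustrative):

```sh
cd infra/clusters
```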
|
||||
@ -92,11 +79,11 @@ Additional configuration options are described in the `aws` provider [docs](http
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `aws/container-linux/kubernetes`.
|
||||
Define a Kubernetes cluster using the module `aws/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "aws-tempest" {
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//aws/fedora-atomic/kubernetes?ref=v1.10.1"
|
||||
|
||||
providers = {
|
||||
aws = "aws.default"
|
||||
@ -121,7 +108,7 @@ module "aws-tempest" {
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/container-linux/kubernetes/variables.tf) source.
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
@ -132,9 +119,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -143,20 +127,11 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 98 to add, 0 to change, 0 to destroy.
|
||||
Plan: 106 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes to create the cluster.
|
||||
@ -168,10 +143,10 @@ module.aws-tempest.null_resource.bootkube-start: Still creating... (4m50s elapse
|
||||
module.aws-tempest.null_resource.bootkube-start: Still creating... (5m0s elapsed)
|
||||
module.aws-tempest.null_resource.bootkube-start: Creation complete after 11m8s (ID: 3961816482286168143)
|
||||
|
||||
Apply complete! Resources: 98 added, 0 changed, 0 destroyed.
|
||||
Apply complete! Resources: 106 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
In 4-8 minutes, the Kubernetes cluster will be ready.
|
||||
In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
|
||||
## Verify
|
||||
|
||||
@ -211,11 +186,10 @@ kube-system pod-checkpointer-4kxtl-ip-10-0-12-221 1/1 Running 0
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -223,14 +197,14 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" |
|
||||
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
|
||||
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
|
||||
| ssh_authorized_key | SSH public key for ~/.ssh_authorized_keys | "ssh-rsa AAAAB3NZ..." |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in a AWS Route53 DNS zone. You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain on AWS Route53. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "aws_route53_zone" "zone-for-clusters" {
|
||||
@ -241,7 +215,7 @@ resource "aws_route53_zone" "zone-for-clusters" {
|
||||
Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`.
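For example, the zone and its id can be wired into the cluster module together (a sketch, abbreviated to the relevant fields):

```tf
module "aws-tempest" {
  # ...
  dns_zone    = "aws.example.com"
  dns_zone_id = "${aws_route53_zone.zone-for-clusters.zone_id}"
}
```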
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
|
||||
### Optional
|
||||
|
||||
@ -251,11 +225,8 @@ Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | EC2 instance type for controllers | "t2.small" | See below |
|
||||
| worker_type | EC2 instance type for workers | "t2.small" | See below |
|
||||
| os_channel | Container Linux AMI channel | stable | stable, beta, alpha |
|
||||
| disk_size | Size of the EBS volume in GB | "40" | "100" |
|
||||
| disk_type | Type of the EBS volume | "gp2" | standard, gp2, io1 |
|
||||
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
|
||||
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| network_mtu | CNI interface MTU (calico only) | 1480 | 8981 |
|
||||
| host_cidr | CIDR IPv4 range to assign to EC2 instances | "10.0.0.0/16" | "10.1.0.0/16" |
|
||||
|
@ -1,16 +1,20 @@
|
||||
# Bare-Metal
|
||||
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal.
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers.
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal with Fedora Atomic.
|
||||
|
||||
Controllers are provisioned as etcd peers and run `etcd-member` (etcd3) and `kubelet`. Workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora Atomic via kickstart, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via cloud-init.
|
||||
|
||||
Controllers are provisioned to run `etcd` and `kubelet` [system containers](http://www.projectatomic.io/blog/2016/09/intro-to-system-containers/). Workers run just a `kubelet` system container. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Machines with 2GB RAM, 30GB disk, PXE-enabled NIC, IPMI
|
||||
* PXE-enabled [network boot](https://coreos.com/matchbox/docs/latest/network-setup.html) environment
|
||||
* Matchbox v0.6+ deployment with API enabled
|
||||
* Matchbox v0.7+ deployment with API enabled
|
||||
* HTTP server for Fedora install assets and ostree repo
|
||||
* Matchbox credentials `client.crt`, `client.key`, `ca.crt`
|
||||
* Terraform v0.11.x and [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) installed locally
|
||||
|
||||
@ -104,13 +108,69 @@ Read about the [many ways](https://coreos.com/matchbox/docs/latest/network-setup
|
||||
!!! note ""
|
||||
TFTP chainloading to modern boot firmware, like iPXE, avoids issues with old NICs and allows faster transfer protocols like HTTP to be used.
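As a concrete example, here is a minimal dnsmasq sketch that chainloads iPXE over TFTP and then points iPXE clients at a Matchbox endpoint (illustrative; adapt interfaces, addresses, and the endpoint to your network):

```
# /etc/dnsmasq.d/pxe.conf (illustrative)
enable-tftp
tftp-root=/var/lib/tftpboot
# clients that already run iPXE set DHCP option 175
dhcp-match=set:ipxe,175
# legacy PXE firmware chainloads iPXE over TFTP
dhcp-boot=tag:!ipxe,undionly.kpxe
# iPXE clients fetch their boot script from Matchbox over HTTP
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
```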
|
||||
|
||||
## Atomic Assets
|
||||
|
||||
Fedora Atomic network installations require a local mirror of assets. Configure an HTTP server to serve the Atomic install tree and ostree repo.
|
||||
|
||||
```
|
||||
sudo dnf install -y httpd
|
||||
sudo firewall-cmd --permanent --add-port=80/tcp
|
||||
sudo systemctl enable httpd --now
|
||||
```
|
||||
|
||||
Download the [Fedora Atomic](https://getfedora.org/en/atomic/download/) ISO, which contains the install files, and add them to the serve directory.
|
||||
|
||||
```
|
||||
sudo mount -o loop,ro Fedora-Atomic-ostree-*.iso /mnt
|
||||
sudo mkdir -p /var/www/html/fedora/27
|
||||
sudo cp -av /mnt/* /var/www/html/fedora/27/
|
||||
```
|
||||
|
||||
Checkout the [fedora-atomic](https://pagure.io/fedora-atomic) ostree manifest repo.
|
||||
|
||||
```
|
||||
git clone https://pagure.io/fedora-atomic.git && cd fedora-atomic
|
||||
git checkout f27
|
||||
```
|
||||
|
||||
Compose an ostree repo from RPM sources.
|
||||
|
||||
```
|
||||
mkdir repo
|
||||
ostree init --repo=repo --mode=archive
|
||||
sudo dnf install rpm-ostree
|
||||
sudo rpm-ostree compose tree --repo=repo fedora-atomic-host.json
|
||||
```
|
||||
|
||||
Serve the ostree `repo` as well.
|
||||
|
||||
```
|
||||
sudo cp -r repo /var/www/html/fedora/27/
|
||||
tree /var/www/html/fedora/27/
|
||||
├── images
│   └── pxeboot
│       ├── initrd.img
│       └── vmlinuz
├── isolinux/
├── repo/
|
||||
```
|
||||
|
||||
Verify `vmlinuz`, `initrd.img`, and `repo` are accessible from the HTTP server (i.e. `atomic_assets_endpoint`).
|
||||
|
||||
```
|
||||
curl http://example.com/fedora/27/
|
||||
```
|
||||
|
||||
!!! note
|
||||
It is possible to use the Matchbox `/assets` [cache](https://github.com/coreos/matchbox/blob/master/Documentation/matchbox.md#assets) as an HTTP server.
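For example, with a default Matchbox install the tree could be copied under the assets directory and served from the Matchbox HTTP endpoint (a sketch; the `/var/lib/matchbox/assets` path and port assume defaults):

```
sudo mkdir -p /var/lib/matchbox/assets/fedora
sudo cp -r /var/www/html/fedora/27 /var/lib/matchbox/assets/fedora/
# atomic_assets_endpoint would then be http://matchbox.example.com:8080/assets/fedora/27
```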
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your system.
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.1
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Add the [terraform-provider-matchbox](https://github.com/coreos/terraform-provider-matchbox) plugin binary for your system.
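The setup mirrors the `terraform-provider-ct` steps shown in the cloud tutorials: download a release for your system, place the binary on your `PATH`, and register it in `~/.terraformrc` (a sketch; the install path is illustrative):

```
providers {
  matchbox = "/usr/local/bin/terraform-provider-matchbox"
}
```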
|
||||
@ -170,11 +230,11 @@ provider "tls" {
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `bare-metal/container-linux/kubernetes`.
|
||||
Define a Kubernetes cluster using the module `bare-metal/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "bare-metal-mercury" {
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-atomic/kubernetes?ref=v1.10.1"
|
||||
|
||||
providers = {
|
||||
local = "local.default"
|
||||
@ -184,10 +244,9 @@ module "bare-metal-mercury" {
|
||||
}
|
||||
|
||||
# bare-metal
|
||||
cluster_name = "mercury"
|
||||
matchbox_http_endpoint = "http://matchbox.example.com"
|
||||
container_linux_channel = "stable"
|
||||
container_linux_version = "1632.3.0"
|
||||
cluster_name = "mercury"
|
||||
matchbox_http_endpoint = "http://matchbox.example.com"
|
||||
atomic_assets_endpoint = "http://example.com/fedora/27"
|
||||
|
||||
# configuration
|
||||
k8s_domain_name = "node1.example.com"
|
||||
@ -213,7 +272,7 @@ module "bare-metal-mercury" {
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/container-linux/kubernetes/variables.tf) source.
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
@ -224,9 +283,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -235,20 +291,11 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 55 to add, 0 to change, 0 to destroy.
|
||||
Plan: 58 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes. Terraform will generate bootkube assets to `asset_dir` and create Matchbox profiles (e.g. controller, worker) and matching rules via the Matchbox API.
|
||||
@ -273,7 +320,7 @@ ipmitool -H node1.example.com -U USER -P PASS chassis bootdev pxe
|
||||
ipmitool -H node1.example.com -U USER -P PASS power on
|
||||
```
|
||||
|
||||
Machines will network boot, install Container Linux to disk, reboot into the disk install, and provision themselves as controllers or workers.
|
||||
Machines will network boot, install Fedora Atomic to disk via kickstart, reboot into the disk install, and provision themselves as controllers or workers via cloud-init.
|
||||
|
||||
!!! tip ""
|
||||
If this is the first test of your PXE-enabled network boot environment, watch the SOL console of a machine to spot any misconfigurations.
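For example, `ipmitool` can attach to the Serial-over-LAN console, assuming SOL is enabled on the machine's BMC:

```sh
# lanplus is required for Serial-over-LAN sessions
ipmitool -I lanplus -H node1.example.com -U USER -P PASS sol activate
```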
|
||||
@ -289,22 +336,13 @@ module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m30s
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Still creating... (6m40s elapsed)
|
||||
module.bare-metal-mercury.null_resource.bootkube-start: Creation complete (ID: 5441741360626669024)
|
||||
|
||||
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
To watch the install to disk (until machines reboot from disk), SSH to port 2222.
|
||||
|
||||
```
|
||||
# before v1.10.1
|
||||
$ ssh debug@node1.example.com
|
||||
# after v1.10.1
|
||||
$ ssh -p 2222 core@node1.example.com
|
||||
Apply complete! Resources: 58 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
To watch the bootstrap process in detail, SSH to the first controller and journal the logs.
|
||||
|
||||
```
|
||||
$ ssh core@node1.example.com
|
||||
$ ssh fedora@node1.example.com
|
||||
$ journalctl -f -u bootkube
|
||||
bootkube[5]: Pod Status: pod-checkpointer Running
|
||||
bootkube[5]: Pod Status: kube-apiserver Running
|
||||
@ -352,21 +390,19 @@ kube-system pod-checkpointer-wf65d-node1.example.com 1/1 Running 0
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name | mercury |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:8080 |
|
||||
| container_linux_channel | Container Linux channel | stable, beta, alpha |
|
||||
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | "http://matchbox.example.com:port" |
|
||||
| atomic_assets_endpoint | HTTP endpoint serving the Fedora Atomic vmlinuz, initrd.img, and ostree repo | "http://example.com/fedora/27" |
|
||||
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
|
||||
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3Nz..." |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3Nz..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/mercury" |
|
||||
| controller_names | Ordered list of controller short names | ["node1"] |
|
||||
| controller_macs | Ordered list of controller identifying MAC addresses | ["52:54:00:a1:9c:ae"] |
|
||||
@ -379,9 +415,6 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
|
||||
| Name | Description | Default | Example |
|
||||
|:-----|:------------|:--------|:--------|
|
||||
| cached_install | Whether machines should PXE boot and install from the Matchbox `/assets` cache. Admin MUST have downloaded Container Linux images into the cache to use this | false | true |
|
||||
| install_disk | Disk device where Container Linux should be installed | "/dev/sda" | "/dev/sdb" |
|
||||
| container_linux_oem | Specify alternative OEM image ids for the disk install | "" | "vmware_raw", "xen" |
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| network_mtu | CNI interface MTU (calico-only) | 1480 | - |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
|
@ -1,16 +1,19 @@
|
||||
# Digital Ocean
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Digital Ocean.
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is alpha. Expect rough edges and changes.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, firewall rules, DNS records, tags, and droplets for Kubernetes controllers and workers will be created.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on DigitalOcean with Fedora Atomic.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Digital Ocean Account and Token
|
||||
* Digital Ocean Domain (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
|
||||
* Terraform v0.11.x installed locally
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
@ -18,23 +21,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your sys
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.1
|
||||
```
|
||||
|
||||
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
|
||||
|
||||
```sh
|
||||
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
|
||||
```
|
||||
|
||||
Add the plugin to your `~/.terraformrc`.
|
||||
|
||||
```
|
||||
providers {
|
||||
ct = "/usr/local/bin/terraform-provider-ct"
|
||||
}
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
@ -86,11 +73,11 @@ provider "tls" {
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `digital-ocean/container-linux/kubernetes`.
|
||||
Define a Kubernetes cluster using the module `digital-ocean/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "digital-ocean-nemo" {
|
||||
source = "git::https://github.com/poseidon/typhoon//digital-ocean/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-atomic/kubernetes?ref=v1.10.1"
|
||||
|
||||
providers = {
|
||||
digitalocean = "digitalocean.default"
|
||||
@ -106,6 +93,7 @@ module "digital-ocean-nemo" {
|
||||
dns_zone = "digital-ocean.example.com"
|
||||
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
|
||||
asset_dir = "/home/user/.secrets/clusters/nemo"
|
||||
|
||||
@ -115,7 +103,7 @@ module "digital-ocean-nemo" {
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/container-linux/kubernetes/variables.tf) source.
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
@ -126,9 +114,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -137,15 +122,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -205,11 +181,10 @@ kube-system pod-checkpointer-pr1lq-10.132.115.81 1/1 Running 0
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -217,14 +192,15 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | nemo |
|
||||
| region | Digital Ocean region | nyc1, sfo2, fra1, tor1 |
|
||||
| dns_zone | Digital Ocean domain (i.e. DNS zone) | do.example.com |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | /home/user/.secrets/nemo |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
|
||||
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "digitalocean_domain" "zone-for-clusters" {
|
||||
@ -235,7 +211,7 @@ resource "digitalocean_domain" "zone-for-clusters" {
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
|
||||
|
||||
#### SSH Fingerprints
|
||||
|
||||
@ -263,9 +239,6 @@ Digital Ocean requires the SSH public key be uploaded to your account, so you ma
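If you need the MD5 fingerprint of an uploaded key (the format DigitalOcean displays and `ssh_fingerprints` expects), `ssh-keygen` can print it (a sketch, assuming an RSA key at the default path):

```sh
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub
```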
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | Droplet type for controllers | s-2vcpu-2gb | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
|
||||
| worker_type | Droplet type for workers | s-1vcpu-1gb | s-1vcpu-1gb, s-1vcpu-2gb, s-2vcpu-2gb, ... |
|
||||
| image | Container Linux image for instances | "coreos-stable" | coreos-stable, coreos-beta, coreos-alpha |
|
||||
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
|
||||
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
| cluster_domain_suffix | FQDN suffix for Kubernetes services answered by kube-dns. | "cluster.local" | "k8s.example.com" |
|
||||
|
@ -1,16 +1,20 @@
|
||||
# Google Cloud
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine (not GKE).
|
||||
!!! danger
|
||||
Typhoon for Fedora Atomic is very alpha. Fedora does not publish official images for Google Cloud so you must prepare them yourself. Some addons don't work yet. Expect rough edges and changes.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a network, firewall rules, managed instance groups of Kubernetes controllers and workers, network load balancers for controllers and workers, and health checks will be created.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine with Fedora Atomic.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets. Instances are provisioned on first boot with cloud-init.
|
||||
|
||||
Controllers are provisioned to run an `etcd` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
* Google Cloud Account and Service Account
|
||||
* Google Cloud DNS Zone (registered Domain Name or delegated subdomain)
|
||||
* Terraform v0.11.x and [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) installed locally
|
||||
* Terraform v0.11.x installed locally
|
||||
* `gcloud` and `gsutil` for uploading a disk image to Google Cloud (temporary)
|
||||
|
||||
## Terraform Setup
|
||||
|
||||
@ -18,23 +22,7 @@ Install [Terraform](https://www.terraform.io/downloads.html) v0.11.x on your sys
|
||||
|
||||
```sh
|
||||
$ terraform version
|
||||
Terraform v0.11.1
|
||||
```
|
||||
|
||||
Add the [terraform-provider-ct](https://github.com/coreos/terraform-provider-ct) plugin binary for your system.
|
||||
|
||||
```sh
|
||||
wget https://github.com/coreos/terraform-provider-ct/releases/download/v0.2.1/terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
tar xzf terraform-provider-ct-v0.2.1-linux-amd64.tar.gz
|
||||
sudo mv terraform-provider-ct-v0.2.1-linux-amd64/terraform-provider-ct /usr/local/bin/
|
||||
```
|
||||
|
||||
Add the plugin to your `~/.terraformrc`.
|
||||
|
||||
```
|
||||
providers {
|
||||
ct = "/usr/local/bin/terraform-provider-ct"
|
||||
}
|
||||
Terraform v0.11.7
|
||||
```
|
||||
|
||||
Read [concepts](../concepts.md) to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
|
||||
@ -47,7 +35,7 @@ cd infra/clusters
|
||||
|
||||
Login to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [signup](https://cloud.google.com/free/) if you don't have an account.
|
||||
|
||||
Select "Credentials", and create service account key credentials. Choose the "Compute Engine default service account" and save the JSON private key to a file that can be referenced in configs.
|
||||
Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" role and save the JSON private key to a file that can be referenced in configs.
|
||||
|
||||
```sh
|
||||
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
|
||||
@ -89,15 +77,49 @@ provider "tls" {
|
||||
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
|
||||
|
||||
!!! tip
|
||||
A project may contain multiple clusters if you wish. Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
|
||||
Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may contain multiple clusters across different regions.
|
||||
|
||||
## Atomic Image
|
||||
|
||||
Project Atomic does not publish official Fedora Atomic images to Google Cloud. However, Google Cloud allows [custom boot images](https://cloud.google.com/compute/docs/images/import-existing-image) to be uploaded to a bucket and imported into your project.
|
||||
|
||||
Download the Fedora Atomic 27 [raw image](https://getfedora.org/en/atomic/download/) and decompress the file.
|
||||
|
||||
```
|
||||
xz -d Fedora-Atomic-27-20180326.1.x86_64.raw.xz
|
||||
```
|
||||
|
||||
Rename the image to `disk.raw`, then create a gzip-compressed tarball.
|
||||
|
||||
```
|
||||
mv Fedora-Atomic-27-20180326.1.x86_64.raw disk.raw
|
||||
tar cvzf fedora-atomic-27.tar.gz disk.raw
|
||||
```
|
||||
|
||||
List available storage buckets and upload the tar.gz.
|
||||
|
||||
```
|
||||
gsutil list
|
||||
gsutil cp fedora-atomic-27.tar.gz gs://BUCKET_NAME
|
||||
```
|
||||
|
||||
Create a Google Compute Engine image from the bucket file.
|
||||
|
||||
```
|
||||
gcloud compute images list
|
||||
gcloud compute images create fedora-atomic-27 --source-uri gs://BUCKET/fedora-atomic-27.tar.gz
|
||||
```
|
||||
|
||||
Note your project id and the image name for setting `os_image` later (e.g. proj-id/fedora-atomic-27).
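You can confirm the image imported correctly before referencing it as `os_image` (substitute your project id):

```
gcloud compute images describe fedora-atomic-27 --project PROJECT-ID
```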
|
||||
|
||||
|
||||
## Cluster
|
||||
|
||||
Define a Kubernetes cluster using the module `google-cloud/container-linux/kubernetes`.
|
||||
Define a Kubernetes cluster using the module `google-cloud/fedora-atomic/kubernetes`.
|
||||
|
||||
```tf
|
||||
module "google-cloud-yavin" {
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.10.1"
|
||||
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-atomic/kubernetes?ref=v1.10.1"
|
||||
|
||||
providers = {
|
||||
google = "google.default"
|
||||
@ -116,13 +138,14 @@ module "google-cloud-yavin" {
|
||||
# configuration
|
||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||
asset_dir = "/home/user/.secrets/clusters/yavin"
|
||||
os_image = "MY-PROJECT_ID/fedora-atomic-27"
|
||||
|
||||
# optional
|
||||
worker_count = 2
|
||||
}
|
||||
```
|
||||
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.
|
||||
Reference the [variables docs](#variables) or the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
## ssh-agent
|
||||
|
||||
@ -133,9 +156,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -144,20 +164,11 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
$ terraform plan
|
||||
Plan: 64 to add, 0 to change, 0 to destroy.
|
||||
Plan: 73 to add, 0 to change, 0 to destroy.
|
||||
```
|
||||
|
||||
Apply the changes to create the cluster.
|
||||
@ -171,10 +182,10 @@ module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m30s
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Still creating... (5m40s elapsed)
|
||||
module.google-cloud-yavin.null_resource.bootkube-start: Creation complete (ID: 5768638456220583358)
|
||||
|
||||
Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
|
||||
Apply complete! Resources: 73 added, 0 changed, 0 destroyed.
|
||||
```
|
||||
|
||||
In 4-8 minutes, the Kubernetes cluster will be ready.
|
||||
In 5-10 minutes, the Kubernetes cluster will be ready.
|
||||
|
||||
## Verify
|
||||
|
||||
@ -213,11 +224,10 @@ kube-system pod-checkpointer-l6lrt 1/1 Running 0
|
||||
|
||||
Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overview.md).
|
||||
|
||||
!!! note
|
||||
On Container Linux clusters, install the `CLUO` addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/fedora-atomic/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -226,16 +236,17 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
| region | Google Cloud region | "us-central1" |
|
||||
| dns_zone | Google Cloud DNS zone | "google-cloud.example.com" |
|
||||
| dns_zone_name | Google Cloud DNS zone name | "example-zone" |
|
||||
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
|
||||
| os_image | Custom uploaded Fedora Atomic 27 image | "PROJECT-ID/fedora-atomic-27" |
|
||||
| ssh_authorized_key | SSH public key for user 'fedora' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/yavin" |
|
||||
|
||||
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones) and list Container Linux [images](https://cloud.google.com/compute/docs/images) with `gcloud compute images list | grep coreos`.
|
||||
Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
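Valid region names can also be listed from the CLI:

```
gcloud compute regions list
```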
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in a Google Cloud DNS zone. You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain on Google Cloud DNS. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
@ -246,7 +257,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
```
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
|
||||
|
||||
### Optional
|
||||
|
||||
@ -256,11 +267,8 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
|
||||
| worker_count | Number of workers | 1 | 3 |
|
||||
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
|
||||
| worker_type | Machine type for workers | "n1-standard-1" | See below |
|
||||
| os_image | Container Linux image for compute instances | "coreos-stable" | "coreos-stable-1632-3-0-v20180215" |
|
||||
| disk_size | Size of the disk in GB | 40 | 100 |
|
||||
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
|
||||
| controller_clc_snippets | Controller Container Linux Config snippets | [] | |
|
||||
| worker_clc_snippets | Worker Container Linux Config snippets | [] | |
|
||||
| networking | Choice of networking provider | "calico" | "calico" or "flannel" |
|
||||
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
|
||||
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
|
||||
|
@ -1,10 +1,10 @@
|
||||
# AWS
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on AWS with Container Linux.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a VPC, gateway, subnets, auto-scaling groups of controllers and workers, network load balancers for controllers and workers, and security groups will be created.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancers, and TLS assets.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -132,9 +132,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -143,15 +140,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -216,6 +204,8 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/aws/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
@ -223,14 +213,14 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
| cluster_name | Unique cluster name (prepended to dns_zone) | "tempest" |
|
||||
| dns_zone | AWS Route53 DNS zone | "aws.example.com" |
|
||||
| dns_zone_id | AWS Route53 DNS zone id | "Z3PAABBCFAKEC0" |
|
||||
| ssh_authorized_key | SSH public key for ~/.ssh_authorized_keys | "ssh-rsa AAAAB3NZ..." |
|
||||
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
|
||||
| asset_dir | Path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/tempest" |
|
||||
|
||||
#### DNS Zone
|
||||
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `tempest.aws.example.com`.
|
||||
|
||||
You'll need a registered domain name or subdomain registered in a AWS Route53 DNS zone. You can set this up once and create many clusters with unique names.
|
||||
You'll need a registered domain name or delegated subdomain on AWS Route53. You can set this up once and create many clusters with unique names.
|
||||
|
||||
```tf
|
||||
resource "aws_route53_zone" "zone-for-clusters" {
|
||||
@ -241,7 +231,7 @@ resource "aws_route53_zone" "zone-for-clusters" {
|
||||
Reference the DNS zone id with `"${aws_route53_zone.zone-for-clusters.zone_id}"`.
|
||||
|
||||
!!! tip ""
|
||||
If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Route53 (e.g. aws.mydomain.com) and [update nameservers](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/SOA-NSrecords.html).
|
||||
|
||||
### Optional
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
# Bare-Metal
|
||||
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal.
|
||||
In this tutorial, we'll network boot and provision a Kubernetes v1.10.1 cluster on bare-metal with Container Linux.
|
||||
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers.
|
||||
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.
|
||||
|
||||
Controllers are provisioned as etcd peers and run `etcd-member` (etcd3) and `kubelet`. Workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
|
||||
## Requirements
|
||||
|
||||
@ -224,9 +224,6 @@ ssh-add ~/.ssh/id_rsa
|
||||
ssh-add -L
|
||||
```
|
||||
|
||||
!!! warning
|
||||
`terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.
|
||||
|
||||
## Apply
|
||||
|
||||
Initialize the config directory if this is the first use with Terraform.
|
||||
@ -235,15 +232,6 @@ Initialize the config directory if this is the first use with Terraform.
|
||||
terraform init
|
||||
```
|
||||
|
||||
Get or update Terraform modules.
|
||||
|
||||
```sh
|
||||
$ terraform get # downloads missing modules
|
||||
$ terraform get --update # updates all modules
|
||||
Get: git::https://github.com/poseidon/typhoon (update)
|
||||
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
|
||||
```
|
||||
|
||||
Plan the resources to be created.
|
||||
|
||||
```sh
|
||||
@ -357,12 +345,14 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv
|
||||
|
||||
## Variables
|
||||
|
||||
Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/bare-metal/container-linux/kubernetes/variables.tf) source.
|
||||
|
||||
### Required
|
||||
|
||||
| Name | Description | Example |
|
||||
|:-----|:------------|:--------|
|
||||
| cluster_name | Unique cluster name | mercury |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:8080 |
|
||||
| matchbox_http_endpoint | Matchbox HTTP read-only endpoint | http://matchbox.example.com:port |
|
||||
| container_linux_channel | Container Linux channel | stable, beta, alpha |
|
||||
| container_linux_version | Container Linux version of the kernel/initrd to PXE and the image to install | 1632.3.0 |
|
||||
| k8s_domain_name | FQDN resolving to the controller(s) nodes. Workers and kubectl will communicate with this endpoint | "myk8s.example.com" |
|
||||
|
@ -1,10 +1,10 @@
|
||||
# Digital Ocean
|
||||
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Digital Ocean.
|
||||
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on DigitalOcean with Container Linux.
|
||||
|
||||
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, firewall rules, DNS records, tags, and droplets for Kubernetes controllers and workers will be created.
|
||||
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
|
||||
|
||||
Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
|
||||
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `flannel` on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
## Requirements

@ -126,9 +126,6 @@ ssh-add ~/.ssh/id_rsa
ssh-add -L
```

!!! warning
    `terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.

## Apply

Initialize the config directory if this is the first use with Terraform.
@ -137,15 +134,6 @@ Initialize the config directory if this is the first use with Terraform.
terraform init
```

Get or update Terraform modules.

```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```

Plan the resources to be created.

```sh
@ -210,6 +198,8 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv

## Variables

Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/digital-ocean/container-linux/kubernetes/variables.tf) source.

### Required

| Name | Description | Example |
@ -222,9 +212,9 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv

#### DNS Zone

Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.

You'll need a registered domain name or subdomain registered in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
You'll need a registered domain name or delegated subdomain in Digital Ocean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.

```tf
resource "digitalocean_domain" "zone-for-clusters" {
@ -235,7 +225,7 @@ resource "digitalocean_domain" "zone-for-clusters" {
```

!!! tip ""
    If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
    If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and [update nameservers](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).
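The `digitalocean_domain` block above is cut off by the diff hunk. For reference, a complete zone resource might look roughly like the following; the domain and IP are placeholder values, and provider versions from this era typically expect an initial `ip_address` for the zone's apex record.

```tf
# Hypothetical, complete version of the truncated example above.
# Placeholder values; substitute your delegated subdomain and an address
# for the zone's apex A record.
resource "digitalocean_domain" "zone-for-clusters" {
  name       = "do.example.com"
  ip_address = "8.8.8.8"
}
```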
#### SSH Fingerprints
@ -1,10 +1,10 @@
# Google Cloud

In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine (not GKE).
In this tutorial, we'll create a Kubernetes v1.10.1 cluster on Google Compute Engine with Container Linux.

We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a network, firewall rules, managed instance groups of Kubernetes controllers and workers, network load balancers for controllers and workers, and health checks will be created.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Controllers and workers are provisioned to run a `kubelet`. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules an `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and runs `kube-proxy` and `calico` or `flannel` on each node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controllers are provisioned to run an `etcd-member` peer and a `kubelet` service. Workers run just a `kubelet` service. A one-time [bootkube](https://github.com/kubernetes-incubator/bootkube) bootstrap schedules the `apiserver`, `scheduler`, `controller-manager`, and `kube-dns` on controllers and schedules `kube-proxy` and `calico` (or `flannel`) on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
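As a rough sketch of what that declaration might look like (not part of this commit), the Google Cloud module can be referenced from the module path shown in the Variables section below. The ref, region, and other values are placeholders and most required variables are omitted.

```tf
# Hypothetical sketch; the ref, region, and values are placeholders and
# most required variables are omitted. See the Variables section below.
module "google-cloud-yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=<release-tag>"

  cluster_name = "yavin"
  region       = "us-central1"
  dns_zone     = "google-cloud.example.com"
}
```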
## Requirements

@ -47,7 +47,7 @@ cd infra/clusters

Login to your Google Console [API Manager](https://console.cloud.google.com/apis/dashboard) and select a project, or [signup](https://cloud.google.com/free/) if you don't have an account.

Select "Credentials", and create service account key credentials. Choose the "Compute Engine default service account" and save the JSON private key to a file that can be referenced in configs.
Select "Credentials" and create a service account key. Choose the "Compute Engine Admin" role and save the JSON private key to a file that can be referenced in configs.

```sh
mv ~/Downloads/project-id-43048204.json ~/.config/google-cloud/terraform.json
@ -89,7 +89,7 @@ provider "tls" {
Additional configuration options are described in the `google` provider [docs](https://www.terraform.io/docs/providers/google/index.html).
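For example, a minimal `google` provider block referencing the credentials file from the earlier step might look like this; the project ID and region are placeholders, and any of the other options from the provider docs can be added the same way.

```tf
# Minimal sketch; "project-id" and the region are placeholder values.
# Terraform 0.11-style interpolation reads the credentials file saved above.
provider "google" {
  credentials = "${file("~/.config/google-cloud/terraform.json")}"
  project     = "project-id"
  region      = "us-central1"
}
```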
!!! tip
    A project may contain multiple clusters if you wish. Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`.
    Regions are listed in [docs](https://cloud.google.com/compute/docs/regions-zones/regions-zones) or with `gcloud compute regions list`. A project may contain multiple clusters across different regions.

## Cluster

@ -133,9 +133,6 @@ ssh-add ~/.ssh/id_rsa
ssh-add -L
```

!!! warning
    `terraform apply` will hang connecting to a controller if `ssh-agent` does not contain the SSH key.

## Apply

Initialize the config directory if this is the first use with Terraform.
@ -144,15 +141,6 @@ Initialize the config directory if this is the first use with Terraform.
terraform init
```

Get or update Terraform modules.

```sh
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.12.0 (update)
```

Plan the resources to be created.

```sh
@ -218,6 +206,8 @@ Learn about [maintenance](../topics/maintenance.md) and [addons](../addons/overv

## Variables

Check the [variables.tf](https://github.com/poseidon/typhoon/blob/master/google-cloud/container-linux/kubernetes/variables.tf) source.

### Required

| Name | Description | Example |
@ -233,9 +223,9 @@ Check the list of valid [regions](https://cloud.google.com/compute/docs/regions-

#### DNS Zone

Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a network load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver. In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.
Clusters create a DNS A record `${cluster_name}.${dns_zone}` to resolve a TCP proxy load balancer backed by controller instances. This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `yavin.google-cloud.example.com`.

You'll need a registered domain name or subdomain registered in a Google Cloud DNS zone. You can set this up once and create many clusters with unique names.
You'll need a registered domain name or delegated subdomain on Google Cloud DNS. You can set this up once and create many clusters with unique names.

```tf
resource "google_dns_managed_zone" "zone-for-clusters" {
@ -246,7 +236,7 @@ resource "google_dns_managed_zone" "zone-for-clusters" {
```

!!! tip ""
    If you have an existing domain name with a zone file elsewhere, just carve out a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
    If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Google Cloud (e.g. google-cloud.mydomain.com) and [update nameservers](https://cloud.google.com/dns/update-name-servers).
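As with the DigitalOcean zone, the `google_dns_managed_zone` block above is truncated by the hunk. A complete managed zone might look roughly like the following; the names are placeholders, and note that Google Cloud DNS expects `dns_name` to be an absolute domain with a trailing dot.

```tf
# Hypothetical, complete version of the truncated example above.
# Placeholder names; dns_name must end with a trailing dot.
resource "google_dns_managed_zone" "zone-for-clusters" {
  name     = "zone-for-clusters"
  dns_name = "google-cloud.example.com."
}
```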
### Optional
14
docs/faq.md
@ -8,9 +8,19 @@ Formats rise and evolve. Typhoon may choose to adapt the format over time (with
## Operating Systems

Only Container Linux is supported currently. This just due to operational familiarity, rather than intentional exclusion. It's important that another operating system be added, to reduce the risk of making narrowly-scoped design decisions.
Typhoon supports Container Linux and Fedora Atomic 27. Both operating systems offer:

Fedora Cloud will likely be next.
* Minimalism and focus on clustered operation
* Automated and atomic operating system upgrades
* Declarative and immutable configuration
* Optimization for containerized applications

Together, they diversify Typhoon to support a range of container technologies.

* Container Linux
    * Gentoo core, rkt-fly, docker
* Fedora Atomic
    * RHEL core, rpm-ostree, system containers (i.e. runc), CRI-O

## Get Help
@ -2,14 +2,14 @@
## Provision Time

Provisioning times vary based on the platform. Sampling the time to create (apply) and destroy clusters with 1 controller and 2 workers shows (roughly) what to expect.
Provisioning times vary based on the operating system and platform. Sampling the time to create (apply) and destroy clusters with 1 controller and 2 workers shows (roughly) what to expect.

| Platform | Apply | Destroy |
|---------------|-------|---------|
| AWS | 6 min | 5 min |
| Bare-Metal | 10-14 min | NA |
| Bare-Metal | 10-15 min | NA |
| Digital Ocean | 3 min 30 sec | 20 sec |
| Google Cloud | 7 min | 4 min 30 sec |
| Google Cloud | 9 min | 4 min 30 sec |

Notes:
@ -7,7 +7,7 @@ theme:
  name: 'material'
  palette:
    primary: 'blue'
    accent: 'light blue'
    accent: 'pink'
  logo: 'img/spin.png'
  favicon: 'img/favicon.ico'
  font: