Compare commits

...

13 Commits

SHA1 Message Date
cafc58c610 Update module source from dghubble to purenetes 2017-08-07 19:30:41 -07:00
88ba9bd273 Update README with features, modules, and social contract
* Add customization, contributing, and non-goal sections
2017-08-06 23:49:05 -07:00
29ed9eaa33 Add MIT License 2017-08-06 23:41:16 -07:00
b303c0e986 digital-ocean: Add ssh step dependency on the 0th controller 2017-07-29 15:18:24 -07:00
097dcdf47e digital-ocean: Add kubelet hostname-override flag
* Kubelets should register nodes via their private IPv4 address,
as provided by the Digital Ocean metadata service
* By default, the Kubelet execs `hostname` to determine the name it should
use when registering with the apiserver. On Digital Ocean, the hostname
is not routable by other instances, and Digital Ocean does not run an
internal DNS service.
* Fixes an issue where the apiserver couldn't reach the worker nodes, which
prevented `kubectl logs` and `kubectl exec` from working
2017-07-29 14:32:51 -07:00
efff7497eb digital-ocean: Join name.dns_zone for controller domain
* Output the DNS FQDNs, IPv4 addresses, and IPv6 addresses
2017-07-29 12:47:47 -07:00
6070ffb449 Add dghubble/pegasus Digital Ocean Kubernetes Terraform module 2017-07-29 11:36:33 -07:00
2d33b9abe2 Update diskless PXE workers to Kubernetes v1.7.1 2017-07-27 23:11:41 -07:00
da596e06bb Add bare-metal support for Container Linux with Matchbox 2017-07-24 23:24:12 -07:00
833c92b2bf Add initial README with goals, operating systems, and platforms 2017-07-24 22:12:13 -07:00
386dc49a58 Organize metal-worker-pxe under bare-metal 2017-07-24 21:40:05 -07:00
4df6bb81a8 Organize modules by platform and OS distribution 2017-07-24 19:41:36 -07:00
75f4826097 Update Kubernetes from v1.6.7 to v1.7.1 2017-07-24 18:51:39 -07:00
39 changed files with 1283 additions and 15 deletions

LICENSE (new file)

@@ -0,0 +1,22 @@
The MIT License (MIT)

Copyright (c) 2017 Dalton Hubble

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

README.md (new file)

@@ -0,0 +1,55 @@
# purenetes <img align="right" src="https://storage.googleapis.com/dghubble/spin.png">

* Minimal, stable base Kubernetes distribution
* Declarative infrastructure and configuration
* Practical for small labs to medium clusters
* 100% [free](https://www.debian.org/intro/free) components (both freedom and zero cost)
* Respect for privacy by requiring analytics be opt-in

## Status

Purenetes is [dghubble](https://twitter.com/dghubble)'s personal Kubernetes distribution. It powers his cloud and colocation clusters. While functional, it is not yet suitable for general use.

## Features

* Kubernetes v1.7.1 with a self-hosted control plane via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube)
* Secure etcd with generated TLS certs, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, and a generated admin kubeconfig
* Multi-master, workload isolation
* Ingress-ready (perhaps included by default later)
* Works with your existing Terraform infrastructure and secret management

## Documentation

See [docs.purenetes.org](https://docs.purenetes.org)

## Modules

Purenetes provides a Terraform module for each supported operating system and platform.

| Platform      | Operating System | Terraform Module                          |
|---------------|------------------|-------------------------------------------|
| Bare-Metal    | Container Linux  | bare-metal/container-linux/kubernetes     |
| Google Cloud  | Container Linux  | google-cloud/container-linux/kubernetes   |
| Digital Ocean | Container Linux  | digital-ocean/container-linux/kubernetes  |
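
Each module is used through a standard Terraform `module` block. A minimal sketch (the source path assumes this repo lives at github.com/purenetes/purenetes, the values are hypothetical, and most required platform variables are omitted; see each module's variables.tf):

```hcl
module "google-cloud-example" {
  source = "git::https://github.com/purenetes/purenetes.git//google-cloud/container-linux/kubernetes"

  # hypothetical example values
  cluster_name       = "example"
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```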

## Customization

To customize clusters in ways that aren't supported by input variables, fork the repo and make changes to the Terraform module. Stay tuned for improvements to this strategy, since it's beneficial to stay close to this upstream.

To customize lower-level Kubernetes control plane bootstrapping, see the [purenetes/bootkube-terraform](https://github.com/purenetes/bootkube-terraform) Terraform module.

## Contributing

Currently, `purenetes` is the author's personal distribution of Kubernetes. It is focused on addressing the author's cluster needs and is not yet accepting sizable contributions. As the project matures, this contributing policy will change to reflect that of a community project.

## Social Contract

*A formal social contract is being drafted, inspired by the Debian [Social Contract](https://www.debian.org/social_contract).*

For now, know that `purenetes` is not a product, trial, or free tier. It is not run by a company, it does not offer support or services, and it does not accept or make any money. It is not associated with any operating system or cloud platform vendor.

Disclosure: The author works for CoreOS, but that work is kept as separate as possible. Support for Fedora is planned to ensure no single distro is favored (and because the author wants it).

## Non-Goals

* In-place Kubernetes upgrades (instead, deploy blue/green clusters and fail over)

@@ -0,0 +1,12 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/purenetes/bootkube-terraform.git?ref=v0.6.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]
etcd_servers = ["${var.controller_domains}"]
asset_dir = "${var.asset_dir}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
experimental_self_hosted_etcd = "${var.experimental_self_hosted_etcd}"
}
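
For orientation, a sketch (as comments) of where this module's generated assets are consumed later in this diff:

# module.bootkube.kube_dns_service_ip  -> groups.tf (passed to kubelets as --cluster_dns)
# module.bootkube.kubeconfig, etcd_*   -> ssh.tf (secure-copied onto each node)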

@@ -0,0 +1,37 @@
---
systemd:
units:
- name: installer.service
enable: true
contents: |
[Unit]
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStart=/opt/installer
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /opt/installer
filesystem: root
mode: 0500
contents:
inline: |
#!/bin/bash -ex
curl --retry 10 "${ignition_endpoint}?{{.request.raw_query}}&os=installed" -o ignition.json
coreos-install \
-d ${install_disk} \
-C ${container_linux_channel} \
-V ${container_linux_version} \
-o "${container_linux_oem}" \
${baseurl_flag} \
-i ignition.json
udevadm settle
systemctl reboot
passwd:
users:
- name: core
ssh_authorized_keys:
- {{.ssh_authorized_key}}

@@ -0,0 +1,177 @@
---
systemd:
units:
{{ if eq .etcd_on_host "true" }}
- name: etcd-member.service
enable: true
dropins:
- name: 40-etcd-cluster.conf
contents: |
[Service]
Environment="ETCD_IMAGE_TAG=v3.2.0"
Environment="ETCD_NAME={{.etcd_name}}"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://{{.domain_name}}:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://{{.domain_name}}:2380"
Environment="ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379"
Environment="ETCD_LISTEN_PEER_URLS=https://0.0.0.0:2380"
Environment="ETCD_INITIAL_CLUSTER={{.etcd_initial_cluster}}"
Environment="ETCD_STRICT_RECONFIG_CHECK=true"
Environment="ETCD_SSL_DIR=/etc/ssl/etcd"
Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/server-ca.crt"
Environment="ETCD_CERT_FILE=/etc/ssl/certs/etcd/server.crt"
Environment="ETCD_KEY_FILE=/etc/ssl/certs/etcd/server.key"
Environment="ETCD_CLIENT_CERT_AUTH=true"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/etcd/peer-ca.crt"
Environment="ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd/peer.crt"
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd/peer.key"
Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
{{ end }}
- name: docker.service
enable: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
contents: |
[Unit]
Description=Watch for kubeconfig
[Path]
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
contents: |
[Unit]
Description=Wait for DNS entries
Wants=systemd-resolved.service
Before=kubelet.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
[Install]
RequiredBy=kubelet.service
- name: kubelet.service
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--require-kubeconfig \
--client-ca-file=/etc/kubernetes/ca.crt \
--anonymous-auth=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--hostname-override={{.domain_name}} \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: bootkube.service
contents: |
[Unit]
Description=Bootstrap a Kubernetes control plane with a temp api-server
ConditionPathExists=!/opt/bootkube/init_bootkube.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/opt/bootkube/bootkube-start
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
storage:
{{ if index . "pxe" }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- name: root
mount:
device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
{{.domain_name}}
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /opt/bootkube/bootkube-start
filesystem: root
mode: 0544
user:
id: 500
group:
id: 500
contents:
inline: |
#!/bin/bash
# Wrapper for bootkube start
set -e
# Move experimental manifests
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.6.0}"
BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$BOOTKUBE_ASSETS \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$RKT_OPTS \
${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
passwd:
users:
- name: core
ssh_authorized_keys:
- {{.ssh_authorized_key}}

@@ -0,0 +1,112 @@
---
systemd:
units:
- name: docker.service
enable: true
- name: locksmithd.service
mask: true
- name: kubelet.path
enable: true
contents: |
[Unit]
Description=Watch for kubeconfig
[Path]
PathExists=/etc/kubernetes/kubeconfig
[Install]
WantedBy=multi-user.target
- name: wait-for-dns.service
enable: true
contents: |
[Unit]
Description=Wait for DNS entries
Wants=systemd-resolved.service
Before=kubelet.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
[Install]
RequiredBy=kubelet.service
- name: kubelet.service
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--require-kubeconfig \
--client-ca-file=/etc/kubernetes/ca.crt \
--anonymous-auth=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--hostname-override={{.domain_name}} \
--node-labels=node-role.kubernetes.io/node \
--cluster_dns={{.k8s_dns_service_ip}} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
storage:
{{ if index . "pxe" }}
disks:
- device: /dev/sda
wipe_table: true
partitions:
- label: ROOT
filesystems:
- name: root
mount:
device: "/dev/sda1"
format: "ext4"
create:
force: true
options:
- "-LROOT"
{{end}}
files:
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
contents:
inline:
{{.domain_name}}
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
passwd:
users:
- name: core
ssh_authorized_keys:
- {{.ssh_authorized_key}}

@@ -0,0 +1,53 @@
// Install Container Linux to disk
resource "matchbox_group" "container-linux-install" {
count = "${length(var.controller_names) + length(var.worker_names)}"
name = "${format("container-linux-install-%s", element(concat(var.controller_names, var.worker_names), count.index))}"
profile = "${var.cached_install == "true" ? matchbox_profile.cached-container-linux-install.name : matchbox_profile.container-linux-install.name}"
selector {
mac = "${element(concat(var.controller_macs, var.worker_macs), count.index)}"
}
metadata {
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "controller" {
count = "${length(var.controller_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.controller_names, count.index))}"
profile = "${matchbox_profile.controller.name}"
selector {
mac = "${element(var.controller_macs, count.index)}"
os = "installed"
}
metadata {
domain_name = "${element(var.controller_domains, count.index)}"
etcd_name = "${element(var.controller_names, count.index)}"
etcd_initial_cluster = "${join(",", formatlist("%s=https://%s:2380", var.controller_names, var.controller_domains))}"
etcd_on_host = "${var.experimental_self_hosted_etcd ? "false" : "true"}"
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
resource "matchbox_group" "worker" {
count = "${length(var.worker_names)}"
name = "${format("%s-%s", var.cluster_name, element(var.worker_names, count.index))}"
profile = "${matchbox_profile.worker.name}"
selector {
mac = "${element(var.worker_macs, count.index)}"
os = "installed"
}
metadata {
domain_name = "${element(var.worker_domains, count.index)}"
etcd_on_host = "${var.experimental_self_hosted_etcd ? "false" : "true"}"
k8s_dns_service_ip = "${module.bootkube.kube_dns_service_ip}"
ssh_authorized_key = "${var.ssh_authorized_key}"
}
}
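
A sketch of how the `element(concat(...), count.index)` pattern above indexes machines, with hypothetical values:

# With controller_names = ["node1"] and worker_names = ["node2", "node3"]:
#   concat(var.controller_names, var.worker_names)      => ["node1", "node2", "node3"]
#   element(concat(...), count.index) at count.index 1  => "node2"
# After install, controllers and workers re-match by MAC plus the os=installed selector.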

@@ -0,0 +1,3 @@
output "kubeconfig" {
value = "${module.bootkube.kubeconfig}"
}

@@ -0,0 +1,81 @@
// Container Linux Install profile (from release.core-os.net)
resource "matchbox_profile" "container-linux-install" {
name = "container-linux-install"
kernel = "http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
initrd = [
"http://${var.container_linux_channel}.release.core-os.net/amd64-usr/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
]
container_linux_config = "${data.template_file.container-linux-install-config.rendered}"
}
data "template_file" "container-linux-install-config" {
template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
vars {
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
# only cached-container-linux profile adds -b baseurl
baseurl_flag = ""
}
}
// Container Linux Install profile (from matchbox /assets cache)
// Note: Admin must have downloaded container_linux_version into matchbox assets.
resource "matchbox_profile" "cached-container-linux-install" {
name = "cached-container-linux-install"
kernel = "/assets/coreos/${var.container_linux_version}/coreos_production_pxe.vmlinuz"
initrd = [
"/assets/coreos/${var.container_linux_version}/coreos_production_pxe_image.cpio.gz",
]
args = [
"coreos.config.url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
"coreos.first_boot=yes",
"console=tty0",
"console=ttyS0",
]
container_linux_config = "${data.template_file.cached-container-linux-install-config.rendered}"
}
data "template_file" "cached-container-linux-install-config" {
template = "${file("${path.module}/cl/container-linux-install.yaml.tmpl")}"
vars {
container_linux_channel = "${var.container_linux_channel}"
container_linux_version = "${var.container_linux_version}"
ignition_endpoint = "${format("%s/ignition", var.matchbox_http_endpoint)}"
install_disk = "${var.install_disk}"
container_linux_oem = "${var.container_linux_oem}"
# profile uses -b baseurl to install from matchbox cache
baseurl_flag = "-b ${var.matchbox_http_endpoint}/assets/coreos"
}
}
// Kubernetes Controller profile
resource "matchbox_profile" "controller" {
name = "controller"
container_linux_config = "${file("${path.module}/cl/controller.yaml.tmpl")}"
}
// Kubernetes Worker profile
resource "matchbox_profile" "worker" {
name = "worker"
container_linux_config = "${file("${path.module}/cl/worker.yaml.tmpl")}"
}

@@ -0,0 +1,97 @@
# Secure copy etcd TLS assets and kubeconfig to all nodes. Activates kubelet.service
resource "null_resource" "copy-secrets" {
count = "${length(var.controller_names) + length(var.worker_names)}"
connection {
type = "ssh"
host = "${element(concat(var.controller_domains, var.worker_domains), count.index)}"
user = "core"
timeout = "60m"
}
provisioner "file" {
content = "${module.bootkube.kubeconfig}"
destination = "$HOME/kubeconfig"
}
provisioner "file" {
content = "${module.bootkube.etcd_ca_cert}"
destination = "$HOME/etcd-client-ca.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_client_cert}"
destination = "$HOME/etcd-client.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_client_key}"
destination = "$HOME/etcd-client.key"
}
provisioner "file" {
content = "${module.bootkube.etcd_server_cert}"
destination = "$HOME/etcd-server.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_server_key}"
destination = "$HOME/etcd-server.key"
}
provisioner "file" {
content = "${module.bootkube.etcd_peer_cert}"
destination = "$HOME/etcd-peer.crt"
}
provisioner "file" {
content = "${module.bootkube.etcd_peer_key}"
destination = "$HOME/etcd-peer.key"
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /etc/ssl/etcd/etcd",
"sudo mv etcd-client* /etc/ssl/etcd/",
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/server-ca.crt",
"sudo mv etcd-server.crt /etc/ssl/etcd/etcd/server.crt",
"sudo mv etcd-server.key /etc/ssl/etcd/etcd/server.key",
"sudo cp /etc/ssl/etcd/etcd-client-ca.crt /etc/ssl/etcd/etcd/peer-ca.crt",
"sudo mv etcd-peer.crt /etc/ssl/etcd/etcd/peer.crt",
"sudo mv etcd-peer.key /etc/ssl/etcd/etcd/peer.key",
"sudo chown -R etcd:etcd /etc/ssl/etcd",
"sudo chmod -R 500 /etc/ssl/etcd",
"sudo mv /home/core/kubeconfig /etc/kubernetes/kubeconfig",
]
}
}
# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
# Without depends_on, this remote-exec may start before the kubeconfig copy.
  # Terraform only does one task at a time, so it would try to bootstrap
  # the Kubernetes control plane while no Kubelets are running. Ensure all nodes
# receive a kubeconfig before proceeding with bootkube.
depends_on = ["null_resource.copy-secrets"]
connection {
type = "ssh"
host = "${element(var.controller_domains, 0)}"
user = "core"
timeout = "60m"
}
provisioner "file" {
source = "${var.asset_dir}"
destination = "$HOME/assets"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/assets /opt/bootkube",
"sudo systemctl start bootkube",
]
}
}

@@ -0,0 +1,104 @@
variable "matchbox_http_endpoint" {
type = "string"
description = "Matchbox HTTP read-only endpoint (e.g. http://matchbox.example.com:8080)"
}
variable "container_linux_channel" {
type = "string"
description = "Container Linux channel corresponding to the container_linux_version"
}
variable "container_linux_version" {
type = "string"
description = "Container Linux version of the kernel/initrd to PXE or the image to install"
}
variable "cluster_name" {
type = "string"
description = "Cluster name"
}
variable "ssh_authorized_key" {
type = "string"
description = "SSH public key to set as an authorized_key on machines"
}
# Machines
# Terraform's crude "type system" does not properly support lists of maps, so we do this.
variable "controller_names" {
type = "list"
}
variable "controller_macs" {
type = "list"
}
variable "controller_domains" {
type = "list"
}
variable "worker_names" {
type = "list"
}
variable "worker_macs" {
type = "list"
}
variable "worker_domains" {
type = "list"
}
# bootkube assets
variable "k8s_domain_name" {
description = "Controller DNS name which resolves to a controller instance. Workers and kubeconfig's will communicate with this endpoint (e.g. cluster.example.com)"
type = "string"
}
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
}
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 200th IP will be reserved for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/16"
}
# optional
variable "cached_install" {
type = "string"
default = "false"
description = "Whether Container Linux should PXE boot and install from matchbox /assets cache. Note that the admin must have downloaded the container_linux_version into matchbox assets."
}
variable "install_disk" {
type = "string"
default = "/dev/sda"
description = "Disk device to which the install profiles should install Container Linux (e.g. /dev/sda)"
}
variable "container_linux_oem" {
type = "string"
default = ""
description = "Specify an OEM image id to use as base for the installation (e.g. ami, vmware_raw, xen) or leave blank for the default image"
}
variable "experimental_self_hosted_etcd" {
default = "false"
description = "Create self-hosted etcd cluster as pods on Kubernetes, instead of on-hosts"
}
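
Putting the variables together, a hypothetical invocation of this bare-metal module might look like the sketch below (every name, MAC, domain, and version shown is illustrative, and the module source assumes the repo lives at github.com/purenetes/purenetes):

module "bare-metal-mercury" {
  source = "git::https://github.com/purenetes/purenetes.git//bare-metal/container-linux/kubernetes"

  matchbox_http_endpoint  = "http://matchbox.example.com:8080"
  container_linux_channel = "stable"
  container_linux_version = "1465.6.0"
  cluster_name            = "mercury"
  ssh_authorized_key      = "ssh-rsa AAAAB3Nz..."

  # machines (hypothetical)
  controller_names   = ["node1"]
  controller_macs    = ["52:54:00:a1:9c:ae"]
  controller_domains = ["node1.example.com"]
  worker_names       = ["node2"]
  worker_macs        = ["52:54:00:b2:2f:86"]
  worker_domains     = ["node2.example.com"]

  # bootkube assets
  k8s_domain_name = "node1.example.com"
  asset_dir       = "/home/user/.secrets/clusters/mercury"
}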

@@ -92,7 +92,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
-KUBELET_IMAGE_TAG=v1.6.7_coreos.0
+KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644

@@ -0,0 +1,12 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/purenetes/bootkube-terraform.git?ref=v0.6.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]
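  # Editorial note (sketch): etcd is self-hosted here (pods on Kubernetes)
  # rather than on-host, so no controller domain names are passed below
  # (cf. the bare-metal module, which passes var.controller_domains).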
etcd_servers = ["http://127.0.0.1:2379"]
asset_dir = "${var.asset_dir}"
pod_cidr = "${var.pod_cidr}"
service_cidr = "${var.service_cidr}"
experimental_self_hosted_etcd = "true"
}

@@ -0,0 +1,142 @@
---
systemd:
units:
- name: docker.service
enable: true
- name: locksmithd.service
mask: true
- name: wait-for-dns.service
enable: true
contents: |
[Unit]
Description=Wait for DNS entries
Wants=systemd-resolved.service
Before=kubelet.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
[Install]
RequiredBy=kubelet.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=coreos-metadata.service
After=coreos-metadata.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--require-kubeconfig \
--client-ca-file=/etc/kubernetes/ca.crt \
--anonymous-auth=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: bootkube.service
contents: |
[Unit]
Description=Bootstrap a Kubernetes cluster
ConditionPathExists=!/opt/bootkube/init_bootkube.done
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/bootkube
ExecStart=/opt/bootkube/bootkube-start
ExecStartPost=/bin/touch /opt/bootkube/init_bootkube.done
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: ${kubeconfig_server}
certificate-authority-data: ${kubeconfig_ca_cert}
users:
- name: kubelet
user:
client-certificate-data: ${kubeconfig_kubelet_cert}
client-key-data: ${kubeconfig_kubelet_key}
contexts:
- context:
cluster: local
user: kubelet
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /opt/bootkube/bootkube-start
filesystem: root
mode: 0544
user:
id: 500
group:
id: 500
contents:
inline: |
#!/bin/bash
# Wrapper for bootkube start
set -e
# Move experimental manifests
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume assets,kind=host,source=$${BOOTKUBE_ASSETS} \
--mount volume=assets,target=/assets \
--volume bootstrap,kind=host,source=/etc/kubernetes \
--mount volume=bootstrap,target=/etc/kubernetes \
$${RKT_OPTS} \
$${BOOTKUBE_ACI}:$${BOOTKUBE_VERSION} \
--net=host \
--dns=host \
--exec=/bootkube -- start --asset-dir=/assets "$@"
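
An editorial aside on the `$${...}` escapes above: this file is rendered by Terraform's `template_file` data source, so `$$` collapses to a literal `$` for the shell at runtime, while `${k8s_dns_service_ip}` is interpolated by Terraform. The bare-metal templates instead use `{{.var}}` because matchbox renders them with Go templates, leaving shell `${VAR}` alone. A minimal sketch, with a hypothetical demo.tmpl containing `echo $${HOME} ${greeting}`:

data "template_file" "escape_demo" {
  template = "${file("demo.tmpl")}"

  vars {
    greeting = "hello"
  }
}

# rendered result: echo ${HOME} hello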

@@ -0,0 +1,126 @@
---
systemd:
units:
- name: docker.service
enable: true
- name: locksmithd.service
mask: true
- name: wait-for-dns.service
enable: true
contents: |
[Unit]
Description=Wait for DNS entries
Wants=systemd-resolved.service
Before=kubelet.service
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c 'while ! /usr/bin/grep '^[^#[:space:]]' /etc/resolv.conf > /dev/null; do sleep 1; done'
[Install]
RequiredBy=kubelet.service
- name: kubelet.service
enable: true
contents: |
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=coreos-metadata.service
After=coreos-metadata.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
EnvironmentFile=/run/metadata/coreos
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--require-kubeconfig \
--client-ca-file=/etc/kubernetes/ca.crt \
--anonymous-auth=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=$${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0} \
--pod-manifest-path=/etc/kubernetes/manifests \
--allow-privileged \
--node-labels=node-role.kubernetes.io/node \
--cluster_dns=${k8s_dns_service_ip} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
- name: delete-node.service
enable: true
contents: |
[Unit]
Description=Waiting to delete Kubernetes node on shutdown
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/etc/kubernetes/delete-node
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/kubeconfig
filesystem: root
mode: 0644
contents:
inline: |
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: ${kubeconfig_server}
certificate-authority-data: ${kubeconfig_ca_cert}
users:
- name: kubelet
user:
client-certificate-data: ${kubeconfig_kubelet_cert}
client-key-data: ${kubeconfig_kubelet_key}
contexts:
- context:
cluster: local
user: kubelet
- path: /etc/kubernetes/kubelet.env
filesystem: root
mode: 0644
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
inline: |
fs.inotify.max_user_watches=16184
- path: /etc/kubernetes/delete-node
filesystem: root
mode: 0744
contents:
inline: |
#!/bin/bash
set -e
exec /usr/bin/rkt run \
--trust-keys-from-https \
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
quay.io/coreos/hyperkube:v1.7.1_coreos.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -0,0 +1,58 @@
# Controller DNS records
resource "digitalocean_record" "controllers" {
count = "${var.controller_count}"
# DNS zone where record should be created
domain = "${var.dns_zone}"
name = "${var.cluster_name}"
type = "A"
ttl = 300
value = "${element(digitalocean_droplet.controllers.*.ipv4_address, count.index)}"
}
# Controller droplet instances
resource "digitalocean_droplet" "controllers" {
count = "${var.controller_count}"
name = "${var.cluster_name}-controller-${count.index}"
region = "${var.region}"
image = "${var.image}"
size = "${var.controller_type}"
# network
ipv6 = true
private_networking = true
user_data = "${data.ct_config.controller_ign.rendered}"
ssh_keys = "${var.ssh_fingerprints}"
tags = [
"${digitalocean_tag.controllers.id}"
]
}
// Tag to label controllers
resource "digitalocean_tag" "controllers" {
name = "${var.cluster_name}-controller"
}
# Controller Container Linux Config
data "template_file" "controller_config" {
template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
vars = {
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
k8s_etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
kubeconfig_ca_cert = "${module.bootkube.ca_cert}"
kubeconfig_kubelet_cert = "${module.bootkube.kubelet_cert}"
kubeconfig_kubelet_key = "${module.bootkube.kubelet_key}"
kubeconfig_server = "${module.bootkube.server}"
}
}
data "ct_config" "controller_ign" {
content = "${data.template_file.controller_config.rendered}"
pretty_print = false
}
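
For reference, `cidrhost` computes the well-known service IPs from `service_cidr`; with the default `10.3.0.0/16`, the values above work out as this sketch shows:

# cidrhost("10.3.0.0/16", 10) => "10.3.0.10"  (k8s_dns_service_ip, i.e. kube-dns)
# cidrhost("10.3.0.0/16", 15) => "10.3.0.15"  (k8s_etcd_service_ip, self-hosted etcd)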

@@ -0,0 +1,23 @@
output "controllers_dns" {
value = "${digitalocean_record.controllers.0.fqdn}"
}
output "workers_dns" {
value = "${digitalocean_record.workers.0.fqdn}"
}
output "controllers_ipv4" {
value = ["${digitalocean_droplet.controllers.*.ipv4_address}"]
}
output "controllers_ipv6" {
value = ["${digitalocean_droplet.controllers.*.ipv6_address}"]
}
output "workers_ipv4" {
value = ["${digitalocean_droplet.workers.*.ipv4_address}"]
}
output "workers_ipv6" {
value = ["${digitalocean_droplet.workers.*.ipv6_address}"]
}

@@ -0,0 +1,25 @@
# Secure copy bootkube assets to ONE controller and start bootkube to perform
# one-time self-hosted cluster bootstrapping.
resource "null_resource" "bootkube-start" {
depends_on = ["module.bootkube", "digitalocean_droplet.controllers"]
connection {
type = "ssh"
host = "${digitalocean_droplet.controllers.0.ipv4_address}"
user = "core"
timeout = "15m"
}
provisioner "file" {
source = "${var.asset_dir}"
destination = "$HOME/assets"
}
provisioner "remote-exec" {
inline = [
"sudo mv /home/core/assets /opt/bootkube",
"sudo systemctl start bootkube",
]
}
}

@@ -0,0 +1,71 @@
variable "cluster_name" {
type = "string"
description = "Unique cluster name"
}
variable "region" {
type = "string"
description = "Digital Ocean region (e.g. nyc1, sfo2, fra1, tor1)"
}
variable "dns_zone" {
type = "string"
description = "Digital Ocean domain name (i.e. DNS zone with NS records) (e.g. digital-ocean.dghubble.io)"
}
variable "image" {
type = "string"
description = "OS image from which to initialize the disk (e.g. coreos-stable)"
}
variable "controller_type" {
type = "string"
default = "1gb"
description = "Digital Ocean droplet type or size (e.g. 2gb, 4gb, 8gb). Do not choose a value below 2gb."
}
variable "controller_count" {
type = "string"
default = "1"
description = "Number of controllers"
}
variable "worker_type" {
type = "string"
default = "512mb"
description = "Digital Ocean droplet type or size (e.g. 512mb, 1gb, 2gb, 4gb)"
}
variable "worker_count" {
type = "string"
default = "1"
description = "Number of workers"
}
variable "ssh_fingerprints" {
type = "list"
description = "SSH public key fingerprints. Use ssh-add -l -E md5."
}
# bootkube assets
variable "asset_dir" {
description = "Path to a directory where generated assets should be placed (contains secrets)"
type = "string"
}
variable "pod_cidr" {
description = "CIDR IP range to assign Kubernetes pods"
type = "string"
default = "10.2.0.0/16"
}
variable "service_cidr" {
description = <<EOD
CIDR IP range to assign Kubernetes services.
The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for kube-dns, the 15th IP will be reserved for self-hosted etcd, and the 200th IP will be reserved for bootstrap self-hosted etcd.
EOD
type = "string"
default = "10.3.0.0/16"
}
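
As with bare-metal, a hypothetical invocation of the Digital Ocean module (all values illustrative; the module source assumes the repo lives at github.com/purenetes/purenetes):

module "digital-ocean-nemo" {
  source = "git::https://github.com/purenetes/purenetes.git//digital-ocean/container-linux/kubernetes"

  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"
  image        = "coreos-stable"

  controller_count = "1"
  controller_type  = "2gb"
  worker_count     = "2"
  worker_type      = "512mb"

  # fingerprints from `ssh-add -l -E md5` (hypothetical)
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]

  asset_dir = "/home/user/.secrets/clusters/nemo"
}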

@@ -0,0 +1,58 @@
# Worker DNS records
resource "digitalocean_record" "workers" {
count = "${var.worker_count}"
# DNS zone where record should be created
domain = "${var.dns_zone}"
name = "${var.cluster_name}-workers"
type = "A"
ttl = 300
value = "${element(digitalocean_droplet.workers.*.ipv4_address, count.index)}"
}
# Worker droplet instances
resource "digitalocean_droplet" "workers" {
count = "${var.worker_count}"
name = "${var.cluster_name}-worker-${count.index}"
region = "${var.region}"
image = "${var.image}"
size = "${var.worker_type}"
# network
ipv6 = true
private_networking = true
user_data = "${data.ct_config.worker_ign.rendered}"
ssh_keys = "${var.ssh_fingerprints}"
tags = [
"${digitalocean_tag.workers.id}"
]
}
// Tag to label workers
resource "digitalocean_tag" "workers" {
name = "${var.cluster_name}-worker"
}
# Worker Container Linux Config
data "template_file" "worker_config" {
template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
vars = {
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
k8s_etcd_service_ip = "${cidrhost(var.service_cidr, 15)}"
kubeconfig_ca_cert = "${module.bootkube.ca_cert}"
kubeconfig_kubelet_cert = "${module.bootkube.kubelet_cert}"
kubeconfig_kubelet_key = "${module.bootkube.kubelet_key}"
kubeconfig_server = "${module.bootkube.server}"
}
}
data "ct_config" "worker_ign" {
content = "${data.template_file.worker_config.rendered}"
pretty_print = false
}

@@ -101,7 +101,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
-KUBELET_IMAGE_TAG=v1.6.7_coreos.0
+KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@@ -123,7 +123,7 @@ storage:
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
-BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.5.1}"
+BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

@@ -16,9 +16,9 @@ resource "google_compute_instance_group_manager" "controllers" {
]
}
-# bootkube-controller Container Linux config
+# Controller Container Linux Config
data "template_file" "controller_config" {
-template = "${file("${path.module}/cl/bootkube-controller.yaml.tmpl")}"
+template = "${file("${path.module}/cl/controller.yaml.tmpl")}"
vars = {
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
@@ -38,7 +38,7 @@ data "ct_config" "controller_ign" {
resource "google_compute_instance_template" "controller" {
name_prefix = "${var.cluster_name}-controller-"
description = "bootkube-controller Instance template"
description = "Controller Instance template"
machine_type = "${var.machine_type}"
metadata {

@@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/dghubble/bootkube-terraform.git?ref=v0.5.1"
source = "git::https://github.com/purenetes/bootkube-terraform.git?ref=v0.6.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]

@@ -1,5 +1,5 @@
module "controllers" {
source = "../gce-bootkube-controller"
source = "../controllers"
cluster_name = "${var.cluster_name}"
ssh_authorized_key = "${var.ssh_authorized_key}"
@@ -23,7 +23,7 @@ module "controllers" {
}
module "workers" {
source = "../gce-bootkube-worker"
source = "../workers"
cluster_name = "${var.cluster_name}"
ssh_authorized_key = "${var.ssh_authorized_key}"

@@ -42,7 +42,7 @@ variable "os_image" {
variable "controller_count" {
type = "string"
default = "1"
description = "Number of workers"
description = "Number of controllers"
}
variable "worker_count" {

@@ -99,7 +99,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
-KUBELET_IMAGE_TAG=v1.6.7_coreos.0
+KUBELET_IMAGE_TAG=v1.7.1_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@@ -116,7 +116,7 @@ storage:
--trust-keys-from-https \
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
-quay.io/coreos/hyperkube:v1.6.7_coreos.0 \
+quay.io/coreos/hyperkube:v1.7.1_coreos.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

@@ -16,9 +16,9 @@ resource "google_compute_instance_group_manager" "workers" {
]
}
-# bootkube-worker Container Linux config
+# Worker Container Linux Config
data "template_file" "worker_config" {
-template = "${file("${path.module}/cl/bootkube-worker.yaml.tmpl")}"
+template = "${file("${path.module}/cl/worker.yaml.tmpl")}"
vars = {
k8s_dns_service_ip = "${cidrhost(var.service_cidr, 10)}"
@@ -38,7 +38,7 @@ data "ct_config" "worker_ign" {
resource "google_compute_instance_template" "worker" {
name_prefix = "${var.cluster_name}-worker-"
description = "bootkube-worker Instance template"
description = "Worker Instance template"
machine_type = "${var.machine_type}"
metadata {