Compare commits


8 Commits

Author SHA1 Message Date
1bc25c1036 Update Kubernetes from v1.7.5 to v1.7.7
* Update from bootkube v0.6.2 to v0.7.0
* Use renamed terraform-render-bootkube. Renamed from bootkube-terraform to meet Terraform Module requirements
2017-10-03 21:03:15 -07:00
2d5a4ae1ef Update kube-dns image to address dnsmasq vulnerability
* https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html
2017-10-02 10:27:10 -07:00
1ab27ae1f1 Fix status of the google-cloud module to production 2017-10-01 21:41:08 -07:00
def84aa5a0 docs: Add details about security features 2017-10-01 21:38:52 -07:00
dd883988bd Update from Calico v2.5.1 to v2.6.1
* Network policy improvements
* Update cni sidecar image from v1.10.0 to v1.11.0
* Lower log level in Calico CNI config from debug to info
2017-09-30 16:16:40 -07:00
e0d8917573 Add LICENSE to top-level of each module 2017-09-28 20:41:19 -07:00
f7f983c7da docs: Add docs and addons for Nginx AWS Ingress 2017-09-28 01:09:31 -07:00
b20233e05d aws: Add Ingress ELB DNS name output as ingress_dns_name
* Expose the Ingress ELB DNS name so application DNS records can be defined in Terraform to resolve to the Ingress ELB
2017-09-28 00:46:17 -07:00
42 changed files with 513 additions and 86 deletions

View File

@ -2,10 +2,19 @@
Notable changes between versions.
## v1.7.7
* Kubernetes v1.7.7
* Use kubernetes-incubator/bootkube v0.7.0
* Update kube-dns to 1.14.5 to fix dnsmasq [vulnerability](https://security.googleblog.com/2017/10/behind-masq-yet-more-dns-and-dhcp.html)
* Calico v2.6.1
* flannel-cni v0.3.0
* Update flannel CNI config to fix hostPort
## v1.7.5
* Kubernetes v1.7.5
* Use kubernete-incubator/bootkube v0.6.2
* Use kubernetes-incubator/bootkube v0.6.2
* Add AWS Terraform module (alpha)
* Add support for Calico networking (bare-metal, Google Cloud, AWS)
* Change networking default from "flannel" to "calico"

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics, and other optional [addons](https://typhoon.psdn.io/addons/overview/)
@ -25,7 +25,7 @@ Typhoon provides a Terraform Module for each supported operating system and plat
| AWS | Container Linux | [aws/container-linux/kubernetes](aws/container-linux/kubernetes) | alpha |
| Bare-Metal | Container Linux | [bare-metal/container-linux/kubernetes](bare-metal/container-linux/kubernetes) | production |
| Digital Ocean | Container Linux | [digital-ocean/container-linux/kubernetes](digital-ocean/container-linux/kubernetes) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | beta |
| Google Cloud | Container Linux | [google-cloud/container-linux/kubernetes](google-cloud/container-linux/kubernetes) | production |
## Usage
@ -78,9 +78,9 @@ In 5-10 minutes (varies by platform), the cluster will be ready. This Google Clo
$ KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.5+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.7+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.7+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.7+coreos.0
```
List the pods.

View File

@ -0,0 +1,36 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-backend
  namespace: ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: default-backend
        phase: prod
    spec:
      containers:
      - name: default-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
      terminationGracePeriodSeconds: 60

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: ingress
spec:
  type: ClusterIP
  selector:
    name: default-backend
    phase: prod
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080

View File

@ -0,0 +1,61 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: nginx-ingress-controller
        phase: prod
    spec:
      nodeSelector:
        node-role.kubernetes.io/node: ""
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
        - --ingress-class=public
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: health
          containerPort: 10254
          hostPort: 10254
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          initialDelaySeconds: 11
          timeoutSeconds: 1
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
      hostNetwork: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 60

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: ingress

View File

@ -0,0 +1,12 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default

View File

@ -0,0 +1,51 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update

View File

@ -0,0 +1,13 @@
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress
subjects:
- kind: ServiceAccount
  namespace: ingress
  name: default

View File

@ -0,0 +1,41 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ingress
  namespace: ingress
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-public"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  type: ClusterIP
  selector:
    name: nginx-ingress-controller
    phase: prod
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443

View File

@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=v0.7.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -105,7 +105,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -128,7 +128,7 @@ storage:
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.2}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.7.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -103,7 +103,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -120,7 +120,7 @@ storage:
--trust-keys-from-https \
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
quay.io/coreos/hyperkube:v1.7.5_coreos.0 \
quay.io/coreos/hyperkube:v1.7.7_coreos.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -18,7 +18,7 @@ resource "aws_elb" "ingress" {
instance_protocol = "tcp"
}
# Kubelet HTTP health check
# Ingress Controller HTTP health check
health_check {
target = "HTTP:10254/healthz"
healthy_threshold = 2

View File

@ -0,0 +1,5 @@
output "ingress_dns_name" {
  value       = "${aws_elb.ingress.dns_name}"
  description = "DNS name of the ELB for distributing traffic to Ingress controllers"
}

View File

@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/bootkube-terraform.git?ref=3b8d7620810ec8077672801bb4af7cd41e97253f"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=v0.7.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${var.k8s_domain_name}"]

View File

@ -130,7 +130,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644
@ -159,7 +159,7 @@ storage:
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.2}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.7.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -96,7 +96,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -96,7 +96,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/hostname
filesystem: root
mode: 0644

View File

@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=v0.7.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -96,7 +96,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -119,7 +119,7 @@ storage:
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.2}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.7.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -94,7 +94,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -111,7 +111,7 @@ storage:
--trust-keys-from-https \
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
quay.io/coreos/hyperkube:v1.7.5_coreos.0 \
quay.io/coreos/hyperkube:v1.7.7_coreos.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)

View File

@ -2,6 +2,66 @@
Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP traffic to backend services. Ingress controllers watch the Kubernetes API for Ingress resources and update their configuration accordingly. Ingress resources for HTTP(S) applications support virtual hosts (FQDNs), path rules, TLS termination, and SNI.
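Applications route traffic through these controllers by defining Ingress resources with the `public` ingress class, matching the controllers' `--ingress-class=public` flag. A minimal sketch, with a hypothetical host and backend service:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "public"
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1
          servicePort: 80
```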
## AWS
On AWS, an elastic load balancer distributes traffic across worker nodes (i.e. an auto-scaling group) running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, default backend, and namespace.
```
kubectl apply -R -f addons/nginx-ingress/aws
```
For each application, add a DNS CNAME resolving to the ELB's DNS record.
```
app1.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
app2.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
app3.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
```
Find the ELB's DNS name through the console or use the Typhoon module's output `ingress_dns_name`. For example, you might use Terraform to manage a Google Cloud DNS record:
```tf
resource "google_dns_record_set" "some-application" {
# DNS zone name
managed_zone = "example-zone"
# DNS record
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.aws-tempest.ingress_dns_name}."]
}
```
## Digital Ocean
On Digital Ocean, a DNS A record (e.g. `nemo-workers.example.com`) resolves to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.
Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, default backend, and namespace.
```
kubectl apply -R -f addons/nginx-ingress/digital-ocean
```
For each application, add a CNAME record resolving to the worker(s) DNS record. Use the Typhoon module's output `workers_dns` to find the worker DNS value. For example, you might use Terraform to manage a Google Cloud DNS record:
```tf
resource "google_dns_record_set" "some-application" {
# DNS zone name
managed_zone = "example-zone"
# DNS record
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.digital-ocean-nemo.workers_dns}."]
}
```
[^1]: Digital Ocean does offer load balancers. We've opted not to use them to keep the Digital Ocean setup simple and cheap for developers.
## Google Cloud
On Google Cloud, a network load balancer distributes traffic across worker nodes (i.e. a target pool of backends) running an Ingress controller deployment on host ports 80 and 443. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure the target pool only includes worker nodes with a healthy Nginx Ingress controller.
@ -12,7 +72,7 @@ Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, de
kubectl apply -R -f addons/nginx-ingress/google-cloud
```
Add a DNS record resolving to the network load balancer's IPv4 address for each application.
For each application, add a DNS record resolving to the network load balancer's IPv4 address.
```
app1.example.com -> 11.22.33.44
@ -35,33 +95,6 @@ resource "google_dns_record_set" "some-application" {
}
```
## Digital Ocean
On Digital Ocean, a DNS A record (e.g. `nemo-workers.example.com`) resolves to each worker[^1] running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.
Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, default backend, and namespace.
```
kubectl apply -R -f addons/nginx-ingress/digital-ocean
```
Add a CNAME record to the worker DNS record for each application. Use the Typhoon module's output `workers_dns` to find the worker DNS value. For example, you might use Terraform to manage a Google Cloud DNS record:
```tf
resource "google_dns_record_set" "some-application" {
# DNS zone name
managed_zone = "example-zone"
# DNS record
name = "app.example.com."
type = "CNAME"
ttl = 300
rrdatas = ["${module.digital-ocean-nemo.workers_dns}."]
}
```
[^1]: Digital Ocean does offer load balancers. We've opted not to use them to keep the Digital Ocean setup simple and cheap for developers.
## Bare-Metal
On bare-metal, routing traffic to Ingress controller pods can be done in a number of ways.

View File

@ -1,6 +1,6 @@
# AWS
In this tutorial, we'll create a Kubernetes v1.7.5 cluster on AWS.
In this tutorial, we'll create a Kubernetes v1.7.7 cluster on AWS.
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a VPC, gateway, subnets, auto-scaling groups of controllers and workers, network load balancers for controllers and workers, and security groups will be created.
@ -125,7 +125,7 @@ Get or update Terraform modules.
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2 (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.7.0 (update)
```
Plan the resources to be created.
@ -160,9 +160,9 @@ In 10-20 minutes, the Kubernetes cluster will be ready.
$ KUBECONFIG=/home/user/.secrets/clusters/tempest/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
ip-10-0-12-221 Ready 34m v1.7.5+coreos.0
ip-10-0-19-112 Ready 34m v1.7.5+coreos.0
ip-10-0-4-22 Ready 34m v1.7.5+coreos.0
ip-10-0-12-221 Ready 34m v1.7.7+coreos.0
ip-10-0-19-112 Ready 34m v1.7.7+coreos.0
ip-10-0-4-22 Ready 34m v1.7.7+coreos.0
```
List the pods.

View File

@ -1,6 +1,6 @@
# Bare-Metal
In this tutorial, we'll network boot and provision a Kubernetes v1.7.5 cluster on bare-metal.
In this tutorial, we'll network boot and provision a Kubernetes v1.7.7 cluster on bare-metal.
First, we'll deploy a [Matchbox](https://github.com/coreos/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers.
@ -228,7 +228,7 @@ Get or update Terraform modules.
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2 (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.7.0 (update)
```
Plan the resources to be created.
@ -304,9 +304,9 @@ bootkube[5]: Tearing down temporary bootstrap control plane...
$ KUBECONFIG=/home/user/.secrets/clusters/mercury/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
node1.example.com Ready 11m v1.7.5+coreos.0
node2.example.com Ready 11m v1.7.5+coreos.0
node3.example.com Ready 11m v1.7.5+coreos.0
node1.example.com Ready 11m v1.7.7+coreos.0
node2.example.com Ready 11m v1.7.7+coreos.0
node3.example.com Ready 11m v1.7.7+coreos.0
```
List the pods.

View File

@ -60,7 +60,7 @@ Modules are updated regularly, set the version to a [release tag](https://github
```tf
...
source = "git:https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.7.5"
source = "git:https://github.com/poseidon/typhoon//google-cloud/container-linux/kubernetes?ref=v1.7.7"
```
Module versioning ensures `terraform get --update` only fetches the desired version, so plan and apply don't change cluster resources, unless the version is altered.

View File

@ -1,6 +1,6 @@
# Digital Ocean
In this tutorial, we'll create a Kubernetes v1.7.5 cluster on Digital Ocean.
In this tutorial, we'll create a Kubernetes v1.7.7 cluster on Digital Ocean.
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, firewall rules, DNS records, tags, and droplets for Kubernetes controllers and workers will be created.
@ -114,7 +114,7 @@ Get or update Terraform modules.
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2 (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.7.0 (update)
```
Plan the resources to be created.
@ -147,9 +147,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ KUBECONFIG=/home/user/.secrets/clusters/nemo/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
10.132.110.130 Ready 10m v1.7.5+coreos.0
10.132.115.81 Ready 10m v1.7.5+coreos.0
10.132.124.107 Ready 10m v1.7.5+coreos.0
10.132.110.130 Ready 10m v1.7.7+coreos.0
10.132.115.81 Ready 10m v1.7.7+coreos.0
10.132.124.107 Ready 10m v1.7.7+coreos.0
```
List the pods.

View File

@ -1,6 +1,6 @@
# Google Cloud
In this tutorial, we'll create a Kubernetes v1.7.5 cluster on Google Compute Engine (not GKE).
In this tutorial, we'll create a Kubernetes v1.7.7 cluster on Google Compute Engine (not GKE).
We'll declare a Kubernetes cluster in Terraform using the Typhoon Terraform module. On apply, a network, firewall rules, managed instance groups of Kubernetes controllers and workers, network load balancers for controllers and workers, and health checks will be created.
@ -120,7 +120,7 @@ Get or update Terraform modules.
$ terraform get # downloads missing modules
$ terraform get --update # updates all modules
Get: git::https://github.com/poseidon/typhoon (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2 (update)
Get: git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.7.0 (update)
```
Plan the resources to be created.
@ -154,9 +154,9 @@ In 5-10 minutes, the Kubernetes cluster will be ready.
$ KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.5+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.7+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.7+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.7+coreos.0
```
List the pods.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics and other optional [addons](addons/overview.md)
@ -77,9 +77,9 @@ In 5-10 minutes (varies by platform), the cluster will be ready. This Google Clo
$ KUBECONFIG=/home/user/.secrets/clusters/yavin/auth/kubeconfig
$ kubectl get nodes
NAME STATUS AGE VERSION
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.5+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.5+coreos.0
yavin-controller-1682.c.example-com.internal Ready 6m v1.7.7+coreos.0
yavin-worker-jrbf.c.example-com.internal Ready 5m v1.7.7+coreos.0
yavin-worker-mzdm.c.example-com.internal Ready 5m v1.7.7+coreos.0
```
List the pods.

View File

@ -2,11 +2,47 @@
Typhoon aims to be minimal and secure. We're running it ourselves after all.
## OpenPGP
## Overview
**Kubernetes**
* etcd with peer-to-peer and client-auth TLS
* Generated kubelet TLS certificates and `kubeconfig` (365 days)
* [Role-Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) is enabled. Apps must define RBAC policies (see the sketch below)
* Workloads run on worker nodes only, unless they tolerate the master taint
* Kubernetes [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and Calico [Policy](https://docs.projectcalico.org/latest/reference/calicoctl/resources/policy) support [^1]
[^1]: Requires `networking = "calico"`. Calico is the default on AWS, bare-metal, and Google Cloud. Digital Ocean is limited to `networking = "flannel"`.
**Hosts**
* Container Linux auto-updates are enabled
* Hosts limit logins to SSH key-based auth (user "core")
**Platform**
* Cloud firewalls limit access to ssh, kube-apiserver, and ingress
* No cluster credentials are stored in Matchbox (used for bare-metal)
* No cluster credentials are stored in Digital Ocean metadata
* Cluster credentials are stored in Google Cloud metadata (for managed instance groups)
* Cluster credentials are stored in AWS metadata (for ASGs)
* No account credentials are available to Google Cloud instances (no IAM permissions)
* No account credentials are available to AWS EC2 instances (no IAM permissions)
* No account credentials are available to Digital Ocean droplets
## Precautions
Typhoon limits exposure to many security threats, but it is not a silver bullet. As usual,
* Do not run untrusted images or accept manifests from strangers
* Do not give untrusted users a shell behind your firewall
* Define network policies for your namespaces
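For the last point, a minimal sketch of a default-deny policy that blocks ingress traffic to all pods in a namespace (the `example` namespace is a placeholder); per-app allow policies would be added alongside it:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: example
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```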
## OpenPGP Signing
Typhoon uses upstream container images and binaries. We do not currently distribute materials of our own.
## Disclosures
If you find security issues, please see [security disclosures](/topics/security). If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.
If you find security issues, please email dghubble at gmail. If the issue lies in upstream Kubernetes, please inform upstream Kubernetes as well.

View File

@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2017 Typhoon Authors
Copyright (c) 2017 Dalton Hubble
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
## Features
* Kubernetes v1.7.5 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Kubernetes v1.7.7 (upstream, via [kubernetes-incubator/bootkube](https://github.com/kubernetes-incubator/bootkube))
* Single or multi-master, workloads isolated on workers, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* Ready for Ingress, Dashboards, Metrics, and other optional [addons](https://typhoon.psdn.io/addons/overview/)

View File

@ -1,6 +1,6 @@
# Self-hosted Kubernetes assets (kubeconfig, manifests)
module "bootkube" {
source = "git::https://github.com/poseidon/bootkube-terraform.git?ref=v0.6.2"
source = "git::https://github.com/poseidon/terraform-render-bootkube.git?ref=v0.7.0"
cluster_name = "${var.cluster_name}"
api_servers = ["${format("%s.%s", var.cluster_name, var.dns_zone)}"]

View File

@ -105,7 +105,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -128,7 +128,7 @@ storage:
[ -d /opt/bootkube/assets/experimental/manifests ] && mv /opt/bootkube/assets/experimental/manifests/* /opt/bootkube/assets/manifests && rm -r /opt/bootkube/assets/experimental/manifests
[ -d /opt/bootkube/assets/experimental/bootstrap-manifests ] && mv /opt/bootkube/assets/experimental/bootstrap-manifests/* /opt/bootkube/assets/bootstrap-manifests && rm -r /opt/bootkube/assets/experimental/bootstrap-manifests
BOOTKUBE_ACI="$${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.6.2}"
BOOTKUBE_VERSION="$${BOOTKUBE_VERSION:-v0.7.0}"
BOOTKUBE_ASSETS="$${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
exec /usr/bin/rkt run \
--trust-keys-from-https \

View File

@ -103,7 +103,7 @@ storage:
contents:
inline: |
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_IMAGE_TAG=v1.7.5_coreos.0
KUBELET_IMAGE_TAG=v1.7.7_coreos.0
- path: /etc/sysctl.d/max-user-watches.conf
filesystem: root
contents:
@ -120,7 +120,7 @@ storage:
--trust-keys-from-https \
--volume config,kind=host,source=/etc/kubernetes \
--mount volume=config,target=/etc/kubernetes \
quay.io/coreos/hyperkube:v1.7.5_coreos.0 \
quay.io/coreos/hyperkube:v1.7.7_coreos.0 \
--net=host \
--dns=host \
--exec=/kubectl -- --kubeconfig=/etc/kubernetes/kubeconfig delete node $(hostname)