mirror of https://github.com/puppetmaster/typhoon.git (synced 2025-04-22 10:01:10 +02:00)

Compare commits: 1 commit (743650c37a)
CHANGES.md — 58 lines changed
@@ -4,57 +4,11 @@ Notable changes between versions.
 ## Latest
 
-## v1.31.3
-
-* Kubernetes [v1.31.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1312)
-* Update CoreDNS from v1.11.3 to v1.11.4
-* Update Cilium from v1.16.3 to [v1.16.4](https://github.com/cilium/cilium/releases/tag/v1.16.4)
-
-### Deprecations
-
-* Plan to drop support for using Calico CNI, recommend everyone use the Cilium default
-
-## v1.31.2
-
-* Kubernetes [v1.31.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1312)
-* Update Cilium from v1.16.1 to [v1.16.3](https://github.com/cilium/cilium/releases/tag/v1.16.3)
-* Update flannel from v0.25.6 to [v0.26.0](https://github.com/flannel-io/flannel/releases/tag/v0.26.0)
-
-## v1.31.1
-
-* Kubernetes [v1.31.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1311)
-* Update flannel from v0.25.5 to [v0.25.6](https://github.com/flannel-io/flannel/releases/tag/v0.25.6)
-
-### Google
-
-* Add `controller_disk_type` and `worker_disk_type` variables ([#1513](https://github.com/poseidon/typhoon/pull/1513))
-* Add explicit `region` field to regional worker instance templates ([#1524](https://github.com/poseidon/typhoon/pull/1524))
-
-## v1.31.0
-
-* Kubernetes [v1.31.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#v1310)
-* Use Cilium kube-proxy replacement mode when `cilium` networking is chosen ([#1501](https://github.com/poseidon/typhoon/pull/1501))
-* Fix invalid flannel-cni container image for those using `flannel` networking ([#1497](https://github.com/poseidon/typhoon/pull/1497))
-
-### AWS
-
-* Use EC2 resource-based hostnames instead of IP-based hostnames ([#1499](https://github.com/poseidon/typhoon/pull/1499))
-  * The Amazon DNS server can resolve A and AAAA queries to IPv4 and IPv6 node addresses
-* Tag controller node EBS volumes with a name based on the controller node name
-
-### Google
-
-* Use `google_compute_region_instance_template` instead of `google_compute_instance_template`
-  * Google's regional instance template metadata is kept in the associated region for greater resiliency. The "global" instance templates were kept in a single region
-
-## v1.30.4
-
-* Kubernetes [v1.30.4](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1304)
-* Update Cilium from v1.15.7 to [v1.16.1](https://github.com/cilium/cilium/releases/tag/v1.16.1)
-* Update CoreDNS from v1.11.1 to v1.11.3
-* Remove `enable_aggregation` variable for Kubernetes Aggregation Layer, always set to true
-* Remove `cluster_domain_suffix` variable, always use "cluster.local"
-* Remove `enable_reporting` variable for analytics, always set to false
+### Azure
+
+* Allow controller and worker nodes to use different CPU architectures
+  * Add `controller_arch` and `worker_arch` variables
+  * Remove the `arch` variable
 
 ## v1.30.3
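The `controller_arch`/`worker_arch` pair noted above replaces the single `arch` variable and lets one cluster mix CPU architectures. As a rough sketch of how the pair would be set on an AWS module block (the cluster values here are illustrative, and other required variables are omitted):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

  cluster_name = "tempest" # hypothetical cluster; other required variables omitted

  # amd64 controllers alongside arm64 workers
  controller_arch = "amd64"
  worker_arch     = "arm64"
  worker_type     = "t4g.small" # a Graviton (arm64) instance type
}
```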
@@ -64,12 +18,12 @@ Notable changes between versions.
 
 ### AWS
 
-* Configure controller and worker disks ([#1482](https://github.com/poseidon/typhoon/pull/1482))
+* Allow configuring controller and worker disks ([#1482](https://github.com/poseidon/typhoon/pull/1482))
   * Add `controller_disk_type`, `controller_disk_size`, and `controller_disk_iops` variables
   * Add `worker_disk_type`, `worker_disk_size`, and `worker_disk_iops` variables
   * Remove `disk_type`, `disk_size`, and `disk_iops` variables
   * Fix propagating settings to worker disks, previously ignored
-* Configure CPU pricing model for burstable instance types ([#1482](https://github.com/poseidon/typhoon/pull/1482))
+* Allow configuring CPU pricing model for burstable instance types ([#1482](https://github.com/poseidon/typhoon/pull/1482))
   * Add `controller_cpu_credits` and `worker_cpu_credits` variables (`standard` or `unlimited`)
 * Configure controller or worker instance architecture ([#1485](https://github.com/poseidon/typhoon/pull/1485))
   * Add `controller_arch` and `worker_arch` variables (`amd64` or `arm64`)
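For the disk and CPU-credit variables listed above, a sketch of the resulting module configuration (values are illustrative, not taken from this diff):

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

  # per-role disk settings replace the old shared disk_type/disk_size/disk_iops
  controller_disk_type = "gp3"
  controller_disk_size = 30
  controller_disk_iops = 3000
  worker_disk_type     = "gp3"
  worker_disk_size     = 30
  worker_disk_iops     = 3000

  # burstable instance CPU pricing: "standard" or "unlimited"
  controller_cpu_credits = "standard"
  worker_cpu_credits     = "unlimited"
}
```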
README.md — 18 lines changed
@@ -18,7 +18,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -78,7 +78,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platform
 
 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.3"
 
   # Google Cloud
   cluster_name = "yavin"
@@ -98,7 +98,6 @@ module "yavin" {
 resource "local_file" "kubeconfig-yavin" {
   content         = module.yavin.kubeconfig-admin
   filename        = "/home/user/.kube/configs/yavin-config"
-  file_permission = "0600"
 }
 ```
 
@@ -118,9 +117,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Cloud
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                        ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.31.3
-yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.31.3
-yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.31.3
+yavin-controller-0.c.example-com.internal   <none>  Ready   6m   v1.30.3
+yavin-worker-jrbf.c.example-com.internal    <none>  Ready   5m   v1.30.3
+yavin-worker-mzdm.c.example-com.internal    <none>  Ready   5m   v1.30.3
 ```
 
 List the pods.
@@ -128,10 +127,9 @@ List the pods.
 ```
 $ kubectl get pods --all-namespaces
 NAMESPACE     NAME                               READY  STATUS   RESTARTS  AGE
-kube-system   cilium-1cs8z                       1/1    Running  0         6m
-kube-system   cilium-d1l5b                       1/1    Running  0         6m
-kube-system   cilium-sp9ps                       1/1    Running  0         6m
-kube-system   cilium-operator-68d778b448-g744f   1/1    Running  0         6m
+kube-system   calico-node-1cs8z                  2/2    Running  0         6m
+kube-system   calico-node-d1l5b                  2/2    Running  0         6m
+kube-system   calico-node-sp9ps                  2/2    Running  0         6m
 kube-system   coredns-1187388186-zj5dl           1/1    Running  0         6m
 kube-system   coredns-1187388186-dkh3o           1/1    Running  0         6m
 kube-system   kube-apiserver-controller-0        1/1    Running  0         6m
@@ -128,8 +128,8 @@ resource "kubernetes_config_map" "cilium" {
     enable-bpf-masquerade = "true"
 
     # kube-proxy
-    kube-proxy-replacement = "true"
-    kube-proxy-replacement-healthz-bind-address = ":10256"
+    kube-proxy-replacement = "false"
+    kube-proxy-replacement-healthz-bind-address = ""
     enable-session-affinity = "true"
 
     # ClusterIPs from host namespace
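The two keys flipped above control whether the Cilium agent takes over kube-proxy's Service load-balancing; when replacement is on, the agent also serves the healthz endpoint kube-proxy would normally bind on `:10256`. A minimal sketch of the same data as a standalone `kubernetes_config_map` (resource name and namespace are assumptions for illustration):

```tf
resource "kubernetes_config_map" "cilium_proxy_example" {
  metadata {
    name      = "cilium" # assumed; matches the resource context above
    namespace = "kube-system"
  }

  data = {
    # let the agent implement Service load-balancing in place of kube-proxy
    kube-proxy-replacement = "true"
    # expose the healthz endpoint kube-proxy would otherwise serve
    kube-proxy-replacement-healthz-bind-address = ":10256"
  }
}
```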
@@ -61,7 +61,7 @@ resource "kubernetes_daemonset" "cilium" {
         # https://github.com/cilium/cilium/pull/24075
         init_container {
           name    = "install-cni"
-          image   = "quay.io/cilium/cilium:v1.16.4"
+          image   = "quay.io/cilium/cilium:v1.16.0"
           command = ["/install-plugin.sh"]
           security_context {
             allow_privilege_escalation = true
@@ -80,7 +80,7 @@ resource "kubernetes_daemonset" "cilium" {
         # We use nsenter command with host's cgroup and mount namespaces enabled.
         init_container {
           name  = "mount-cgroup"
-          image = "quay.io/cilium/cilium:v1.16.4"
+          image = "quay.io/cilium/cilium:v1.16.0"
           command = [
             "sh",
             "-ec",
@@ -115,7 +115,7 @@ resource "kubernetes_daemonset" "cilium" {
 
         init_container {
           name    = "clean-cilium-state"
-          image   = "quay.io/cilium/cilium:v1.16.4"
+          image   = "quay.io/cilium/cilium:v1.16.0"
           command = ["/init-container.sh"]
           security_context {
             allow_privilege_escalation = true
@@ -139,7 +139,7 @@ resource "kubernetes_daemonset" "cilium" {
 
         container {
           name    = "cilium-agent"
-          image   = "quay.io/cilium/cilium:v1.16.4"
+          image   = "quay.io/cilium/cilium:v1.16.0"
           command = ["cilium-agent"]
           args = [
             "--config-dir=/tmp/cilium/config-map"
@@ -58,7 +58,7 @@ resource "kubernetes_deployment" "operator" {
         enable_service_links = false
         container {
           name    = "cilium-operator"
-          image   = "quay.io/cilium/operator-generic:v1.16.4"
+          image   = "quay.io/cilium/operator-generic:v1.16.0"
           command = ["cilium-operator-generic"]
           args = [
             "--config-dir=/tmp/cilium/config-map",
@@ -77,7 +77,7 @@ resource "kubernetes_deployment" "coredns" {
         }
         container {
           name  = "coredns"
-          image = "registry.k8s.io/coredns/coredns:v1.12.0"
+          image = "registry.k8s.io/coredns/coredns:v1.11.3"
           args  = ["-conf", "/etc/coredns/Corefile"]
           port {
             name = "dns"
@@ -73,7 +73,7 @@ resource "kubernetes_daemonset" "flannel" {
 
         container {
           name  = "flannel"
-          image = "docker.io/flannel/flannel:v0.26.1"
+          image = "docker.io/flannel/flannel:v0.25.5"
           command = [
             "/opt/bin/flanneld",
             "--ip-masq",
@@ -59,11 +59,4 @@ rules:
       - get
       - list
      - watch
-  - apiGroups:
-      - discovery.k8s.io
-    resources:
-      - "endpointslices"
-    verbs:
-      - get
-      - list
-      - watch
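The `discovery.k8s.io` rule on the removed side above grants read access to EndpointSlices, which newer components consult instead of core Endpoints. The equivalent rule expressed with the Terraform Kubernetes provider (a sketch; the role name is hypothetical):

```tf
resource "kubernetes_cluster_role" "endpointslice_reader" {
  metadata {
    name = "endpointslice-reader" # hypothetical name
  }

  rule {
    api_groups = ["discovery.k8s.io"]
    resources  = ["endpointslices"]
    verbs      = ["get", "list", "watch"]
  }
}
```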
@@ -59,11 +59,4 @@ rules:
       - get
       - list
       - watch
-  - apiGroups:
-      - discovery.k8s.io
-    resources:
-      - "endpointslices"
-    verbs:
-      - get
-      - list
-      - watch
@@ -59,11 +59,4 @@ rules:
       - get
       - list
       - watch
-  - apiGroups:
-      - discovery.k8s.io
-    resources:
-      - "endpointslices"
-    verbs:
-      - get
-      - list
-      - watch
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: nginx-ingress-controller
+  name: ingress-controller-public
   namespace: ingress
   annotations:
     prometheus.io/scrape: 'true'
@@ -10,7 +10,7 @@ spec:
   type: ClusterIP
   clusterIP: 10.3.0.12
   selector:
-    name: nginx-ingress-controller
+    name: ingress-controller-public
     phase: prod
   ports:
     - name: http
@@ -59,11 +59,4 @@ rules:
       - get
       - list
       - watch
-  - apiGroups:
-      - discovery.k8s.io
-    resources:
-      - "endpointslices"
-    verbs:
-      - get
-      - list
-      - watch
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,6 +9,9 @@ module "bootstrap" {
   network_mtu  = var.network_mtu
   pod_cidr     = var.pod_cidr
   service_cidr = var.service_cidr
+  cluster_domain_suffix = var.cluster_domain_suffix
+  enable_reporting      = var.enable_reporting
+  enable_aggregation    = var.enable_aggregation
   daemonset_tolerations = var.daemonset_tolerations
   components            = var.components
 }
@@ -57,7 +57,7 @@ systemd:
           After=afterburn.service
           Wants=rpc-statd.service
           [Service]
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           EnvironmentFile=/run/metadata/afterburn
           ExecStartPre=/bin/mkdir -p /etc/cni/net.d
           ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -116,7 +116,7 @@ systemd:
             --volume /opt/bootstrap/assets:/assets:ro,Z \
             --volume /opt/bootstrap/apply:/apply:ro,Z \
             --entrypoint=/apply \
-            quay.io/poseidon/kubelet:v1.31.3
+            quay.io/poseidon/kubelet:v1.30.3
           ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
           ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -149,7 +149,7 @@ storage:
           cgroupDriver: systemd
           clusterDNS:
             - ${cluster_dns_service_ip}
-          clusterDomain: cluster.local
+          clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
           shutdownGracePeriod: 45s
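Note that `${cluster_domain_suffix}` above is Terraform template interpolation, not Butane syntax: the Butane config is rendered by Terraform before being handed to the `ct_config` data source. A minimal sketch of that rendering step, with the file path and values assumed for illustration:

```tf
data "ct_config" "controller_example" {
  # templatefile() substitutes into `clusterDomain: ${cluster_domain_suffix}`
  content = templatefile("${path.module}/butane/controller.yaml", {
    cluster_domain_suffix  = "cluster.local"
    cluster_dns_service_ip = cidrhost("10.3.0.0/16", 10) # "10.3.0.10"
  })
  strict = true
}
```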
@@ -20,8 +20,10 @@ resource "aws_instance" "controllers" {
   tags = {
     Name = "${var.cluster_name}-controller-${count.index}"
   }
 
   instance_type = var.controller_type
   ami           = var.controller_arch == "arm64" ? data.aws_ami.fedora-coreos-arm[0].image_id : data.aws_ami.fedora-coreos.image_id
+  user_data     = data.ct_config.controllers.*.rendered[count.index]
+
   # storage
   root_block_device {
@@ -29,9 +31,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.controller_disk_size
     iops        = var.controller_disk_iops
     encrypted   = true
-    tags = {
-      Name = "${var.cluster_name}-controller-${count.index}"
-    }
+    tags        = {}
   }
 
   # network
@@ -39,10 +39,6 @@ resource "aws_instance" "controllers" {
   subnet_id              = element(aws_subnet.public.*.id, count.index)
   vpc_security_group_ids = [aws_security_group.controller.id]
 
-  # boot
-  user_data = data.ct_config.controllers.*.rendered[count.index]
-
-  # cost
   credit_specification {
     cpu_credits = var.controller_cpu_credits
   }
@@ -69,6 +65,7 @@ data "ct_config" "controllers" {
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
   })
   strict   = true
   snippets = var.controller_snippets
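`cidrhost(var.service_cidr, 10)` above reserves the tenth address of the service CIDR for cluster DNS; with the default `10.3.0.0/16` that evaluates to `10.3.0.10`. A quick illustration:

```tf
locals {
  service_cidr = "10.3.0.0/16" # module default

  # cidrhost(prefix, n) returns the nth address within the prefix
  cluster_dns_service_ip = cidrhost(local.service_cidr, 10) # "10.3.0.10"
}
```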
@@ -47,25 +47,17 @@ resource "aws_route" "egress-ipv6" {
 resource "aws_subnet" "public" {
   count = length(data.aws_availability_zones.all.names)
 
-  tags = {
-    "Name" = "${var.cluster_name}-public-${count.index}"
-  }
   vpc_id            = aws_vpc.network.id
   availability_zone = data.aws_availability_zones.all.names[count.index]
 
-  # IPv4 and IPv6 CIDR blocks
   cidr_block      = cidrsubnet(var.host_cidr, 4, count.index)
   ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
 
-  # Assign IPv4 and IPv6 addresses to instances
   map_public_ip_on_launch         = true
   assign_ipv6_address_on_creation = true
 
-  # Hostnames assigned to instances
-  # resource-name: <ec2-instance-id>.region.compute.internal
-  private_dns_hostname_type_on_launch            = "resource-name"
-  enable_resource_name_dns_a_record_on_launch    = true
-  enable_resource_name_dns_aaaa_record_on_launch = true
+  tags = {
+    "Name" = "${var.cluster_name}-public-${count.index}"
+  }
 }
 
 resource "aws_route_table_association" "public" {
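The `cidrsubnet` calls above carve one subnet per availability zone out of the host CIDR by extending its prefix by 4 bits. Assuming a `/16` host CIDR (the value here is illustrative), each zone receives a `/20`:

```tf
locals {
  host_cidr = "10.0.0.0/16" # assumed value for illustration

  # extending /16 by 4 bits yields /20 subnets; the index picks which one
  subnet_az0 = cidrsubnet(local.host_cidr, 4, 0) # "10.0.0.0/20"
  subnet_az1 = cidrsubnet(local.host_cidr, 4, 1) # "10.0.16.0/20"
  subnet_az2 = cidrsubnet(local.host_cidr, 4, 2) # "10.0.32.0/20"
}
```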
@@ -164,12 +164,32 @@ EOD
   default     = "10.3.0.0/16"
 }
 
+variable "enable_reporting" {
+  type        = bool
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = false
+}
+
+variable "enable_aggregation" {
+  type        = bool
+  description = "Enable the Kubernetes Aggregation Layer"
+  default     = true
+}
+
 variable "worker_node_labels" {
   type        = list(string)
   description = "List of initial worker node labels"
   default     = []
 }
 
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+  type        = string
+  description = "Queries for domains with the suffix will be answered by CoreDNS. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
+  default     = "cluster.local"
+}
+
 # advanced
 
 variable "controller_arch" {
@@ -6,11 +6,9 @@ module "workers" {
   vpc_id          = aws_vpc.network.id
   subnet_ids      = aws_subnet.public.*.id
   security_groups = [aws_security_group.worker.id]
 
-  # instances
-  os_stream     = var.os_stream
   worker_count  = var.worker_count
   instance_type = var.worker_type
+  os_stream     = var.os_stream
   arch          = var.worker_arch
   disk_type     = var.worker_disk_type
   disk_size     = var.worker_disk_size
@@ -23,6 +21,7 @@ module "workers" {
   kubeconfig         = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key = var.ssh_authorized_key
   service_cidr       = var.service_cidr
+  cluster_domain_suffix = var.cluster_domain_suffix
   snippets           = var.worker_snippets
   node_labels        = var.worker_node_labels
 }
@@ -29,7 +29,7 @@ systemd:
           After=afterburn.service
           Wants=rpc-statd.service
           [Service]
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           EnvironmentFile=/run/metadata/afterburn
           ExecStartPre=/bin/mkdir -p /etc/cni/net.d
           ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -104,7 +104,7 @@ storage:
           cgroupDriver: systemd
           clusterDNS:
             - ${cluster_dns_service_ip}
-          clusterDomain: cluster.local
+          clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
           shutdownGracePeriod: 45s
@@ -108,6 +108,12 @@ EOD
   default     = "10.3.0.0/16"
 }
 
+variable "cluster_domain_suffix" {
+  type        = string
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
+  default     = "cluster.local"
+}
+
 variable "node_labels" {
   type        = list(string)
   description = "List of initial node labels"
@@ -120,14 +126,15 @@ variable "node_taints" {
   default     = []
 }
 
-# advanced
+# unofficial, undocumented, unsupported
 
 variable "arch" {
   type        = string
   description = "Container architecture (amd64 or arm64)"
   default     = "amd64"
 
   validation {
-    condition     = contains(["amd64", "arm64"], var.arch)
+    condition     = var.arch == "amd64" || var.arch == "arm64"
     error_message = "The arch must be amd64 or arm64."
   }
 }
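Both forms of the validation above accept exactly `amd64` or `arm64`; `contains(list, value)` simply reads better than a chain of `==` comparisons and scales if the allowed set ever grows. A tiny illustration of its semantics:

```tf
locals {
  # contains(list, value) is true iff value is an element of list
  arch_ok  = contains(["amd64", "arm64"], "arm64") # true
  arch_bad = contains(["amd64", "arm64"], "i386")  # false
}
```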
@@ -6,11 +6,13 @@ resource "aws_autoscaling_group" "workers" {
   desired_capacity = var.worker_count
   min_size         = var.worker_count
   max_size         = var.worker_count + 2
+  default_cooldown          = 30
+  health_check_grace_period = 30
 
   # network
   vpc_zone_identifier = var.subnet_ids
 
-  # instance template
+  # template
   launch_template {
     id      = aws_launch_template.worker.id
     version = aws_launch_template.worker.latest_version
@@ -30,11 +32,6 @@ resource "aws_autoscaling_group" "workers" {
       min_healthy_percentage = 90
     }
   }
-  # Grace period before checking new instance's health
-  health_check_grace_period = 30
-  # Cooldown period between scaling activities
-  default_cooldown = 30
 
   lifecycle {
     # override the default destroy and replace update behavior
@@ -59,6 +56,11 @@ resource "aws_launch_template" "worker" {
   name_prefix   = "${var.name}-worker"
   image_id      = local.ami_id
   instance_type = var.instance_type
+  monitoring {
+    enabled = false
+  }
+
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
   ebs_optimized = true
@@ -74,26 +76,14 @@ resource "aws_launch_template" "worker" {
   }
 
   # network
-  network_interfaces {
-    associate_public_ip_address = true
-    security_groups             = var.security_groups
-  }
-
-  # boot
-  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
+  vpc_security_group_ids = var.security_groups
 
   # metadata
   metadata_options {
     http_tokens = "optional"
   }
-  monitoring {
-    enabled = false
-  }
 
-  # cost
-  credit_specification {
-    cpu_credits = var.cpu_credits
-  }
+  # spot
   dynamic "instance_market_options" {
     for_each = var.spot_price > 0 ? [1] : []
     content {
@@ -104,6 +94,10 @@ resource "aws_launch_template" "worker" {
     }
   }
 
+  credit_specification {
+    cpu_credits = var.cpu_credits
+  }
+
   lifecycle {
     // Override the default destroy and replace update behavior
     create_before_destroy = true
@@ -117,6 +111,7 @@ data "ct_config" "worker" {
   kubeconfig             = indent(10, var.kubeconfig)
   ssh_authorized_key     = var.ssh_authorized_key
   cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+  cluster_domain_suffix  = var.cluster_domain_suffix
   node_labels            = join(",", var.node_labels)
   node_taints            = join(",", var.node_taints)
 })
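The `dynamic "instance_market_options"` block above is the idiom for an optional nested block: iterating over an empty list renders nothing, while a one-element list renders the block once. A stripped-down sketch of the same pattern (resource and values are illustrative):

```tf
variable "spot_price" {
  type    = number
  default = 0 # 0 means on-demand; > 0 requests spot capacity at that ceiling
}

resource "aws_launch_template" "example" {
  name_prefix = "example-"

  # emitted once when spot_price > 0, omitted entirely otherwise
  dynamic "instance_market_options" {
    for_each = var.spot_price > 0 ? [1] : []
    content {
      market_type = "spot"
      spot_options {
        max_price = var.spot_price
      }
    }
  }
}
```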
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -9,6 +9,9 @@ module "bootstrap" {
   network_mtu  = var.network_mtu
   pod_cidr     = var.pod_cidr
   service_cidr = var.service_cidr
+  cluster_domain_suffix = var.cluster_domain_suffix
+  enable_reporting      = var.enable_reporting
+  enable_aggregation    = var.enable_aggregation
   daemonset_tolerations = var.daemonset_tolerations
   components            = var.components
 }
@@ -58,7 +58,7 @@ systemd:
           After=coreos-metadata.service
           Wants=rpc-statd.service
           [Service]
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           EnvironmentFile=/run/metadata/coreos
           ExecStartPre=/bin/mkdir -p /etc/cni/net.d
           ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -109,7 +109,7 @@ systemd:
           Type=oneshot
           RemainAfterExit=true
           WorkingDirectory=/opt/bootstrap
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           ExecStart=/usr/bin/docker run \
             -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
             -v /opt/bootstrap/assets:/assets:ro \
@@ -148,7 +148,7 @@ storage:
           cgroupDriver: systemd
           clusterDNS:
             - ${cluster_dns_service_ip}
-          clusterDomain: cluster.local
+          clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
           shutdownGracePeriod: 45s
@@ -20,8 +20,11 @@ resource "aws_instance" "controllers" {
   tags = {
     Name = "${var.cluster_name}-controller-${count.index}"
   }
 
   instance_type = var.controller_type
 
   ami       = local.ami_id
+  user_data = data.ct_config.controllers.*.rendered[count.index]
+
   # storage
   root_block_device {
@@ -29,9 +32,7 @@ resource "aws_instance" "controllers" {
     volume_size = var.controller_disk_size
     iops        = var.controller_disk_iops
     encrypted   = true
-    tags = {
-      Name = "${var.cluster_name}-controller-${count.index}"
-    }
+    tags        = {}
   }
 
   # network
@@ -39,10 +40,6 @@ resource "aws_instance" "controllers" {
   subnet_id              = element(aws_subnet.public.*.id, count.index)
   vpc_security_group_ids = [aws_security_group.controller.id]
 
-  # boot
-  user_data = data.ct_config.controllers.*.rendered[count.index]
-
-  # cost
   credit_specification {
     cpu_credits = var.controller_cpu_credits
   }
@@ -69,6 +66,7 @@ data "ct_config" "controllers" {
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
   })
   strict   = true
   snippets = var.controller_snippets
@@ -47,25 +47,17 @@ resource "aws_route" "egress-ipv6" {
 resource "aws_subnet" "public" {
   count = length(data.aws_availability_zones.all.names)
 
-  tags = {
-    "Name" = "${var.cluster_name}-public-${count.index}"
-  }
   vpc_id            = aws_vpc.network.id
   availability_zone = data.aws_availability_zones.all.names[count.index]
 
-  # IPv4 and IPv6 CIDR blocks
   cidr_block      = cidrsubnet(var.host_cidr, 4, count.index)
   ipv6_cidr_block = cidrsubnet(aws_vpc.network.ipv6_cidr_block, 8, count.index)
 
-  # Assign IPv4 and IPv6 addresses to instances
   map_public_ip_on_launch         = true
   assign_ipv6_address_on_creation = true
 
-  # Hostnames assigned to instances
-  # resource-name: <ec2-instance-id>.region.compute.internal
-  private_dns_hostname_type_on_launch            = "resource-name"
-  enable_resource_name_dns_a_record_on_launch    = true
-  enable_resource_name_dns_aaaa_record_on_launch = true
+  tags = {
+    "Name" = "${var.cluster_name}-public-${count.index}"
+  }
 }
 
 resource "aws_route_table_association" "public" {
@@ -164,13 +164,31 @@ EOD
   default     = "10.3.0.0/16"
 }
 
+variable "enable_reporting" {
+  type        = bool
+  description = "Enable usage or analytics reporting to upstreams (Calico)"
+  default     = false
+}
+
+variable "enable_aggregation" {
+  type        = bool
+  description = "Enable the Kubernetes Aggregation Layer"
+  default     = true
+}
+
 variable "worker_node_labels" {
   type        = list(string)
   description = "List of initial worker node labels"
   default     = []
 }
 
-# advanced
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+  type        = string
+  description = "Queries for domains with the suffix will be answered by CoreDNS. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
+  default     = "cluster.local"
+}
 
 variable "controller_arch" {
   type = string
@@ -192,6 +210,7 @@ variable "worker_arch" {
   }
 }
 
+
 variable "daemonset_tolerations" {
   type        = list(string)
   description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
@@ -6,11 +6,9 @@ module "workers" {
   vpc_id          = aws_vpc.network.id
   subnet_ids      = aws_subnet.public.*.id
   security_groups = [aws_security_group.worker.id]
 
-  # instances
-  os_image      = var.os_image
   worker_count  = var.worker_count
   instance_type = var.worker_type
+  os_image      = var.os_image
   arch          = var.worker_arch
   disk_type     = var.worker_disk_type
   disk_size     = var.worker_disk_size
@@ -22,6 +20,7 @@ module "workers" {
   kubeconfig         = module.bootstrap.kubeconfig-kubelet
   ssh_authorized_key = var.ssh_authorized_key
   service_cidr       = var.service_cidr
+  cluster_domain_suffix = var.cluster_domain_suffix
   snippets           = var.worker_snippets
   node_labels        = var.worker_node_labels
 }
@@ -30,7 +30,7 @@ systemd:
           After=coreos-metadata.service
           Wants=rpc-statd.service
           [Service]
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           EnvironmentFile=/run/metadata/coreos
           ExecStartPre=/bin/mkdir -p /etc/cni/net.d
           ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
@@ -103,7 +103,7 @@ storage:
           cgroupDriver: systemd
           clusterDNS:
             - ${cluster_dns_service_ip}
-          clusterDomain: cluster.local
+          clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
           shutdownGracePeriod: 45s
@@ -108,6 +108,12 @@ EOD
   default     = "10.3.0.0/16"
 }
 
+variable "cluster_domain_suffix" {
+  type        = string
+  description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local)"
+  default     = "cluster.local"
+}
+
 variable "node_labels" {
   type        = list(string)
   description = "List of initial node labels"
@@ -128,7 +134,7 @@ variable "arch" {
   default = "amd64"
 
   validation {
-    condition     = contains(["amd64", "arm64"], var.arch)
+    condition     = var.arch == "amd64" || var.arch == "arm64"
     error_message = "The arch must be amd64 or arm64."
   }
 }
@@ -6,11 +6,13 @@ resource "aws_autoscaling_group" "workers" {
   desired_capacity = var.worker_count
   min_size         = var.worker_count
   max_size         = var.worker_count + 2
+  default_cooldown          = 30
+  health_check_grace_period = 30
 
   # network
   vpc_zone_identifier = var.subnet_ids
 
-  # instance template
+  # template
   launch_template {
     id      = aws_launch_template.worker.id
     version = aws_launch_template.worker.latest_version
@@ -30,10 +32,6 @@ resource "aws_autoscaling_group" "workers" {
      min_healthy_percentage = 90
     }
   }
-  # Grace period before checking new instance's health
-  health_check_grace_period = 30
-  # Cooldown period between scaling activities
-  default_cooldown = 30
 
   lifecycle {
     # override the default destroy and replace update behavior
@@ -58,6 +56,11 @@ resource "aws_launch_template" "worker" {
   name_prefix   = "${var.name}-worker"
   image_id      = local.ami_id
   instance_type = var.instance_type
+  monitoring {
+    enabled = false
+  }
+
+  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
 
   # storage
   ebs_optimized = true
@@ -73,26 +76,14 @@ resource "aws_launch_template" "worker" {
   }
 
   # network
-  network_interfaces {
-    associate_public_ip_address = true
-    security_groups             = var.security_groups
-  }
-
-  # boot
-  user_data = sensitive(base64encode(data.ct_config.worker.rendered))
+  vpc_security_group_ids = var.security_groups
 
   # metadata
   metadata_options {
     http_tokens = "optional"
   }
-  monitoring {
-    enabled = false
-  }
 
-  # cost
-  credit_specification {
-    cpu_credits = var.cpu_credits
-  }
+  # spot
   dynamic "instance_market_options" {
     for_each = var.spot_price > 0 ? [1] : []
     content {
@@ -103,6 +94,10 @@ resource "aws_launch_template" "worker" {
     }
   }
 
+  credit_specification {
+    cpu_credits = var.cpu_credits
+  }
+
   lifecycle {
     // Override the default destroy and replace update behavior
     create_before_destroy = true
@@ -116,6 +111,7 @@ data "ct_config" "worker" {
   kubeconfig             = indent(10, var.kubeconfig)
   ssh_authorized_key     = var.ssh_authorized_key
   cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+  cluster_domain_suffix  = var.cluster_domain_suffix
   node_labels            = join(",", var.node_labels)
   node_taints            = join(",", var.node_taints)
 })
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]
@@ -14,6 +14,9 @@ module "bootstrap" {
 
   pod_cidr     = var.pod_cidr
   service_cidr = var.service_cidr
+  cluster_domain_suffix = var.cluster_domain_suffix
+  enable_reporting      = var.enable_reporting
+  enable_aggregation    = var.enable_aggregation
   daemonset_tolerations = var.daemonset_tolerations
   components            = var.components
 }
@@ -54,7 +54,7 @@ systemd:
           Description=Kubelet (System Container)
           Wants=rpc-statd.service
           [Service]
-          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+          Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
           ExecStartPre=/bin/mkdir -p /etc/cni/net.d
           ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
           ExecStartPre=/bin/mkdir -p /opt/cni/bin
@@ -111,7 +111,7 @@ systemd:
             --volume /opt/bootstrap/assets:/assets:ro,Z \
             --volume /opt/bootstrap/apply:/apply:ro,Z \
             --entrypoint=/apply \
-            quay.io/poseidon/kubelet:v1.31.3
+            quay.io/poseidon/kubelet:v1.30.3
           ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
           ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:
@@ -144,7 +144,7 @@ storage:
           cgroupDriver: systemd
           clusterDNS:
             - ${cluster_dns_service_ip}
-          clusterDomain: cluster.local
+          clusterDomain: ${cluster_domain_suffix}
           healthzPort: 0
           rotateCertificates: true
           shutdownGracePeriod: 45s
@@ -163,6 +163,7 @@ data "ct_config" "controllers" {
     kubeconfig             = indent(10, module.bootstrap.kubeconfig-kubelet)
     ssh_authorized_key     = var.ssh_authorized_key
     cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+    cluster_domain_suffix  = var.cluster_domain_suffix
   })
   strict   = true
   snippets = var.controller_snippets
@ -27,6 +27,7 @@ variable "os_image" {
|
|||||||
description = "Fedora CoreOS image for instances"
|
description = "Fedora CoreOS image for instances"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
variable "controller_count" {
|
variable "controller_count" {
|
||||||
type = number
|
type = number
|
||||||
description = "Number of controllers (i.e. masters)"
|
description = "Number of controllers (i.e. masters)"
|
||||||
@ -144,13 +145,31 @@ EOD
|
|||||||
default = "10.3.0.0/16"
|
default = "10.3.0.0/16"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
variable "enable_reporting" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable usage or analytics reporting to upstreams (Calico)"
|
||||||
|
default = false
|
||||||
|
}
|
||||||
|
|
||||||
|
variable "enable_aggregation" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable the Kubernetes Aggregation Layer"
|
||||||
|
default = true
|
||||||
|
}
|
||||||
|
|
||||||
variable "worker_node_labels" {
|
variable "worker_node_labels" {
|
||||||
type = list(string)
|
type = list(string)
|
||||||
description = "List of initial worker node labels"
|
description = "List of initial worker node labels"
|
||||||
default = []
|
default = []
|
||||||
}
|
}
|
||||||
|
|
||||||
# advanced
|
# unofficial, undocumented, unsupported
|
||||||
|
|
||||||
|
variable "cluster_domain_suffix" {
|
||||||
|
type = string
|
||||||
|
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||||
|
default = "cluster.local"
|
||||||
|
}
|
||||||
|
|
||||||
variable "daemonset_tolerations" {
|
variable "daemonset_tolerations" {
|
||||||
type = list(string)
|
type = list(string)
|
||||||
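Since `cluster_domain_suffix` defaults to `cluster.local`, overriding it changes every in-cluster service FQDN that CoreDNS answers. A hedged sketch of the effect (the suffix value is hypothetical):

```tf
module "example" {
  # ...
  # Services then resolve as <service>.<namespace>.svc.k8s.example.internal
  # rather than <service>.<namespace>.svc.cluster.local
  cluster_domain_suffix = "k8s.example.internal"
}
```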
@@ -3,7 +3,7 @@
 terraform {
 required_version = ">= 0.13.0, < 2.0.0"
 required_providers {
-azurerm = ">= 2.8"
+azurerm = ">= 2.8, < 4.0"
 null = ">= 2.1"
 ct = {
 source = "poseidon/ct"
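The shorthand constraint in this hunk accepts any azurerm 2.x or 3.x release while keeping the configuration off the 4.x major. An equivalent sketch in the explicit `required_providers` form (assuming the default `hashicorp/azurerm` source address):

```tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.8, < 4.0" # any 2.x or 3.x release, never 4.x
    }
  }
}
```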
@@ -9,10 +9,9 @@ module "workers" {
 security_group_id = azurerm_network_security_group.worker.id
 backend_address_pool_ids = local.backend_address_pool_ids
 
-# instances
-os_image = var.os_image
 worker_count = var.worker_count
 vm_type = var.worker_type
+os_image = var.os_image
 disk_type = var.worker_disk_type
 disk_size = var.worker_disk_size
 ephemeral_disk = var.worker_ephemeral_disk

@@ -23,6 +22,7 @@ module "workers" {
 ssh_authorized_key = var.ssh_authorized_key
 azure_authorized_key = var.azure_authorized_key
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
 snippets = var.worker_snippets
 node_labels = var.worker_node_labels
 }

@@ -26,7 +26,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -99,7 +99,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -120,3 +120,12 @@ variable "node_taints" {
 description = "List of initial node taints"
 default = []
 }
+
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+type = string
+default = "cluster.local"
+}
+

@@ -3,7 +3,7 @@
 terraform {
 required_version = ">= 0.13.0, < 2.0.0"
 required_providers {
-azurerm = ">= 2.8"
+azurerm = ">= 2.8, < 4.0"
 ct = {
 source = "poseidon/ct"
 version = "~> 0.13"

@@ -84,6 +84,7 @@ data "ct_config" "worker" {
 kubeconfig = indent(10, var.kubeconfig)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })
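A note on the `indent(10, ...)` idiom above: Terraform's `indent` pads every line after the first with the given number of spaces, which is what lets a multi-line kubeconfig nest correctly under its key in the Butane YAML template. A standalone illustration:

```tf
output "indent_demo" {
  # Yields "first" unchanged, then "second" preceded by ten spaces, so the
  # value can be spliced into YAML at a ten-space indentation level.
  value = indent(10, "first\nsecond")
}
```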
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -14,6 +14,9 @@ module "bootstrap" {
 
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
+enable_reporting = var.enable_reporting
+enable_aggregation = var.enable_aggregation
 daemonset_tolerations = var.daemonset_tolerations
 components = var.components
 }

@@ -56,7 +56,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -105,7 +105,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \

@@ -144,7 +144,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -185,6 +185,7 @@ data "ct_config" "controllers" {
 kubeconfig = indent(10, module.bootstrap.kubeconfig-kubelet)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 })
 strict = true
 snippets = var.controller_snippets

@@ -34,7 +34,7 @@ resource "azurerm_public_ip" "frontend-ipv4" {
 
 # Static IPv6 address for the load balancer
 resource "azurerm_public_ip" "frontend-ipv6" {
-name = "${var.cluster_name}-frontend-ipv6"
+name = "${var.cluster_name}-ingress-ipv6"
 resource_group_name = azurerm_resource_group.cluster.name
 location = var.location
 ip_version = "IPv6"
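One caveat about the rename in this hunk: `name` forces replacement on `azurerm_public_ip`, so applying it likely destroys and recreates the IPv6 frontend address. A hedged sketch of the renamed resource in context (`sku` and `allocation_method` are assumptions about surrounding code not shown in the diff):

```tf
resource "azurerm_public_ip" "frontend-ipv6" {
  name                = "${var.cluster_name}-ingress-ipv6" # rename => destroy/create
  resource_group_name = azurerm_resource_group.cluster.name
  location            = var.location
  ip_version          = "IPv6"
  sku                 = "Standard" # assumption
  allocation_method   = "Static"   # assumption
}
```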
@@ -150,6 +150,18 @@ EOD
 default = "10.3.0.0/16"
 }
 
+variable "enable_reporting" {
+type = bool
+description = "Enable usage or analytics reporting to upstreams (Calico)"
+default = false
+}
+
+variable "enable_aggregation" {
+type = bool
+description = "Enable the Kubernetes Aggregation Layer"
+default = true
+}
+
 variable "worker_node_labels" {
 type = list(string)
 description = "List of initial worker node labels"

@@ -184,6 +196,14 @@ variable "daemonset_tolerations" {
 default = []
 }
 
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+type = string
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+default = "cluster.local"
+}
+
 variable "components" {
 description = "Configure pre-installed cluster components"
 # Component configs are passed through to terraform-render-bootstrap,

@@ -3,7 +3,7 @@
 terraform {
 required_version = ">= 0.13.0, < 2.0.0"
 required_providers {
-azurerm = ">= 2.8"
+azurerm = ">= 2.8, < 4.0"
 null = ">= 2.1"
 ct = {
 source = "poseidon/ct"

@@ -22,6 +22,7 @@ module "workers" {
 ssh_authorized_key = var.ssh_authorized_key
 azure_authorized_key = var.azure_authorized_key
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
 snippets = var.worker_snippets
 node_labels = var.worker_node_labels
 arch = var.worker_arch

@@ -28,7 +28,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -99,7 +99,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -137,3 +137,12 @@ variable "arch" {
 error_message = "The arch must be amd64 or arm64."
 }
 }
+
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+type = string
+default = "cluster.local"
+}
+

@@ -3,7 +3,7 @@
 terraform {
 required_version = ">= 0.13.0, < 2.0.0"
 required_providers {
-azurerm = ">= 2.8"
+azurerm = ">= 2.8, < 4.0"
 ct = {
 source = "poseidon/ct"
 version = "~> 0.13"

@@ -105,6 +105,7 @@ data "ct_config" "worker" {
 kubeconfig = indent(10, var.kubeconfig)
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
 cluster_name = var.cluster_name
 api_servers = [var.k8s_domain_name]

@@ -10,6 +10,9 @@ module "bootstrap" {
 network_ip_autodetection_method = var.network_ip_autodetection_method
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
+enable_reporting = var.enable_reporting
+enable_aggregation = var.enable_aggregation
 components = var.components
 }
 

@@ -53,7 +53,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -113,7 +113,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=-/usr/bin/podman rm bootstrap
 ExecStart=/usr/bin/podman run --name bootstrap \
 --network host \

@@ -154,7 +154,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -59,6 +59,7 @@ data "ct_config" "controllers" {
 etcd_name = var.controllers.*.name[count.index]
 etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
 cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
+cluster_domain_suffix = var.cluster_domain_suffix
 ssh_authorized_key = var.ssh_authorized_key
 })
 strict = true
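The `formatlist`/`join` expression in this hunk builds etcd's initial-cluster string; the splat expressions pull `name` and `domain` out of each controller object. A worked sketch with hypothetical controllers:

```tf
locals {
  controllers = [
    { name = "node1", domain = "node1.example.com" },
    { name = "node2", domain = "node2.example.com" },
  ]
  # => "node1=https://node1.example.com:2380,node2=https://node2.example.com:2380"
  etcd_initial_cluster = join(",", formatlist(
    "%s=https://%s:2380",
    local.controllers.*.name,
    local.controllers.*.domain,
  ))
}
```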
@@ -139,7 +139,25 @@ variable "kernel_args" {
 default = []
 }
 
-# advanced
+variable "enable_reporting" {
+type = bool
+description = "Enable usage or analytics reporting to upstreams (Calico)"
+default = false
+}
+
+variable "enable_aggregation" {
+type = bool
+description = "Enable the Kubernetes Aggregation Layer"
+default = true
+}
+
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+type = string
+default = "cluster.local"
+}
 
 variable "components" {
 description = "Configure pre-installed cluster components"

@@ -25,7 +25,7 @@ systemd:
 Description=Kubelet (System Container)
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -108,7 +108,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -53,6 +53,7 @@ data "ct_config" "worker" {
 domain_name = var.domain
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })

@@ -103,3 +103,9 @@ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for
 EOD
 default = "10.3.0.0/16"
 }
+
+variable "cluster_domain_suffix" {
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+type = string
+default = "cluster.local"
+}
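The hunk context above cites the service CIDR convention (1st IP for kube-apiserver, 10th for cluster DNS); `cidrhost` is what computes those reservations. A worked sketch with the default CIDR:

```tf
locals {
  service_cidr   = "10.3.0.0/16"
  kube_apiserver = cidrhost(local.service_cidr, 1)  # "10.3.0.1", the 1st IP
  cluster_dns    = cidrhost(local.service_cidr, 10) # "10.3.0.10", the 10th IP
}
```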
@@ -18,6 +18,7 @@ module "workers" {
 kubeconfig = module.bootstrap.kubeconfig-kubelet
 ssh_authorized_key = var.ssh_authorized_key
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
 node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
 snippets = lookup(var.snippets, var.workers[count.index].name, [])

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
 cluster_name = var.cluster_name
 api_servers = [var.k8s_domain_name]

@@ -10,6 +10,9 @@ module "bootstrap" {
 network_ip_autodetection_method = var.network_ip_autodetection_method
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
+enable_reporting = var.enable_reporting
+enable_aggregation = var.enable_aggregation
 components = var.components
 }
 

@@ -64,7 +64,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -114,7 +114,7 @@ systemd:
 Type=oneshot
 RemainAfterExit=true
 WorkingDirectory=/opt/bootstrap
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStart=/usr/bin/docker run \
 -v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
 -v /opt/bootstrap/assets:/assets:ro \

@@ -155,7 +155,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -89,6 +89,7 @@ data "ct_config" "controllers" {
 etcd_name = var.controllers.*.name[count.index]
 etcd_initial_cluster = join(",", formatlist("%s=https://%s:2380", var.controllers.*.name, var.controllers.*.domain))
 cluster_dns_service_ip = module.bootstrap.cluster_dns_service_ip
+cluster_domain_suffix = var.cluster_domain_suffix
 ssh_authorized_key = var.ssh_authorized_key
 })
 strict = true

@@ -150,6 +150,18 @@ variable "kernel_args" {
 default = []
 }
 
+variable "enable_reporting" {
+type = bool
+description = "Enable usage or analytics reporting to upstreams (Calico)"
+default = false
+}
+
+variable "enable_aggregation" {
+type = bool
+description = "Enable the Kubernetes Aggregation Layer"
+default = true
+}
+
 variable "oem_type" {
 type = string
 description = <<EOD

@@ -161,7 +173,13 @@ EOD
 default = ""
 }
 
-# advanced
+# unofficial, undocumented, unsupported
+
+variable "cluster_domain_suffix" {
+type = string
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+default = "cluster.local"
+}
 
 variable "components" {
 description = "Configure pre-installed cluster components"

@@ -36,7 +36,7 @@ systemd:
 After=docker.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
 ExecStartPre=/bin/mkdir -p /opt/cni/bin

@@ -113,7 +113,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -80,6 +80,7 @@ data "ct_config" "worker" {
 domain_name = var.domain
 ssh_authorized_key = var.ssh_authorized_key
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = join(",", var.node_labels)
 node_taints = join(",", var.node_taints)
 })

@@ -120,3 +120,13 @@ The 1st IP will be reserved for kube_apiserver, the 10th IP will be reserved for
 EOD
 default = "10.3.0.0/16"
 }
+
+
+
+variable "cluster_domain_suffix" {
+type = string
+description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
+default = "cluster.local"
+}
+
+

@@ -18,6 +18,7 @@ module "workers" {
 kubeconfig = module.bootstrap.kubeconfig-kubelet
 ssh_authorized_key = var.ssh_authorized_key
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
 node_labels = lookup(var.worker_node_labels, var.workers[count.index].name, [])
 node_taints = lookup(var.worker_node_taints, var.workers[count.index].name, [])
 snippets = lookup(var.snippets, var.workers[count.index].name, [])

@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
 
 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
 
-* Kubernetes v1.31.3 (upstream)
+* Kubernetes v1.30.3 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
+source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
 
 cluster_name = var.cluster_name
 api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -13,6 +13,9 @@ module "bootstrap" {
 
 pod_cidr = var.pod_cidr
 service_cidr = var.service_cidr
+cluster_domain_suffix = var.cluster_domain_suffix
+enable_reporting = var.enable_reporting
+enable_aggregation = var.enable_aggregation
 components = var.components
 }
 

@@ -55,7 +55,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -123,7 +123,7 @@ systemd:
 --volume /opt/bootstrap/assets:/assets:ro,Z \
 --volume /opt/bootstrap/apply:/apply:ro,Z \
 --entrypoint=/apply \
-quay.io/poseidon/kubelet:v1.31.3
+quay.io/poseidon/kubelet:v1.30.3
 ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
 ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

@@ -151,7 +151,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -28,7 +28,7 @@ systemd:
 After=afterburn.service
 Wants=rpc-statd.service
 [Service]
-Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
+Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
 EnvironmentFile=/run/metadata/afterburn
 ExecStartPre=/bin/mkdir -p /etc/cni/net.d
 ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -104,7 +104,7 @@ storage:
 cgroupDriver: systemd
 clusterDNS:
 - ${cluster_dns_service_ip}
-clusterDomain: cluster.local
+clusterDomain: ${cluster_domain_suffix}
 healthzPort: 0
 rotateCertificates: true
 shutdownGracePeriod: 45s

@@ -74,6 +74,7 @@ data "ct_config" "controllers" {
 for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
 ])
 cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
+cluster_domain_suffix = var.cluster_domain_suffix
 })
 strict = true
 snippets = var.controller_snippets
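The `for` expression in this hunk generates one etcd peer URL per controller index. A worked sketch with hypothetical values:

```tf
locals {
  cluster_name     = "nemo"           # hypothetical
  dns_zone         = "do.example.com" # hypothetical
  controller_count = 2
  # => "etcd0=https://nemo-etcd0.do.example.com:2380,etcd1=https://nemo-etcd1.do.example.com:2380"
  etcd_initial_cluster = join(",", [
    for i in range(local.controller_count) :
    "etcd${i}=https://${local.cluster_name}-etcd${i}.${local.dns_zone}:2380"
  ])
}
```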
|
@ -86,7 +86,25 @@ EOD
|
|||||||
default = "10.3.0.0/16"
|
default = "10.3.0.0/16"
|
||||||
}
|
}
|
||||||
|
|
||||||
# advanced
|
variable "enable_reporting" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable usage or analytics reporting to upstreams (Calico)"
|
||||||
|
default = false
|
||||||
|
}
|
||||||
|
|
||||||
|
variable "enable_aggregation" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable the Kubernetes Aggregation Layer"
|
||||||
|
default = true
|
||||||
|
}
|
||||||
|
|
||||||
|
# unofficial, undocumented, unsupported
|
||||||
|
|
||||||
|
variable "cluster_domain_suffix" {
|
||||||
|
type = string
|
||||||
|
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||||
|
default = "cluster.local"
|
||||||
|
}
|
||||||
|
|
||||||
variable "components" {
|
variable "components" {
|
||||||
description = "Configure pre-installed cluster components"
|
description = "Configure pre-installed cluster components"
|
||||||
|
@ -62,6 +62,7 @@ resource "digitalocean_tag" "workers" {
|
|||||||
data "ct_config" "worker" {
|
data "ct_config" "worker" {
|
||||||
content = templatefile("${path.module}/butane/worker.yaml", {
|
content = templatefile("${path.module}/butane/worker.yaml", {
|
||||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||||
|
cluster_domain_suffix = var.cluster_domain_suffix
|
||||||
})
|
})
|
||||||
strict = true
|
strict = true
|
||||||
snippets = var.worker_snippets
|
snippets = var.worker_snippets
|
||||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
|||||||
|
|
||||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||||
|
|
||||||
* Kubernetes v1.31.3 (upstream)
|
* Kubernetes v1.30.3 (upstream)
|
||||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||||
|
@ -1,6 +1,6 @@
|
|||||||
# Kubernetes assets (kubeconfig, manifests)
|
# Kubernetes assets (kubeconfig, manifests)
|
||||||
module "bootstrap" {
|
module "bootstrap" {
|
||||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e6a1c7bccfc45ab299b5f8149bc3840f99b30b2b"
|
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=1609060f4f138f3b3aef74a9e5494e0fe831c423"
|
||||||
|
|
||||||
cluster_name = var.cluster_name
|
cluster_name = var.cluster_name
|
||||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||||
@ -13,6 +13,9 @@ module "bootstrap" {
|
|||||||
|
|
||||||
pod_cidr = var.pod_cidr
|
pod_cidr = var.pod_cidr
|
||||||
service_cidr = var.service_cidr
|
service_cidr = var.service_cidr
|
||||||
|
cluster_domain_suffix = var.cluster_domain_suffix
|
||||||
|
enable_reporting = var.enable_reporting
|
||||||
|
enable_aggregation = var.enable_aggregation
|
||||||
components = var.components
|
components = var.components
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -66,7 +66,7 @@ systemd:
|
|||||||
After=coreos-metadata.service
|
After=coreos-metadata.service
|
||||||
Wants=rpc-statd.service
|
Wants=rpc-statd.service
|
||||||
[Service]
|
[Service]
|
||||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
|
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
|
||||||
EnvironmentFile=/run/metadata/coreos
|
EnvironmentFile=/run/metadata/coreos
|
||||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||||
@ -117,7 +117,7 @@ systemd:
|
|||||||
Type=oneshot
|
Type=oneshot
|
||||||
RemainAfterExit=true
|
RemainAfterExit=true
|
||||||
WorkingDirectory=/opt/bootstrap
|
WorkingDirectory=/opt/bootstrap
|
||||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
|
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
|
||||||
ExecStart=/usr/bin/docker run \
|
ExecStart=/usr/bin/docker run \
|
||||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||||
-v /opt/bootstrap/assets:/assets:ro \
|
-v /opt/bootstrap/assets:/assets:ro \
|
||||||
@ -153,7 +153,7 @@ storage:
|
|||||||
cgroupDriver: systemd
|
cgroupDriver: systemd
|
||||||
clusterDNS:
|
clusterDNS:
|
||||||
- ${cluster_dns_service_ip}
|
- ${cluster_dns_service_ip}
|
||||||
clusterDomain: cluster.local
|
clusterDomain: ${cluster_domain_suffix}
|
||||||
healthzPort: 0
|
healthzPort: 0
|
||||||
rotateCertificates: true
|
rotateCertificates: true
|
||||||
shutdownGracePeriod: 45s
|
shutdownGracePeriod: 45s
|
||||||
|
@ -38,7 +38,7 @@ systemd:
|
|||||||
After=coreos-metadata.service
|
After=coreos-metadata.service
|
||||||
Wants=rpc-statd.service
|
Wants=rpc-statd.service
|
||||||
[Service]
|
[Service]
|
||||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.31.3
|
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.3
|
||||||
EnvironmentFile=/run/metadata/coreos
|
EnvironmentFile=/run/metadata/coreos
|
||||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||||
@ -103,7 +103,7 @@ storage:
|
|||||||
cgroupDriver: systemd
|
cgroupDriver: systemd
|
||||||
clusterDNS:
|
clusterDNS:
|
||||||
- ${cluster_dns_service_ip}
|
- ${cluster_dns_service_ip}
|
||||||
clusterDomain: cluster.local
|
clusterDomain: ${cluster_domain_suffix}
|
||||||
healthzPort: 0
|
healthzPort: 0
|
||||||
rotateCertificates: true
|
rotateCertificates: true
|
||||||
shutdownGracePeriod: 45s
|
shutdownGracePeriod: 45s
|
||||||
|
@ -79,6 +79,7 @@ data "ct_config" "controllers" {
|
|||||||
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
|
for i in range(var.controller_count) : "etcd${i}=https://${var.cluster_name}-etcd${i}.${var.dns_zone}:2380"
|
||||||
])
|
])
|
||||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||||
|
cluster_domain_suffix = var.cluster_domain_suffix
|
||||||
})
|
})
|
||||||
strict = true
|
strict = true
|
||||||
snippets = var.controller_snippets
|
snippets = var.controller_snippets
|
||||||
|
@ -86,7 +86,25 @@ EOD
|
|||||||
default = "10.3.0.0/16"
|
default = "10.3.0.0/16"
|
||||||
}
|
}
|
||||||
|
|
||||||
# advanced
|
variable "enable_reporting" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable usage or analytics reporting to upstreams (Calico)"
|
||||||
|
default = false
|
||||||
|
}
|
||||||
|
|
||||||
|
variable "enable_aggregation" {
|
||||||
|
type = bool
|
||||||
|
description = "Enable the Kubernetes Aggregation Layer"
|
||||||
|
default = true
|
||||||
|
}
|
||||||
|
|
||||||
|
# unofficial, undocumented, unsupported
|
||||||
|
|
||||||
|
variable "cluster_domain_suffix" {
|
||||||
|
type = string
|
||||||
|
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||||
|
default = "cluster.local"
|
||||||
|
}
|
||||||
|
|
||||||
variable "components" {
|
variable "components" {
|
||||||
description = "Configure pre-installed cluster components"
|
description = "Configure pre-installed cluster components"
|
||||||
|
@ -60,6 +60,7 @@ resource "digitalocean_tag" "workers" {
|
|||||||
data "ct_config" "worker" {
|
data "ct_config" "worker" {
|
||||||
content = templatefile("${path.module}/butane/worker.yaml", {
|
content = templatefile("${path.module}/butane/worker.yaml", {
|
||||||
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
cluster_dns_service_ip = cidrhost(var.service_cidr, 10)
|
||||||
|
cluster_domain_suffix = var.cluster_domain_suffix
|
||||||
})
|
})
|
||||||
strict = true
|
strict = true
|
||||||
snippets = var.worker_snippets
|
snippets = var.worker_snippets
|
||||||
|
@ -1,11 +1,13 @@
|
|||||||
# ARM64
|
# ARM64
|
||||||
|
|
||||||
Typhoon supports Kubernetes clusters with ARM64 controller or worker nodes on several platforms:
|
Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybdrid (mixed-arch) cluster.
|
||||||
|
|
||||||
|
Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
|
||||||
|
|
||||||
* AWS with Fedora CoreOS or Flatcar Linux
|
* AWS with Fedora CoreOS or Flatcar Linux
|
||||||
* Azure with Flatcar Linux
|
* Azure with Flatcar Linux
|
||||||
|
|
||||||
## AWS
|
## Cluster
|
||||||
|
|
||||||
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
|
Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.
|
||||||
|
|
||||||
@ -13,23 +15,24 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
|
|||||||
|
|
||||||
```tf
|
```tf
|
||||||
module "gravitas" {
|
module "gravitas" {
|
||||||
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"
|
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.3"
|
||||||
|
|
||||||
# AWS
|
# AWS
|
||||||
cluster_name = "gravitas"
|
cluster_name = "gravitas"
|
||||||
dns_zone = "aws.example.com"
|
dns_zone = "aws.example.com"
|
||||||
dns_zone_id = "Z3PAABBCFAKEC0"
|
dns_zone_id = "Z3PAABBCFAKEC0"
|
||||||
|
|
||||||
# instances
|
|
||||||
controller_type = "t4g.small"
|
|
||||||
controller_arch = "arm64"
|
|
||||||
worker_count = 2
|
|
||||||
worker_type = "t4g.small"
|
|
||||||
worker_arch = "arm64"
|
|
||||||
worker_price = "0.0168"
|
|
||||||
|
|
||||||
# configuration
|
# configuration
|
||||||
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
|
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
|
||||||
|
|
||||||
|
# optional
|
||||||
|
arch = "arm64"
|
||||||
|
networking = "cilium"
|
||||||
|
worker_count = 2
|
||||||
|
worker_price = "0.0168"
|
||||||
|
|
||||||
|
controller_type = "t4g.small"
|
||||||
|
worker_type = "t4g.small"
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -37,23 +40,24 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo
|
|||||||
|
|
||||||
```tf
|
```tf
|
||||||
module "gravitas" {
|
module "gravitas" {
|
||||||
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"
|
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.3"
|
||||||
|
|
||||||
# AWS
|
# AWS
|
||||||
cluster_name = "gravitas"
|
cluster_name = "gravitas"
|
||||||
dns_zone = "aws.example.com"
|
dns_zone = "aws.example.com"
|
||||||
dns_zone_id = "Z3PAABBCFAKEC0"
|
dns_zone_id = "Z3PAABBCFAKEC0"
|
||||||
|
|
||||||
# instances
|
|
||||||
controller_type = "t4g.small"
|
|
||||||
controller_arch = "arm64"
|
|
||||||
worker_count = 2
|
|
||||||
worker_type = "t4g.small"
|
|
||||||
worker_arch = "arm64"
|
|
||||||
worker_price = "0.0168"
|
|
||||||
|
|
||||||
# configuration
|
# configuration
|
||||||
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
|
ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
|
||||||
|
|
||||||
|
# optional
|
||||||
|
arch = "arm64"
|
||||||
|
networking = "cilium"
|
||||||
|
worker_count = 2
|
||||||
|
worker_price = "0.0168"
|
||||||
|
|
||||||
|
controller_type = "t4g.small"
|
||||||
|
worker_type = "t4g.small"
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||

Verify the cluster has only arm64 (`aarch64`) nodes.

```
$ kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
ip-10-0-21-119   Ready    <none>   77s   v1.31.3   10.0.21.119   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-32-166   Ready    <none>   80s   v1.31.3   10.0.32.166   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-5-79     Ready    <none>   77s   v1.31.3   10.0.5.79     <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
```

## Azure

Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be `arm64` compatible and use `arm64` (or multi-arch) container images.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"

  # Azure
  cluster_name   = "ramius"
  location       = "centralus"
  dns_zone       = "azure.example.com"
  dns_zone_group = "example-group"

  # instances
  controller_arch = "arm64"
  controller_type = "Standard_B2pls_v5"
  worker_count    = 2
  worker_arch     = "arm64"
  worker_type     = "Standard_D2pls_v5"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```

## Hybrid

Create a hybrid/mixed-arch cluster by defining a cluster in which [worker pool(s)](worker-pools.md#aws) use a different instance type architecture than the controllers or other workers. Taints are added to aid in scheduling.

Here's an AWS example,

=== "FCOS Cluster"

    ```tf
    module "gravitas" {
      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

      # AWS
      cluster_name = "gravitas"
      dns_zone     = "aws.example.com"
      dns_zone_id  = "Z3PAABBCFAKEC0"

      # instances
      worker_count = 2
      worker_arch  = "arm64"
      worker_type  = "t4g.medium"
      worker_price = "0.021"

      # configuration
      daemonset_tolerations = ["arch"] # important
      networking            = "cilium"
      ssh_authorized_key    = "ssh-ed25519 AAAAB3Nz..."
    }
    ```

=== "Flatcar Cluster"

    ```tf
    module "gravitas" {
      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"

      # AWS
      cluster_name = "gravitas"
      dns_zone     = "aws.example.com"
      dns_zone_id  = "Z3PAABBCFAKEC0"

      # instances
      worker_count = 2
      worker_arch  = "arm64"
      worker_type  = "t4g.medium"
      worker_price = "0.021"

      # configuration
      daemonset_tolerations = ["arch"] # important
      networking            = "cilium"
      ssh_authorized_key    = "ssh-ed25519 AAAAB3Nz..."
    }
    ```

=== "FCOS ARM64 Workers"

    ```tf
    module "gravitas-arm64" {
      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"

      # AWS
      vpc_id          = module.gravitas.vpc_id
      subnet_ids      = module.gravitas.subnet_ids
      security_groups = module.gravitas.worker_security_groups

      # instances
      arch          = "arm64"
      instance_type = "t4g.small"
      spot_price    = "0.0168"

      # configuration
      name               = "gravitas-arm64"
      kubeconfig         = module.gravitas.kubeconfig
      node_taints        = ["arch=arm64:NoSchedule"]
      ssh_authorized_key = var.ssh_authorized_key
    }
    ```

=== "Flatcar ARM64 Workers"

    ```tf
    module "gravitas-arm64" {
      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"

      # AWS
      vpc_id          = module.gravitas.vpc_id
      subnet_ids      = module.gravitas.subnet_ids
      security_groups = module.gravitas.worker_security_groups

      # instances
      arch          = "arm64"
      instance_type = "t4g.small"
      spot_price    = "0.0168"

      # configuration
      name               = "gravitas-arm64"
      kubeconfig         = module.gravitas.kubeconfig
      node_taints        = ["arch=arm64:NoSchedule"]
      ssh_authorized_key = var.ssh_authorized_key
    }
    ```

Verify amd64 (`x86_64`) and arm64 (`aarch64`) nodes are present.

```
$ kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                             KERNEL-VERSION            CONTAINER-RUNTIME
ip-10-0-1-73       Ready    <none>   111m   v1.31.3   10.0.1.73     <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64    containerd://1.5.8
ip-10-0-22-79...   Ready    <none>   111m   v1.31.3   10.0.22.79    <none>        Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)   5.10.84-flatcar           containerd://1.5.8
ip-10-0-24-130     Ready    <none>   111m   v1.31.3   10.0.24.130   <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64    containerd://1.5.8
ip-10-0-39-19      Ready    <none>   111m   v1.31.3   10.0.39.19    <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64    containerd://1.5.8
```

Add custom initial worker node labels to default workers or worker pool nodes to …

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"

  # Google Cloud
  cluster_name = "yavin"
  # ...
}
```

```tf
module "yavin-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"

  # Google Cloud
  cluster_name = "yavin"
  # ...
}
```

Add custom initial taints on worker pool nodes to indicate a node is unique and …

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"

  # Google Cloud
  cluster_name = "yavin"
  # ...
}
```

```tf
module "yavin-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"

  # Google Cloud
  cluster_name = "yavin"
  # ...
}
```

Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

=== "Fedora CoreOS"

    ```tf
    module "tempest-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.31.3"

      # AWS
      vpc_id = module.tempest.vpc_id
      # ...
    }
    ```

=== "Flatcar Linux"

    ```tf
    module "tempest-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.31.3"

      # AWS
      vpc_id = module.tempest.vpc_id
      # ...
    }
    ```

Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluster).

=== "Fedora CoreOS"

    ```tf
    module "ramius-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.31.3"

      # Azure
      location = module.ramius.location
      # ...
    }
    ```

=== "Flatcar Linux"

    ```tf
    module "ramius-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.31.3"

      # Azure
      location = module.ramius.location
      # ...
    }
    ```

Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-cloud.md#cluster).

=== "Fedora CoreOS"

    ```tf
    module "yavin-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.31.3"

      # Google Cloud
      region = "europe-west2"
      # ...
    }
    ```

=== "Flatcar Linux"

    ```tf
    module "yavin-worker-pool" {
      source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.31.3"

      # Google Cloud
      region = "europe-west2"
      # ...
    }
    ```

Verify a managed instance group of workers joins the cluster within a few minutes.

```
$ kubectl get nodes
NAME                                           STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal      Ready    6m    v1.31.3
yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.31.3
yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.31.3
yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.31.3
yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.31.3
```

### Variables

# AWS

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.31.3"

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKEC0"

  # instances
  worker_count = 2
  worker_type  = "t3.small"

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-tempest" {
  content         = module.tempest.kubeconfig-admin
  filename        = "/home/user/.kube/configs/tempest-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-3-155   Ready    <none>   10m   v1.31.3
ip-10-0-26-65   Ready    <none>   10m   v1.31.3
ip-10-0-41-21   Ready    <none>   10m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   cilium-1m5bf                   1/1     Running   0          34m
kube-system   cilium-7jmr1                   1/1     Running   0          34m
kube-system   cilium-bknc8                   1/1     Running   0          34m
kube-system   coredns-1187388186-wx1lg       1/1     Running   0          34m
kube-system   coredns-1187388186-qjnvp       1/1     Running   0          34m
kube-system   kube-apiserver-ip-10-0-3-155   1/1     Running   0          34m
```

Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.

### Optional

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| os_stream | Fedora CoreOS stream for instances | "stable" | "testing", "next" |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_count | Number of workers | 1 | 3 |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
| worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_disk_type | Type of EBS volume | gp3 | io1 |
| worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0 | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
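
For instance, a sketch tuning worker volumes with the disk variables above (the size and IOPS values are illustrative; `# ...` stands for the unchanged cluster arguments shown earlier):

```tf
module "tempest" {
  # ...

  # larger, provisioned-IOPS worker volumes
  worker_disk_size = 100
  worker_disk_type = "io1"
  worker_disk_iops = 4000
}
```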

Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).

!!! warning
    Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.
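
For example, a sketch of that change (assuming your instance types support jumbo frames):

```tf
module "tempest" {
  # ...
  network_mtu = 8981 # jumbo frames for better pod-to-pod bandwidth
}
```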

#### Spot

Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.
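
A sketch of that change (the price is illustrative):

```tf
module "tempest" {
  # ...
  worker_price = "0.10" # maximum spot bid in USD; 0 uses on-demand workers
}
```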

# Azure

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.31.3"

  # Azure
  cluster_name   = "ramius"
  location       = "centralus"
  dns_zone       = "azure.example.com"
  dns_zone_group = "example-group"

  network_cidr = {
    ipv4 = ["10.0.0.0/20"]
  }

  # instances
  os_image     = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-36.20220716.3.1"
  worker_count = 2

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-ramius" {
  content         = module.ramius.kubeconfig-admin
  filename        = "/home/user/.kube/configs/ramius-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
ramius-controller-0    Ready    <none>   24m   v1.31.3
ramius-worker-000001   Ready    <none>   25m   v1.31.3
ramius-worker-000002   Ready    <none>   24m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-7c6fbb4f4b-b6qzx                       1/1     Running   0          26m
kube-system   coredns-7c6fbb4f4b-j2k3d                       1/1     Running   0          26m
kube-system   cilium-1m5bf                                   1/1     Running   0          26m
kube-system   cilium-7jmr1                                   1/1     Running   0          26m
kube-system   cilium-bknc8                                   1/1     Running   0          26m
kube-system   kube-apiserver-ramius-controller-0             1/1     Running   0          26m
kube-system   kube-controller-manager-ramius-controller-0    1/1     Running   0          26m
kube-system   kube-proxy-j4vpq                               1/1     Running   0          26m
```

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Butane snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Butane snippets | [] | [example](/advanced/customization/#usage) |

Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.

!!! warning
    Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.
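
As a sketch, running workers on Spot priority surplus capacity with the `worker_priority` variable from the table above (other arguments unchanged):

```tf
module "ramius" {
  # ...
  worker_priority = "Spot" # workers may be deallocated at any time
}
```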

# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and set up a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernetes`.

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.31.3"

  # bare-metal
  cluster_name = "mercury"
  # ...
}
```

Workers with similar features can be defined inline using the `workers` field as shown above.

```tf
module "mercury-node1" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"

  # bare-metal
  cluster_name = "mercury"
  # ...
}
```

```tf
resource "local_file" "kubeconfig-mercury" {
  content         = module.mercury.kubeconfig-admin
  filename        = "/home/user/.kube/configs/mercury-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
node1.example.com   Ready    <none>   10m   v1.31.3
node2.example.com   Ready    <none>   10m   v1.31.3
node3.example.com   Ready    <none>   10m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-6qp7f                       1/1     Running   1          11m
kube-system   cilium-gnjrm                       1/1     Running   0          11m
kube-system   cilium-llbgt                       1/1     Running   0          11m
kube-system   cilium-operator-68d778b448-g744f   1/1     Running   0          11m
kube-system   coredns-1187388186-dj3pd           1/1     Running   0          11m
kube-system   coredns-1187388186-mx9rt           1/1     Running   0          11m
kube-system   kube-apiserver-node1.example.com   1/1     Running   0          11m
```

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| kernel_args | Additional kernel args to provide at PXE boot | [] | ["kvm-intel.nested=1"] |
| worker_node_labels | Map from worker name to list of initial node labels | {} | {"node2" = ["role=special"]} |
| worker_node_taints | Map from worker name to list of initial node taints | {} | {"node2" = ["role=special:NoSchedule"]} |
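
As a sketch, labeling and tainting a particular machine with the variables above (the node name and values are the table's own examples):

```tf
module "mercury" {
  # ...

  # label and taint node2 for special workloads
  worker_node_labels = {
    "node2" = ["role=special"]
  }
  worker_node_taints = {
    "node2" = ["role=special:NoSchedule"]
  }
}
```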

# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubernetes`.

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.31.3"

  # Digital Ocean
  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"

  # instances
  os_image     = data.digitalocean_image.fedora-coreos-31-20200323-3-2.id
  worker_count = 2

  # configuration
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
}
```
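
The `os_image` above references a custom image uploaded to DigitalOcean. A sketch of the data source it assumes (the image name is illustrative):

```tf
data "digitalocean_image" "fedora-coreos-31-20200323-3-2" {
  # name of the Fedora CoreOS image previously uploaded as a custom image
  name = "fedora-coreos-31.20200323.3.2-digitalocean.x86_64.qcow2.gz"
}
```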

In 3-6 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-nemo" {
  content         = module.nemo.kubeconfig-admin
  filename        = "/home/user/.kube/configs/nemo-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
10.132.110.130   Ready    <none>   10m   v1.31.3
10.132.115.81    Ready    <none>   10m   v1.31.3
10.132.124.107   Ready    <none>   10m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-1187388186-ld1j7                    1/1     Running   0          11m
kube-system   coredns-1187388186-rdhf7                    1/1     Running   0          11m
kube-system   cilium-1m5bf                                1/1     Running   0          11m
kube-system   cilium-7jmr1                                1/1     Running   0          11m
kube-system   cilium-bknc8                                1/1     Running   0          11m
kube-system   kube-apiserver-ip-10.132.115.81             1/1     Running   0          11m
kube-system   kube-controller-manager-ip-10.132.115.81    1/1     Running   0          11m
kube-system   kube-proxy-6kxjf                            1/1     Running   0          11m
```

!!! warning
    Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.

# Google Cloud

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kubernetes`.

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.31.3"

  # Google Cloud
  cluster_name  = "yavin"
  # ...
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # instances
  worker_count = 2

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
}
```

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-yavin" {
  content         = module.yavin.kubeconfig-admin
  filename        = "/home/user/.kube/configs/yavin-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES    STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   <none>   Ready    6m    v1.31.3
yavin-worker-jrbf.c.example-com.internal    <none>   Ready    5m    v1.31.3
yavin-worker-mzdm.c.example-com.internal    <none>   Ready    5m    v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   cilium-1cs8z                   1/1     Running   0          6m
kube-system   cilium-d1l5b                   1/1     Running   0          6m
kube-system   cilium-sp9ps                   1/1     Running   0          6m
kube-system   coredns-1187388186-dkh3o       1/1     Running   0          6m
kube-system   coredns-1187388186-zj5dl       1/1     Running   0          6m
kube-system   kube-apiserver-controller-0    1/1     Running   0          6m
```

### Optional

| Name | Description | Default | Example |
|:---------------------|:---------------------------------------------------------------------------|:----------------|:-------------------------------------|
| os_stream | Fedora CoreOS stream for compute instances | "stable" | "stable", "testing", "next" |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| controller_disk_type | Controller disk type | "pd-standard" | "pd-ssd" |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_disk_type | Worker disk type | "pd-standard" | "pd-ssd" |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Butane snippets | [] | [examples](/advanced/customization/) |
| worker_snippets | Worker Butane snippets | [] | [examples](/advanced/customization/) |

#### Preemption

Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
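
A sketch of that change (other arguments unchanged from the example above):

```tf
module "yavin" {
  # ...
  worker_preemptible = "true" # workers may be terminated within 24 hours
}
```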

# AWS

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on AWS with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.31.3"

  # AWS
  cluster_name = "tempest"
  dns_zone     = "aws.example.com"
  dns_zone_id  = "Z3PAABBCFAKEC0"

  # instances
  worker_count = 2
  worker_type  = "t3.small"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-tempest" {
  content         = module.tempest.kubeconfig-admin
  filename        = "/home/user/.kube/configs/tempest-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-3-155   Ready    <none>   10m   v1.31.3
ip-10-0-26-65   Ready    <none>   10m   v1.31.3
ip-10-0-41-21   Ready    <none>   10m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   cilium-1m5bf                   1/1     Running   0          34m
kube-system   cilium-7jmr1                   1/1     Running   0          34m
kube-system   cilium-bknc8                   1/1     Running   0          34m
kube-system   coredns-1187388186-wx1lg       1/1     Running   0          34m
kube-system   coredns-1187388186-qjnvp       1/1     Running   0          34m
kube-system   kube-apiserver-ip-10-0-3-155   1/1     Running   0          34m
```

Reference the DNS zone id with `aws_route53_zone.zone-for-clusters.zone_id`.

### Optional

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| os_image | AMI channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | EC2 instance type for controllers | "t3.small" | See below |
| controller_disk_size | Size of EBS volume in GB | 30 | 100 |
| controller_disk_type | Type of EBS volume | gp3 | io1 |
| controller_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| controller_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_count | Number of workers | 1 | 3 |
| worker_type | EC2 instance type for workers | "t3.small" | See below |
| worker_disk_size | Size of EBS volume in GB | 30 | 100 |
| worker_disk_type | Type of EBS volume | gp3 | io1 |
| worker_disk_iops | IOPS of EBS volume | 3000 | 4000 |
| worker_cpu_credits | Burstable CPU pricing model | null (i.e. auto) | standard, unlimited |
| worker_price | Spot price in USD for worker instances or 0 to use on-demand instances | 0/null | 0.10 |
| worker_target_groups | Target group ARNs to which worker instances should be added | [] | [aws_lb_target_group.app.id] |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
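
For example, a sketch registering workers with an existing load balancer target group (the `app` target group is illustrative):

```tf
module "tempest" {
  # ...
  worker_target_groups = [aws_lb_target_group.app.id] # add workers to an existing target group
}
```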

Check the list of valid [instance types](https://aws.amazon.com/ec2/instance-types/).

!!! warning
    Do not choose a `controller_type` smaller than `t3.small`. Smaller instances are not sufficient for running a controller.

!!! tip "MTU"
    If your EC2 instance type supports [Jumbo frames](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances) (most do), we recommend you change the `network_mtu` to 8981! You will get better pod-to-pod bandwidth.

#### Spot

Add `worker_price = "0.10"` to use spot instance workers (instead of "on-demand") and set a maximum spot price in USD. Clusters can tolerate spot market interruptions fairly well (reschedules pods, but cannot drain) to save money, with the tradeoff that requests for workers may go unfulfilled.

# Azure

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Azure with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.
@ -75,22 +75,22 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.
|
|||||||
|
|
||||||
```tf
|
```tf
|
||||||
module "ramius" {
|
module "ramius" {
|
||||||
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.31.3"
|
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.3"
|
||||||
|
|
||||||
# Azure
|
# Azure
|
||||||
cluster_name = "ramius"
|
cluster_name = "ramius"
|
||||||
location = "centralus"
|
location = "centralus"
|
||||||
dns_zone = "azure.example.com"
|
dns_zone = "azure.example.com"
|
||||||
dns_zone_group = "example-group"
|
dns_zone_group = "example-group"
|
||||||
network_cidr = {
|
|
||||||
ipv4 = ["10.0.0.0/20"]
|
|
||||||
}
|
|
||||||
|
|
||||||
# instances
|
|
||||||
worker_count = 2
|
|
||||||
|
|
||||||
# configuration
|
# configuration
|
||||||
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
|
||||||
|
|
||||||
|
# optional
|
||||||
|
worker_count = 2
|
||||||
|
network_cidr = {
|
||||||
|
ipv4 = ["10.0.0.0/20"]
|
||||||
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-ramius" {
  content         = module.ramius.kubeconfig-admin
  filename        = "/home/user/.kube/configs/ramius-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
ramius-controller-0   Ready   <none>  24m  v1.31.3
ramius-worker-000001  Ready   <none>  25m  v1.31.3
ramius-worker-000002  Ready   <none>  24m  v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE    NAME                                          READY  STATUS   RESTARTS  AGE
kube-system  coredns-7c6fbb4f4b-b6qzx                      1/1    Running  0        26m
kube-system  coredns-7c6fbb4f4b-j2k3d                      1/1    Running  0        26m
kube-system  cilium-1m5bf                                  1/1    Running  0        26m
kube-system  cilium-7jmr1                                  1/1    Running  0        26m
kube-system  cilium-bknc8                                  1/1    Running  0        26m
kube-system  kube-apiserver-ramius-controller-0            1/1    Running  0        26m
kube-system  kube-controller-manager-ramius-controller-0   1/1    Running  0        26m
kube-system  kube-proxy-j4vpq                              1/1    Running  0        26m
```

Reference the DNS zone with `azurerm_dns_zone.clusters.name` and its resource group.

| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| os_image | Channel for a Container Linux derivative | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| controller_disk_type | Managed disk for controllers | Premium_LRS | Standard_LRS |
| controller_disk_size | Managed disk size in GB | 30 | 50 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "Standard_D2as_v5" | See below |
| worker_disk_type | Managed disk for workers | Standard_LRS | Premium_LRS |
| worker_disk_size | Size of the disk in GB | 30 | 100 |
| worker_ephemeral_disk | Use ephemeral local disk instead of managed disk | false | true |
| worker_priority | Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Spot |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/#usage) |
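
For instance, a hedged fragment opting workers into Spot capacity, assuming the `ramius` module above:

```tf
module "ramius" {
  # ...required arguments as above...
  worker_priority = "Spot" # workers may be deallocated at any time
}
```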

Check the list of valid [machine types](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) and their [specs](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general). Use `az vm list-skus` to get the identifier.

!!! warning
    Unlike AWS and GCP, Azure requires its *virtual* networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Instead of each cluster just using `10.0.0.0/16` for instances, each Azure cluster's `network_cidr` must be non-overlapping (e.g. 10.0.0.0/20 for the 1st cluster, 10.0.16.0/20 for the 2nd cluster, etc).
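
For example, a sketch of carving non-overlapping ranges across two clusters (the second cluster module is hypothetical):

```tf
module "ramius" {
  # ...required arguments as above...
  network_cidr = {
    ipv4 = ["10.0.0.0/20"]
  }
}

module "ramius2" {
  # ...same required arguments, distinct names...
  network_cidr = {
    ipv4 = ["10.0.16.0/20"]
  }
}
```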

!!! warning
    Do not choose a `controller_type` smaller than `Standard_B2s`. Smaller instances are not sufficient for running a controller.

# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.31.3 cluster on bare-metal with Flatcar Linux.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernetes`.

```tf
module "mercury" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.31.3"

  # bare-metal
  cluster_name = "mercury"
  # ...
}
```

Workers with similar features can be defined inline using the `workers` field as well.

```tf
module "mercury-node1" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.31.3"

  # bare-metal
  cluster_name = "mercury"
  # ...

  name               = "node2"
  mac                = "52:54:00:b2:2f:86"
  domain             = "node2.example.com"
  kubeconfig         = module.mercury.kubeconfig-admin
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."

  # optional
  snippets       = []
  node_labels    = []
  node_taints    = []
  install_disk   = "/dev/vda"
  cached_install = false
}
```
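
Labels and taints take Kubernetes `key=value` strings. A hypothetical GPU worker sketch, with the module name and values illustrative rather than from this guide:

```tf
module "mercury-node2" {
  # ...same required arguments as above...
  node_labels = ["node.kubernetes.io/gpu=true"] # label applied by the kubelet at registration
  node_taints = ["gpu=true:NoSchedule"]         # repel pods lacking a matching toleration
}
```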

```tf
resource "local_file" "kubeconfig-mercury" {
  content         = module.mercury.kubeconfig-admin
  filename        = "/home/user/.kube/configs/mercury-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME               STATUS  ROLES   AGE  VERSION
node1.example.com  Ready   <none>  10m  v1.31.3
node2.example.com  Ready   <none>  10m  v1.31.3
node3.example.com  Ready   <none>  10m  v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE    NAME                               READY  STATUS   RESTARTS  AGE
kube-system  cilium-6qp7f                       1/1    Running  1        11m
kube-system  cilium-gnjrm                       1/1    Running  0        11m
kube-system  cilium-llbgt                       1/1    Running  0        11m
kube-system  cilium-operator-68d778b448-g744f   1/1    Running  0        11m
kube-system  coredns-1187388186-dj3pd           1/1    Running  0        11m
kube-system  coredns-1187388186-mx9rt           1/1    Running  0        11m
kube-system  kube-apiserver-node1.example.com   1/1    Running  0        11m
```

# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on DigitalOcean with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubernetes`.

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.31.3"

  # Digital Ocean
  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"

  # instances
  os_image     = data.digitalocean_image.flatcar-stable-2303-4-0.id
  worker_count = 2

  # configuration
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
}
```
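
The `os_image` above references a custom Flatcar Linux image uploaded to DigitalOcean. A hedged sketch of the data source it assumes, with an illustrative image name:

```tf
# Hypothetical custom image, uploaded to DigitalOcean beforehand
data "digitalocean_image" "flatcar-stable-2303-4-0" {
  name = "flatcar-stable-2303.4.0.bin.bz2"
}
```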

In 3-6 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-nemo" {
  content         = module.nemo.kubeconfig-admin
  filename        = "/home/user/.kube/configs/nemo-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME             STATUS  ROLES   AGE  VERSION
10.132.110.130   Ready   <none>  10m  v1.31.3
10.132.115.81    Ready   <none>  10m  v1.31.3
10.132.124.107   Ready   <none>  10m  v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE    NAME                                        READY  STATUS   RESTARTS  AGE
kube-system  coredns-1187388186-ld1j7                    1/1    Running  0        11m
kube-system  coredns-1187388186-rdhf7                    1/1    Running  0        11m
kube-system  cilium-1m5bf                                1/1    Running  0        11m
kube-system  cilium-7jmr1                                1/1    Running  0        11m
kube-system  cilium-bknc8                                1/1    Running  0        11m
kube-system  kube-apiserver-ip-10.132.115.81             1/1    Running  0        11m
kube-system  kube-controller-manager-ip-10.132.115.81    1/1    Running  0        11m
kube-system  kube-proxy-6kxjf                            1/1    Running  0        11m
```

# Google Cloud

In this tutorial, we'll create a Kubernetes v1.31.3 cluster on Google Compute Engine with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kubernetes`.

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.31.3"

  # Google Cloud
  cluster_name  = "yavin"
  # ...
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # instances
  worker_count = 2

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
}
```

In 4-8 minutes, the Kubernetes cluster will be ready.

```tf
resource "local_file" "kubeconfig-yavin" {
  content         = module.yavin.kubeconfig-admin
  filename        = "/home/user/.kube/configs/yavin-config"
  file_permission = "0600"
}
```

List nodes in the cluster.

```
$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                       ROLES   STATUS  AGE  VERSION
yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.31.3
yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.31.3
yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.31.3
```

List the pods.

```
$ kubectl get pods --all-namespaces
NAMESPACE    NAME                           READY  STATUS   RESTARTS  AGE
kube-system  cilium-1cs8z                   1/1    Running  0        6m
kube-system  cilium-d1l5b                   1/1    Running  0        6m
kube-system  cilium-sp9ps                   1/1    Running  0        6m
kube-system  coredns-1187388186-dkh3o       1/1    Running  0        6m
kube-system  coredns-1187388186-zj5dl       1/1    Running  0        6m
kube-system  kube-apiserver-controller-0    1/1    Running  0        6m
```

### Optional

| Name | Description | Default | Example |
|:---------------------|:---------------------------------------------------------------------------|:-----------------|:--------------------------------------------|
| os_image | Flatcar Linux image for compute instances | "flatcar-stable" | flatcar-stable, flatcar-beta, flatcar-alpha |
| controller_count | Number of controllers (i.e. masters) | 1 | 3 |
| controller_type | Machine type for controllers | "n1-standard-1" | See below |
| controller_disk_size | Controller disk size in GB | 30 | 20 |
| worker_count | Number of workers | 1 | 3 |
| worker_type | Machine type for workers | "n1-standard-1" | See below |
| worker_disk_size | Worker disk size in GB | 30 | 100 |
| worker_preemptible | If enabled, Compute Engine will terminate workers randomly within 24 hours | false | true |
| controller_snippets | Controller Container Linux Config snippets | [] | [example](/advanced/customization/) |
| worker_snippets | Worker Container Linux Config snippets | [] | [example](/advanced/customization/) |

Check the list of valid [machine types](https://cloud.google.com/compute/docs/machine-types).

#### Preemption

Add `worker_preemptible = "true"` to allow worker nodes to be [preempted](https://cloud.google.com/compute/docs/instances/preemptible) at random, but pay [significantly](https://cloud.google.com/compute/pricing) less. Clusters tolerate stopping instances fairly well (reschedules pods, but cannot drain) and preemption provides a nice reward for running fault-tolerant cluster systems.
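
A minimal fragment, assuming the `yavin` module above:

```tf
module "yavin" {
  # ...required arguments as above...
  worker_preemptible = true # Compute Engine may stop workers within 24 hours
}
```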