Mirror of https://github.com/puppetmaster/typhoon.git, synced 2025-08-01 18:41:33 +02:00

Compare commits

46 Commits
Commit SHA1s (author and date columns were empty in this view):

c48b04ea88, 7b8a51070f, 533ace7011, b3c384fbc0, 563feacd29, 178d1e6eb1, 3f34e047f1, cc80ec9b98,
1d63592c42, d08cd317d9, 78d5100181, e8a42ae33e, ed0fa5c9a9, 15608fa6ae, 9e9362154d, 7d8c0631cd,
6ac5a0222b, ed9a031d39, 88112d4de2, bda94bd278, cafcdbc3e7, 4bc10a8a4c, 4c3dd07ab3, 8524aa00bc,
734c8c2107, fbe36b8b16, 8038669504, 7af83404e1, e9c7c4a4c1, ed82c41423, 41907a0ba6, ab66d11edf,
2325a503e1, 7a46eb03ae, 0e7977694f, f2f625984e, ac3eab4e00, aecb7775a8, 301f460d25, e247673a20,
808eafd178, 4d4c5413de, fbf4544cfd, af719e46f2, 25c9ec8e3d, 5bea4b7d9c
.github/PULL_REQUEST_TEMPLATE.md (vendored, deleted, 10 lines)

```md
High level description of the change.

* Specific change
* Specific change

## Testing

Describe your work to validate the change works.

rel: issue number (if applicable)
```
.github/release.yaml (vendored, new file, 12 lines)

```yaml
changelog:
  categories:
    - title: Contributions
      labels:
        - '*'
      exclude:
        labels:
          - dependencies
          - no-release-note
    - title: Dependencies
      labels:
        - dependencies
```
CHANGES.md (+55 lines)

@@ -4,6 +4,61 @@ Notable changes between versions.

## Latest

## v1.30.1

* Kubernetes [v1.30.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1301)
* Add firewall rules and security group rules for Cilium and Hubble metrics ([#1449](https://github.com/poseidon/typhoon/pull/1449))
* Update Cilium from v1.15.3 to [v1.15.5](https://github.com/cilium/cilium/releases/tag/v1.15.5)
* Update flannel from v0.24.4 to [v0.25.1](https://github.com/flannel-io/flannel/releases/tag/v0.25.1)
* Introduce a `components` variable to enable/disable/configure pre-installed components ([#1453](https://github.com/poseidon/typhoon/pull/1453)) (a usage sketch follows below)
  * Add Terraform modules for `coredns`, `cilium`, and `flannel` components
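A rough, hedged sketch of the new `components` variable on a cluster module: the ref points at this release, other required cluster variables are elided, and the keys inside each per-component map are assumptions (they are passed through to and validated by terraform-render-bootstrap, not defined here).

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"
  # ... required cluster variables elided ...

  components = {
    enable  = true                # keep pre-installed components overall
    coredns = { enable = true }   # assumed per-component key
    flannel = { enable = false }  # e.g. skip flannel to manage it separately
  }
}
```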
### Azure

* Add `controller_security_group_name` output for adding custom security rules ([#1450](https://github.com/poseidon/typhoon/pull/1450))
* Add `controller_address_prefixes` output for adding custom security rules ([#1450](https://github.com/poseidon/typhoon/pull/1450)) (see the example below)
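As a hedged illustration of how these outputs might be consumed, a sketch of attaching a custom rule to the controller security group. The module name, resource group name, rule priority, and destination port are placeholders; only the two output names come from the bullets above.

```tf
resource "azurerm_network_security_rule" "controller-custom-metrics" {
  resource_group_name         = "ramius"   # assumption: the cluster's resource group
  network_security_group_name = module.ramius.controller_security_group_name

  name                         = "allow-custom-metrics"
  priority                     = 2100       # placeholder priority
  access                       = "Allow"
  direction                    = "Inbound"
  protocol                     = "Tcp"
  source_port_range            = "*"
  destination_port_range       = "9100"     # e.g. a node metrics exporter
  source_address_prefix        = "*"
  destination_address_prefixes = module.ramius.controller_address_prefixes
}
```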
## v1.30.0

* Kubernetes [v1.30.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#v1300)
* Update etcd from v3.5.12 to [v3.5.13](https://github.com/etcd-io/etcd/releases/tag/v3.5.13)
* Update Cilium from v1.15.2 to [v1.15.3](https://github.com/cilium/cilium/releases/tag/v1.15.3)
* Update Calico from v3.27.2 to [v3.27.3](https://github.com/projectcalico/calico/releases/tag/v3.27.3)

## v1.29.3

* Kubernetes [v1.29.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1293)
* Update Cilium from v1.15.1 to [v1.15.2](https://github.com/cilium/cilium/releases/tag/v1.15.2)
* Update flannel from v0.24.2 to [v0.24.4](https://github.com/flannel-io/flannel/releases/tag/v0.24.4)

## v1.29.2

* Kubernetes [v1.29.2](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1292)
* Update etcd from v3.5.10 to [v3.5.12](https://github.com/etcd-io/etcd/releases/tag/v3.5.12)
* Update Cilium from v1.14.3 to [v1.15.1](https://github.com/cilium/cilium/releases/tag/v1.15.1)
* Update Calico from v3.26.3 to [v3.27.2](https://github.com/projectcalico/calico/releases/tag/v3.27.2)
  * Fix upstream incompatibility with Fedora CoreOS ([calico#8372](https://github.com/projectcalico/calico/issues/8372))
* Update flannel from v0.22.2 to [v0.24.2](https://github.com/flannel-io/flannel/releases/tag/v0.24.2)
* Add an `install_container_networking` variable (default `true`) ([#1421](https://github.com/poseidon/typhoon/pull/1421))
  * When `true`, the chosen container `networking` provider is installed during cluster bootstrap
  * Set `false` to self-manage the container networking provider. This allows flannel, Calico, or Cilium to be managed via Terraform (like any other Kubernetes resources). Nodes will be NotReady until you apply the self-managed container networking provider. This may become the default in the future (a sketch follows this list).
  * Continue to set `networking` to one of the three supported container networking providers. Most require custom firewall / security policies to be present across nodes, so they have some infra tie-ins.
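A hedged sketch of that self-managed flow, pairing `install_container_networking` with the Cilium component module added in this comparison. The platform, ref, and the kubernetes provider wiring for the addon module are assumptions, and other required variables are elided.

```tf
module "tempest" {
  source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"
  # ... required cluster variables elided ...

  networking                   = "cilium"
  install_container_networking = false
}

# Cilium applied as ordinary Terraform-managed Kubernetes resources; requires a
# kubernetes provider configured against the new cluster's kubeconfig.
module "cilium" {
  source = "git::https://github.com/poseidon/typhoon//addons/cilium?ref=v1.30.1"

  pod_cidr = "10.2.0.0/16" # match the cluster's pod CIDR
}
```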
## v1.29.1

* Kubernetes [v1.29.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1291)

### AWS

* Continue to support AWS IMDSv1 ([#1412](https://github.com/poseidon/typhoon/pull/1412))

### Known Issues

* Calico and Fedora CoreOS cannot be used together currently ([calico#8372](https://github.com/projectcalico/calico/issues/8372))

## v1.29.0

* Kubernetes [v1.29.0](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#v1290)
README.md (27 lines changed)

@@ -1,4 +1,9 @@
-# Typhoon [Release](https://github.com/poseidon/typhoon/releases) [Stars](https://github.com/poseidon/typhoon/stargazers) [Sponsors](https://github.com/sponsors/poseidon) [Mastodon](https://fosstodon.org/@typhoon)
+# Typhoon
+
+[Release](https://github.com/poseidon/typhoon/releases)
+[Stars](https://github.com/poseidon/typhoon/stargazers)
+[Sponsors](https://github.com/sponsors/poseidon)
+[Mastodon](https://fosstodon.org/@typhoon)

 <img align="right" src="https://storage.googleapis.com/poseidon/typhoon-logo.png">

@@ -13,7 +18,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.29.0 (upstream)
+* Kubernetes v1.30.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [preemptible](https://typhoon.psdn.io/flatcar-linux/google-cloud/#preemption) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization

@@ -21,7 +26,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Modules

-Typhoon provides a Terraform Module for each supported operating system and platform.
+Typhoon provides a Terraform Module for defining a Kubernetes cluster on each supported operating system and platform.

 Typhoon is available for [Fedora CoreOS](https://getfedora.org/coreos/).

@@ -52,6 +57,14 @@ Typhoon is available for [Flatcar Linux](https://www.flatcar-linux.org/releases/
 | AWS | Flatcar Linux (ARM64) | [aws/flatcar-linux/kubernetes](aws/flatcar-linux/kubernetes) | alpha |
 | Azure | Flatcar Linux (ARM64) | [azure/flatcar-linux/kubernetes](azure/flatcar-linux/kubernetes) | alpha |

+Typhoon also provides Terraform Modules for optionally managing individual components applied onto clusters.
+
+| Name | Terraform Module | Status |
+|---------|------------------|--------|
+| CoreDNS | [addons/coredns](addons/coredns) | beta |
+| Cilium | [addons/cilium](addons/cilium) | beta |
+| flannel | [addons/flannel](addons/flannel) | beta |
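To illustrate the component modules listed above, a minimal sketch of applying the CoreDNS module to an existing cluster. It assumes a kubernetes provider is already configured with the cluster's kubeconfig, and the values shown simply restate the module defaults.

```tf
module "coredns" {
  source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"

  replicas               = 2
  cluster_dns_service_ip = "10.3.0.10"
  cluster_domain_suffix  = "cluster.local"
}
```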
 ## Documentation

 * [Docs](https://typhoon.psdn.io)

@@ -65,7 +78,7 @@ Define a Kubernetes cluster by using the Terraform module for your chosen platfo

 ```tf
 module "yavin" {
-  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.29.0"
+  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

   # Google Cloud
   cluster_name = "yavin"

@@ -104,9 +117,9 @@ In 4-8 minutes (varies by platform), the cluster will be ready. This Google Clou
 $ export KUBECONFIG=/home/user/.kube/configs/yavin-config
 $ kubectl get nodes
 NAME                                       ROLES   STATUS  AGE  VERSION
-yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.29.0
-yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.29.0
-yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.29.0
+yavin-controller-0.c.example-com.internal  <none>  Ready   6m   v1.30.1
+yavin-worker-jrbf.c.example-com.internal   <none>  Ready   5m   v1.30.1
+yavin-worker-mzdm.c.example-com.internal   <none>  Ready   5m   v1.30.1
 ```

 List the pods.
addons/cilium/cluster-role-binding.tf (new file, 36 lines)

```tf
resource "kubernetes_cluster_role_binding" "operator" {
  metadata {
    name = "cilium-operator"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cilium-operator"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "cilium-operator"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "agent" {
  metadata {
    name = "cilium-agent"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cilium-agent"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "cilium-agent"
    namespace = "kube-system"
  }
}
```
112
addons/cilium/cluster-role.tf
Normal file
112
addons/cilium/cluster-role.tf
Normal file
@ -0,0 +1,112 @@
|
||||
resource "kubernetes_cluster_role" "operator" {
|
||||
metadata {
|
||||
name = "cilium-operator"
|
||||
}
|
||||
|
||||
# detect and restart [core|kube]dns pods on startup
|
||||
rule {
|
||||
verbs = ["get", "list", "watch", "delete"]
|
||||
api_groups = [""]
|
||||
resources = ["pods"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["list", "watch"]
|
||||
api_groups = [""]
|
||||
resources = ["nodes"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["patch"]
|
||||
api_groups = [""]
|
||||
resources = ["nodes", "nodes/status"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = ["discovery.k8s.io"]
|
||||
resources = ["endpointslices"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = [""]
|
||||
resources = ["services"]
|
||||
}
|
||||
|
||||
# Perform LB IP allocation for BGP
|
||||
rule {
|
||||
verbs = ["update"]
|
||||
api_groups = [""]
|
||||
resources = ["services/status"]
|
||||
}
|
||||
|
||||
# Perform the translation of a CNP that contains `ToGroup` to its endpoints
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = [""]
|
||||
resources = ["services", "endpoints", "namespaces"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["*"]
|
||||
api_groups = ["cilium.io"]
|
||||
resources = ["ciliumnetworkpolicies", "ciliumnetworkpolicies/status", "ciliumnetworkpolicies/finalizers", "ciliumclusterwidenetworkpolicies", "ciliumclusterwidenetworkpolicies/status", "ciliumclusterwidenetworkpolicies/finalizers", "ciliumendpoints", "ciliumendpoints/status", "ciliumendpoints/finalizers", "ciliumnodes", "ciliumnodes/status", "ciliumnodes/finalizers", "ciliumidentities", "ciliumidentities/status", "ciliumidentities/finalizers", "ciliumlocalredirectpolicies", "ciliumlocalredirectpolicies/status", "ciliumlocalredirectpolicies/finalizers", "ciliumendpointslices", "ciliumloadbalancerippools", "ciliumloadbalancerippools/status", "ciliumcidrgroups", "ciliuml2announcementpolicies", "ciliuml2announcementpolicies/status", "ciliumpodippools"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["create", "get", "list", "update", "watch"]
|
||||
api_groups = ["apiextensions.k8s.io"]
|
||||
resources = ["customresourcedefinitions"]
|
||||
}
|
||||
|
||||
# Cilium leader elects if among multiple operator replicas
|
||||
rule {
|
||||
verbs = ["create", "get", "update"]
|
||||
api_groups = ["coordination.k8s.io"]
|
||||
resources = ["leases"]
|
||||
}
|
||||
}
|
||||
|
||||
resource "kubernetes_cluster_role" "agent" {
|
||||
metadata {
|
||||
name = "cilium-agent"
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = ["networking.k8s.io"]
|
||||
resources = ["networkpolicies"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = ["discovery.k8s.io"]
|
||||
resources = ["endpointslices"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["get", "list", "watch"]
|
||||
api_groups = [""]
|
||||
resources = ["namespaces", "services", "pods", "endpoints", "nodes"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["patch"]
|
||||
api_groups = [""]
|
||||
resources = ["nodes/status"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["create", "get", "list", "watch", "update"]
|
||||
api_groups = ["apiextensions.k8s.io"]
|
||||
resources = ["customresourcedefinitions"]
|
||||
}
|
||||
|
||||
rule {
|
||||
verbs = ["*"]
|
||||
api_groups = ["cilium.io"]
|
||||
resources = ["ciliumnetworkpolicies", "ciliumnetworkpolicies/status", "ciliumclusterwidenetworkpolicies", "ciliumclusterwidenetworkpolicies/status", "ciliumendpoints", "ciliumendpoints/status", "ciliumnodes", "ciliumnodes/status", "ciliumidentities", "ciliumidentities/status", "ciliumlocalredirectpolicies", "ciliumlocalredirectpolicies/status", "ciliumegressnatpolicies", "ciliumendpointslices", "ciliumcidrgroups", "ciliuml2announcementpolicies", "ciliuml2announcementpolicies/status", "ciliumpodippools"]
|
||||
}
|
||||
}
|
||||
|
196
addons/cilium/config.tf
Normal file
196
addons/cilium/config.tf
Normal file
@ -0,0 +1,196 @@
|
||||
resource "kubernetes_config_map" "cilium" {
|
||||
metadata {
|
||||
name = "cilium"
|
||||
namespace = "kube-system"
|
||||
}
|
||||
data = {
|
||||
# Identity allocation mode selects how identities are shared between cilium
|
||||
# nodes by setting how they are stored. The options are "crd" or "kvstore".
|
||||
# - "crd" stores identities in kubernetes as CRDs (custom resource definition).
|
||||
# These can be queried with:
|
||||
# kubectl get ciliumid
|
||||
# - "kvstore" stores identities in a kvstore, etcd or consul, that is
|
||||
# configured below. Cilium versions before 1.6 supported only the kvstore
|
||||
# backend. Upgrades from these older cilium versions should continue using
|
||||
# the kvstore by commenting out the identity-allocation-mode below, or
|
||||
# setting it to "kvstore".
|
||||
identity-allocation-mode = "crd"
|
||||
cilium-endpoint-gc-interval = "5m0s"
|
||||
nodes-gc-interval = "5m0s"
|
||||
|
||||
# If you want to run cilium in debug mode change this value to true
|
||||
debug = "false"
|
||||
# The agent can be put into the following three policy enforcement modes
|
||||
# default, always and never.
|
||||
# https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
|
||||
enable-policy = "default"
|
||||
|
||||
# Prometheus
|
||||
enable-metrics = "true"
|
||||
prometheus-serve-addr = ":9962"
|
||||
operator-prometheus-serve-addr = ":9963"
|
||||
proxy-prometheus-port = "9964" # envoy
|
||||
|
||||
# Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
|
||||
# address.
|
||||
enable-ipv4 = "true"
|
||||
|
||||
# Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
|
||||
# address.
|
||||
enable-ipv6 = "false"
|
||||
|
||||
# Enable probing for a more efficient clock source for the BPF datapath
|
||||
enable-bpf-clock-probe = "true"
|
||||
|
||||
# Enable use of transparent proxying mechanisms (Linux 5.7+)
|
||||
enable-bpf-tproxy = "false"
|
||||
|
||||
# If you want cilium monitor to aggregate tracing for packets, set this level
|
||||
# to "low", "medium", or "maximum". The higher the level, the less packets
|
||||
# that will be seen in monitor output.
|
||||
monitor-aggregation = "medium"
|
||||
|
||||
# The monitor aggregation interval governs the typical time between monitor
|
||||
# notification events for each allowed connection.
|
||||
#
|
||||
# Only effective when monitor aggregation is set to "medium" or higher.
|
||||
monitor-aggregation-interval = "5s"
|
||||
|
||||
# The monitor aggregation flags determine which TCP flags, upon the
|
||||
# first observation, cause monitor notifications to be generated.
|
||||
#
|
||||
# Only effective when monitor aggregation is set to "medium" or higher.
|
||||
monitor-aggregation-flags = "all"
|
||||
|
||||
# Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
|
||||
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
|
||||
bpf-map-dynamic-size-ratio = "0.0025"
|
||||
# bpf-policy-map-max specifies the maximum number of entries in endpoint
|
||||
# policy map (per endpoint)
|
||||
bpf-policy-map-max = "16384"
|
||||
# bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
|
||||
# backend and affinity maps.
|
||||
bpf-lb-map-max = "65536"
|
||||
|
||||
# Pre-allocation of map entries allows per-packet latency to be reduced, at
|
||||
# the expense of up-front memory allocation for the entries in the maps. The
|
||||
# default value below will minimize memory usage in the default installation;
|
||||
# users who are sensitive to latency may consider setting this to "true".
|
||||
#
|
||||
# This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
|
||||
# this option and behave as though it is set to "true".
|
||||
#
|
||||
# If this value is modified, then during the next Cilium startup the restore
|
||||
# of existing endpoints and tracking of ongoing connections may be disrupted.
|
||||
# As a result, reply packets may be dropped and the load-balancing decisions
|
||||
# for established connections may change.
|
||||
#
|
||||
# If this option is set to "false" during an upgrade from 1.3 or earlier to
|
||||
# 1.4 or later, then it may cause one-time disruptions during the upgrade.
|
||||
preallocate-bpf-maps = "false"
|
||||
|
||||
# Name of the cluster. Only relevant when building a mesh of clusters.
|
||||
cluster-name = "default"
|
||||
# Unique ID of the cluster. Must be unique across all connected clusters and
|
||||
# in the range of 1 to 255. Only relevant when building a mesh of clusters.
|
||||
cluster-id = "0"
|
||||
|
||||
# Encapsulation mode for communication between nodes
|
||||
# Possible values:
|
||||
# - disabled
|
||||
# - vxlan (default)
|
||||
# - geneve
|
||||
routing-mode = "tunnel"
|
||||
tunnel = "vxlan"
|
||||
# Enables L7 proxy for L7 policy enforcement and visibility
|
||||
enable-l7-proxy = "true"
|
||||
|
||||
auto-direct-node-routes = "false"
|
||||
|
||||
# enableXTSocketFallback enables the fallback compatibility solution
|
||||
# when the xt_socket kernel module is missing and it is needed for
|
||||
# the datapath L7 redirection to work properly. See documentation
|
||||
# for details on when this can be disabled:
|
||||
# http://docs.cilium.io/en/latest/install/system_requirements/#admin-kernel-version.
|
||||
enable-xt-socket-fallback = "true"
|
||||
|
||||
# installIptablesRules enables installation of iptables rules to allow for
|
||||
# TPROXY (L7 proxy injection), iptables-based masquerading and compatibility
|
||||
# with kube-proxy. See documentation for details on when this can be
|
||||
# disabled.
|
||||
install-iptables-rules = "true"
|
||||
|
||||
# masquerade traffic leaving the node destined for outside
|
||||
enable-ipv4-masquerade = "true"
|
||||
enable-ipv6-masquerade = "false"
|
||||
|
||||
# bpfMasquerade enables masquerading with BPF instead of iptables
|
||||
enable-bpf-masquerade = "true"
|
||||
|
||||
# kube-proxy
|
||||
kube-proxy-replacement = "false"
|
||||
kube-proxy-replacement-healthz-bind-address = ""
|
||||
enable-session-affinity = "true"
|
||||
|
||||
# ClusterIPs from host namespace
|
||||
bpf-lb-sock = "true"
|
||||
# ClusterIPs from external nodes
|
||||
bpf-lb-external-clusterip = "true"
|
||||
|
||||
# NodePort
|
||||
enable-node-port = "true"
|
||||
enable-health-check-nodeport = "false"
|
||||
|
||||
# ExternalIPs
|
||||
enable-external-ips = "true"
|
||||
|
||||
# HostPort
|
||||
enable-host-port = "true"
|
||||
|
||||
# IPAM
|
||||
ipam = "cluster-pool"
|
||||
disable-cnp-status-updates = "true"
|
||||
cluster-pool-ipv4-cidr = "${var.pod_cidr}"
|
||||
cluster-pool-ipv4-mask-size = "24"
|
||||
|
||||
# Health
|
||||
agent-health-port = "9876"
|
||||
enable-health-checking = "true"
|
||||
enable-endpoint-health-checking = "true"
|
||||
|
||||
# Identity
|
||||
enable-well-known-identities = "false"
|
||||
enable-remote-node-identity = "true"
|
||||
|
||||
# Hubble server
|
||||
enable-hubble = var.enable_hubble
|
||||
hubble-disable-tls = "false"
|
||||
hubble-listen-address = ":4244"
|
||||
hubble-socket-path = "/var/run/cilium/hubble.sock"
|
||||
hubble-tls-client-ca-files = "/var/lib/cilium/tls/hubble/client-ca.crt"
|
||||
hubble-tls-cert-file = "/var/lib/cilium/tls/hubble/server.crt"
|
||||
hubble-tls-key-file = "/var/lib/cilium/tls/hubble/server.key"
|
||||
hubble-export-file-max-backups = "5"
|
||||
hubble-export-file-max-size-mb = "10"
|
||||
|
||||
# Hubble metrics
|
||||
hubble-metrics-server = ":9965"
|
||||
hubble-metrics = "dns drop tcp flow port-distribution icmp httpV2"
|
||||
enable-hubble-open-metrics = "false"
|
||||
|
||||
|
||||
# Misc
|
||||
enable-bandwidth-manager = "false"
|
||||
enable-local-redirect-policy = "false"
|
||||
policy-audit-mode = "false"
|
||||
operator-api-serve-addr = "127.0.0.1:9234"
|
||||
enable-l2-neigh-discovery = "true"
|
||||
enable-k8s-terminating-endpoint = "true"
|
||||
enable-k8s-networkpolicy = "true"
|
||||
external-envoy-proxy = "false"
|
||||
write-cni-conf-when-ready = "/host/etc/cni/net.d/05-cilium.conflist"
|
||||
cni-exclusive = "true"
|
||||
cni-log-file = "/var/run/cilium/cilium-cni.log"
|
||||
}
|
||||
}
|
||||
|
379
addons/cilium/daemonset.tf
Normal file
379
addons/cilium/daemonset.tf
Normal file
@ -0,0 +1,379 @@
|
||||
resource "kubernetes_daemonset" "cilium" {
|
||||
wait_for_rollout = false
|
||||
|
||||
metadata {
|
||||
name = "cilium"
|
||||
namespace = "kube-system"
|
||||
labels = {
|
||||
k8s-app = "cilium"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
strategy {
|
||||
type = "RollingUpdate"
|
||||
rolling_update {
|
||||
max_unavailable = "1"
|
||||
}
|
||||
}
|
||||
selector {
|
||||
match_labels = {
|
||||
k8s-app = "cilium-agent"
|
||||
}
|
||||
}
|
||||
template {
|
||||
metadata {
|
||||
labels = {
|
||||
k8s-app = "cilium-agent"
|
||||
}
|
||||
annotations = {
|
||||
"prometheus.io/port" = "9962"
|
||||
"prometheus.io/scrape" = "true"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
host_network = true
|
||||
priority_class_name = "system-node-critical"
|
||||
service_account_name = "cilium-agent"
|
||||
security_context {
|
||||
seccomp_profile {
|
||||
type = "RuntimeDefault"
|
||||
}
|
||||
}
|
||||
toleration {
|
||||
key = "node-role.kubernetes.io/controller"
|
||||
operator = "Exists"
|
||||
}
|
||||
toleration {
|
||||
key = "node.kubernetes.io/not-ready"
|
||||
operator = "Exists"
|
||||
}
|
||||
dynamic "toleration" {
|
||||
for_each = var.daemonset_tolerations
|
||||
content {
|
||||
key = toleration.value
|
||||
operator = "Exists"
|
||||
}
|
||||
}
|
||||
automount_service_account_token = true
|
||||
enable_service_links = false
|
||||
|
||||
# Cilium v1.13.1 starts installing CNI plugins in yet another init container
|
||||
# https://github.com/cilium/cilium/pull/24075
|
||||
init_container {
|
||||
name = "install-cni"
|
||||
image = "quay.io/cilium/cilium:v1.15.5"
|
||||
command = ["/install-plugin.sh"]
|
||||
security_context {
|
||||
allow_privilege_escalation = true
|
||||
privileged = true
|
||||
capabilities {
|
||||
drop = ["ALL"]
|
||||
}
|
||||
}
|
||||
volume_mount {
|
||||
name = "cni-bin-dir"
|
||||
mount_path = "/host/opt/cni/bin"
|
||||
}
|
||||
}
|
||||
|
||||
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
|
||||
# We use nsenter command with host's cgroup and mount namespaces enabled.
|
||||
init_container {
|
||||
name = "mount-cgroup"
|
||||
image = "quay.io/cilium/cilium:v1.15.5"
|
||||
command = [
|
||||
"sh",
|
||||
"-ec",
|
||||
# The statically linked Go program binary is invoked to avoid any
|
||||
# dependency on utilities like sh and mount that can be missing on certain
|
||||
# distros installed on the underlying host. Copy the binary to the
|
||||
# same directory where we install cilium cni plugin so that exec permissions
|
||||
# are available.
|
||||
"cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \"$${BIN_PATH}/cilium-mount\" $CGROUP_ROOT; rm /hostbin/cilium-mount"
|
||||
]
|
||||
env {
|
||||
name = "CGROUP_ROOT"
|
||||
value = "/run/cilium/cgroupv2"
|
||||
}
|
||||
env {
|
||||
name = "BIN_PATH"
|
||||
value = "/opt/cni/bin"
|
||||
}
|
||||
security_context {
|
||||
allow_privilege_escalation = true
|
||||
privileged = true
|
||||
}
|
||||
volume_mount {
|
||||
name = "hostproc"
|
||||
mount_path = "/hostproc"
|
||||
}
|
||||
volume_mount {
|
||||
name = "cni-bin-dir"
|
||||
mount_path = "/hostbin"
|
||||
}
|
||||
}
|
||||
|
||||
init_container {
|
||||
name = "clean-cilium-state"
|
||||
image = "quay.io/cilium/cilium:v1.15.5"
|
||||
command = ["/init-container.sh"]
|
||||
security_context {
|
||||
allow_privilege_escalation = true
|
||||
privileged = true
|
||||
}
|
||||
volume_mount {
|
||||
name = "sys-fs-bpf"
|
||||
mount_path = "/sys/fs/bpf"
|
||||
}
|
||||
volume_mount {
|
||||
name = "var-run-cilium"
|
||||
mount_path = "/var/run/cilium"
|
||||
}
|
||||
# Required to mount cgroup filesystem from the host to cilium agent pod
|
||||
volume_mount {
|
||||
name = "cilium-cgroup"
|
||||
mount_path = "/run/cilium/cgroupv2"
|
||||
mount_propagation = "HostToContainer"
|
||||
}
|
||||
}
|
||||
|
||||
container {
|
||||
name = "cilium-agent"
|
||||
image = "quay.io/cilium/cilium:v1.15.5"
|
||||
command = ["cilium-agent"]
|
||||
args = [
|
||||
"--config-dir=/tmp/cilium/config-map"
|
||||
]
|
||||
env {
|
||||
name = "K8S_NODE_NAME"
|
||||
value_from {
|
||||
field_ref {
|
||||
api_version = "v1"
|
||||
field_path = "spec.nodeName"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "CILIUM_K8S_NAMESPACE"
|
||||
value_from {
|
||||
field_ref {
|
||||
api_version = "v1"
|
||||
field_path = "metadata.namespace"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "KUBERNETES_SERVICE_HOST"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "in-cluster"
|
||||
key = "apiserver-host"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "KUBERNETES_SERVICE_PORT"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "in-cluster"
|
||||
key = "apiserver-port"
|
||||
}
|
||||
}
|
||||
}
|
||||
port {
|
||||
name = "peer-service"
|
||||
protocol = "TCP"
|
||||
container_port = 4244
|
||||
}
|
||||
# Metrics
|
||||
port {
|
||||
name = "metrics"
|
||||
protocol = "TCP"
|
||||
container_port = 9962
|
||||
}
|
||||
port {
|
||||
name = "envoy-metrics"
|
||||
protocol = "TCP"
|
||||
container_port = 9964
|
||||
}
|
||||
port {
|
||||
name = "hubble-metrics"
|
||||
protocol = "TCP"
|
||||
container_port = 9965
|
||||
}
|
||||
# Not yet used, prefer exec's
|
||||
port {
|
||||
name = "health"
|
||||
protocol = "TCP"
|
||||
container_port = 9876
|
||||
}
|
||||
lifecycle {
|
||||
pre_stop {
|
||||
exec {
|
||||
command = ["/cni-uninstall.sh"]
|
||||
}
|
||||
}
|
||||
}
|
||||
security_context {
|
||||
allow_privilege_escalation = true
|
||||
privileged = true
|
||||
}
|
||||
liveness_probe {
|
||||
exec {
|
||||
command = ["cilium", "status", "--brief"]
|
||||
}
|
||||
initial_delay_seconds = 120
|
||||
timeout_seconds = 5
|
||||
period_seconds = 30
|
||||
success_threshold = 1
|
||||
failure_threshold = 10
|
||||
}
|
||||
readiness_probe {
|
||||
exec {
|
||||
command = ["cilium", "status", "--brief"]
|
||||
}
|
||||
initial_delay_seconds = 5
|
||||
timeout_seconds = 5
|
||||
period_seconds = 20
|
||||
success_threshold = 1
|
||||
failure_threshold = 3
|
||||
}
|
||||
# Load kernel modules
|
||||
volume_mount {
|
||||
name = "lib-modules"
|
||||
read_only = true
|
||||
mount_path = "/lib/modules"
|
||||
}
|
||||
# Access iptables concurrently
|
||||
volume_mount {
|
||||
name = "xtables-lock"
|
||||
mount_path = "/run/xtables.lock"
|
||||
}
|
||||
# Keep state between restarts
|
||||
volume_mount {
|
||||
name = "var-run-cilium"
|
||||
mount_path = "/var/run/cilium"
|
||||
}
|
||||
volume_mount {
|
||||
name = "sys-fs-bpf"
|
||||
mount_path = "/sys/fs/bpf"
|
||||
mount_propagation = "Bidirectional"
|
||||
}
|
||||
# Configuration
|
||||
volume_mount {
|
||||
name = "config"
|
||||
read_only = true
|
||||
mount_path = "/tmp/cilium/config-map"
|
||||
}
|
||||
# Install config on host
|
||||
volume_mount {
|
||||
name = "cni-conf-dir"
|
||||
mount_path = "/host/etc/cni/net.d"
|
||||
}
|
||||
# Hubble
|
||||
volume_mount {
|
||||
name = "hubble-tls"
|
||||
mount_path = "/var/lib/cilium/tls/hubble"
|
||||
read_only = true
|
||||
}
|
||||
}
|
||||
termination_grace_period_seconds = 1
|
||||
|
||||
# Load kernel modules
|
||||
volume {
|
||||
name = "lib-modules"
|
||||
host_path {
|
||||
path = "/lib/modules"
|
||||
}
|
||||
}
|
||||
# Access iptables concurrently with other processes (e.g. kube-proxy)
|
||||
volume {
|
||||
name = "xtables-lock"
|
||||
host_path {
|
||||
path = "/run/xtables.lock"
|
||||
type = "FileOrCreate"
|
||||
}
|
||||
}
|
||||
# Keep state between restarts
|
||||
volume {
|
||||
name = "var-run-cilium"
|
||||
host_path {
|
||||
path = "/var/run/cilium"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
# Keep state for bpf maps between restarts
|
||||
volume {
|
||||
name = "sys-fs-bpf"
|
||||
host_path {
|
||||
path = "/sys/fs/bpf"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
# Mount host cgroup2 filesystem
|
||||
volume {
|
||||
name = "hostproc"
|
||||
host_path {
|
||||
path = "/proc"
|
||||
type = "Directory"
|
||||
}
|
||||
}
|
||||
volume {
|
||||
name = "cilium-cgroup"
|
||||
host_path {
|
||||
path = "/run/cilium/cgroupv2"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
# Read configuration
|
||||
volume {
|
||||
name = "config"
|
||||
config_map {
|
||||
name = "cilium"
|
||||
}
|
||||
}
|
||||
# Install CNI plugin and config on host
|
||||
volume {
|
||||
name = "cni-bin-dir"
|
||||
host_path {
|
||||
path = "/opt/cni/bin"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
volume {
|
||||
name = "cni-conf-dir"
|
||||
host_path {
|
||||
path = "/etc/cni/net.d"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
# Hubble TLS (optional)
|
||||
volume {
|
||||
name = "hubble-tls"
|
||||
projected {
|
||||
default_mode = "0400"
|
||||
sources {
|
||||
secret {
|
||||
name = "hubble-server-certs"
|
||||
optional = true
|
||||
items {
|
||||
key = "ca.crt"
|
||||
path = "client-ca.crt"
|
||||
}
|
||||
items {
|
||||
key = "tls.crt"
|
||||
path = "server.crt"
|
||||
}
|
||||
items {
|
||||
key = "tls.key"
|
||||
path = "server.key"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
163
addons/cilium/deployment.tf
Normal file
163
addons/cilium/deployment.tf
Normal file
@ -0,0 +1,163 @@
|
||||
resource "kubernetes_deployment" "operator" {
|
||||
wait_for_rollout = false
|
||||
metadata {
|
||||
name = "cilium-operator"
|
||||
namespace = "kube-system"
|
||||
}
|
||||
spec {
|
||||
replicas = 1
|
||||
strategy {
|
||||
type = "RollingUpdate"
|
||||
rolling_update {
|
||||
max_unavailable = "1"
|
||||
}
|
||||
}
|
||||
selector {
|
||||
match_labels = {
|
||||
name = "cilium-operator"
|
||||
}
|
||||
}
|
||||
template {
|
||||
metadata {
|
||||
labels = {
|
||||
name = "cilium-operator"
|
||||
}
|
||||
annotations = {
|
||||
"prometheus.io/scrape" = "true"
|
||||
"prometheus.io/port" = "9963"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
host_network = true
|
||||
priority_class_name = "system-cluster-critical"
|
||||
service_account_name = "cilium-operator"
|
||||
security_context {
|
||||
seccomp_profile {
|
||||
type = "RuntimeDefault"
|
||||
}
|
||||
}
|
||||
toleration {
|
||||
key = "node-role.kubernetes.io/controller"
|
||||
operator = "Exists"
|
||||
}
|
||||
toleration {
|
||||
key = "node.kubernetes.io/not-ready"
|
||||
operator = "Exists"
|
||||
}
|
||||
topology_spread_constraint {
|
||||
max_skew = 1
|
||||
topology_key = "kubernetes.io/hostname"
|
||||
when_unsatisfiable = "DoNotSchedule"
|
||||
label_selector {
|
||||
match_labels = {
|
||||
name = "cilium-operator"
|
||||
}
|
||||
}
|
||||
}
|
||||
automount_service_account_token = true
|
||||
enable_service_links = false
|
||||
container {
|
||||
name = "cilium-operator"
|
||||
image = "quay.io/cilium/operator-generic:v1.15.5"
|
||||
command = ["cilium-operator-generic"]
|
||||
args = [
|
||||
"--config-dir=/tmp/cilium/config-map",
|
||||
"--debug=$(CILIUM_DEBUG)"
|
||||
]
|
||||
env {
|
||||
name = "K8S_NODE_NAME"
|
||||
value_from {
|
||||
field_ref {
|
||||
api_version = "v1"
|
||||
field_path = "spec.nodeName"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "CILIUM_K8S_NAMESPACE"
|
||||
value_from {
|
||||
field_ref {
|
||||
api_version = "v1"
|
||||
field_path = "metadata.namespace"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "KUBERNETES_SERVICE_HOST"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "in-cluster"
|
||||
key = "apiserver-host"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "KUBERNETES_SERVICE_PORT"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "in-cluster"
|
||||
key = "apiserver-port"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "CILIUM_DEBUG"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "cilium"
|
||||
key = "debug"
|
||||
optional = true
|
||||
}
|
||||
}
|
||||
}
|
||||
port {
|
||||
name = "metrics"
|
||||
protocol = "TCP"
|
||||
host_port = 9963
|
||||
container_port = 9963
|
||||
}
|
||||
port {
|
||||
name = "health"
|
||||
container_port = 9234
|
||||
protocol = "TCP"
|
||||
}
|
||||
liveness_probe {
|
||||
http_get {
|
||||
scheme = "HTTP"
|
||||
host = "127.0.0.1"
|
||||
port = "9234"
|
||||
path = "/healthz"
|
||||
}
|
||||
initial_delay_seconds = 60
|
||||
timeout_seconds = 3
|
||||
period_seconds = 10
|
||||
}
|
||||
readiness_probe {
|
||||
http_get {
|
||||
scheme = "HTTP"
|
||||
host = "127.0.0.1"
|
||||
port = "9234"
|
||||
path = "/healthz"
|
||||
}
|
||||
timeout_seconds = 3
|
||||
period_seconds = 15
|
||||
failure_threshold = 5
|
||||
}
|
||||
volume_mount {
|
||||
name = "config"
|
||||
read_only = true
|
||||
mount_path = "/tmp/cilium/config-map"
|
||||
}
|
||||
}
|
||||
|
||||
volume {
|
||||
name = "config"
|
||||
config_map {
|
||||
name = "cilium"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
addons/cilium/service-account.tf (new file, 15 lines)

```tf
resource "kubernetes_service_account" "operator" {
  metadata {
    name      = "cilium-operator"
    namespace = "kube-system"
  }
  automount_service_account_token = false
}

resource "kubernetes_service_account" "agent" {
  metadata {
    name      = "cilium-agent"
    namespace = "kube-system"
  }
  automount_service_account_token = false
}
```
addons/cilium/variables.tf (new file, 17 lines)

```tf
variable "pod_cidr" {
  type        = string
  description = "CIDR IP range to assign Kubernetes pods"
  default     = "10.2.0.0/16"
}

variable "daemonset_tolerations" {
  type        = list(string)
  description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
  default     = []
}

variable "enable_hubble" {
  type        = bool
  description = "Run the embedded Hubble Server and mount hubble-server-certs Secret"
  default     = true
}
```
addons/cilium/versions.tf (new file, 8 lines)

```tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.8"
    }
  }
}
```
addons/coredns/cluster-role.tf (new file, 37 lines)

```tf
resource "kubernetes_cluster_role" "coredns" {
  metadata {
    name = "system:coredns"
  }
  rule {
    api_groups = [""]
    resources = [
      "endpoints",
      "services",
      "pods",
      "namespaces",
    ]
    verbs = [
      "list",
      "watch",
    ]
  }
  rule {
    api_groups = [""]
    resources = [
      "nodes",
    ]
    verbs = [
      "get",
    ]
  }
  rule {
    api_groups = ["discovery.k8s.io"]
    resources = [
      "endpointslices",
    ]
    verbs = [
      "list",
      "watch",
    ]
  }
}
```
addons/coredns/config.tf (new file, 30 lines)

```tf
resource "kubernetes_config_map" "coredns" {
  metadata {
    name      = "coredns"
    namespace = "kube-system"
  }
  data = {
    "Corefile" = <<-EOF
      .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log . {
          class error
        }
        kubernetes ${var.cluster_domain_suffix} in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
      }
    EOF
  }
}
```
151
addons/coredns/deployment.tf
Normal file
151
addons/coredns/deployment.tf
Normal file
@ -0,0 +1,151 @@
|
||||
resource "kubernetes_deployment" "coredns" {
|
||||
wait_for_rollout = false
|
||||
metadata {
|
||||
name = "coredns"
|
||||
namespace = "kube-system"
|
||||
labels = {
|
||||
k8s-app = "coredns"
|
||||
"kubernetes.io/name" = "CoreDNS"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
replicas = var.replicas
|
||||
strategy {
|
||||
type = "RollingUpdate"
|
||||
rolling_update {
|
||||
max_unavailable = "1"
|
||||
}
|
||||
}
|
||||
selector {
|
||||
match_labels = {
|
||||
k8s-app = "coredns"
|
||||
tier = "control-plane"
|
||||
}
|
||||
}
|
||||
template {
|
||||
metadata {
|
||||
labels = {
|
||||
k8s-app = "coredns"
|
||||
tier = "control-plane"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
affinity {
|
||||
node_affinity {
|
||||
preferred_during_scheduling_ignored_during_execution {
|
||||
weight = 100
|
||||
preference {
|
||||
match_expressions {
|
||||
key = "node.kubernetes.io/controller"
|
||||
operator = "Exists"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
pod_anti_affinity {
|
||||
preferred_during_scheduling_ignored_during_execution {
|
||||
weight = 100
|
||||
pod_affinity_term {
|
||||
label_selector {
|
||||
match_expressions {
|
||||
key = "tier"
|
||||
operator = "In"
|
||||
values = ["control-plane"]
|
||||
}
|
||||
match_expressions {
|
||||
key = "k8s-app"
|
||||
operator = "In"
|
||||
values = ["coredns"]
|
||||
}
|
||||
}
|
||||
topology_key = "kubernetes.io/hostname"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
dns_policy = "Default"
|
||||
priority_class_name = "system-cluster-critical"
|
||||
security_context {
|
||||
seccomp_profile {
|
||||
type = "RuntimeDefault"
|
||||
}
|
||||
}
|
||||
service_account_name = "coredns"
|
||||
toleration {
|
||||
key = "node-role.kubernetes.io/controller"
|
||||
effect = "NoSchedule"
|
||||
}
|
||||
container {
|
||||
name = "coredns"
|
||||
image = "registry.k8s.io/coredns/coredns:v1.11.1"
|
||||
args = ["-conf", "/etc/coredns/Corefile"]
|
||||
port {
|
||||
name = "dns"
|
||||
container_port = 53
|
||||
protocol = "UDP"
|
||||
}
|
||||
port {
|
||||
name = "dns-tcp"
|
||||
container_port = 53
|
||||
protocol = "TCP"
|
||||
}
|
||||
port {
|
||||
name = "metrics"
|
||||
container_port = 9153
|
||||
protocol = "TCP"
|
||||
}
|
||||
resources {
|
||||
requests = {
|
||||
cpu = "100m"
|
||||
memory = "70Mi"
|
||||
}
|
||||
limits = {
|
||||
memory = "170Mi"
|
||||
}
|
||||
}
|
||||
security_context {
|
||||
capabilities {
|
||||
add = ["NET_BIND_SERVICE"]
|
||||
drop = ["all"]
|
||||
}
|
||||
read_only_root_filesystem = true
|
||||
}
|
||||
liveness_probe {
|
||||
http_get {
|
||||
path = "/health"
|
||||
port = "8080"
|
||||
scheme = "HTTP"
|
||||
}
|
||||
initial_delay_seconds = 60
|
||||
timeout_seconds = 5
|
||||
success_threshold = 1
|
||||
failure_threshold = 5
|
||||
}
|
||||
readiness_probe {
|
||||
http_get {
|
||||
path = "/ready"
|
||||
port = "8181"
|
||||
scheme = "HTTP"
|
||||
}
|
||||
}
|
||||
volume_mount {
|
||||
name = "config"
|
||||
mount_path = "/etc/coredns"
|
||||
read_only = true
|
||||
}
|
||||
}
|
||||
volume {
|
||||
name = "config"
|
||||
config_map {
|
||||
name = "coredns"
|
||||
items {
|
||||
key = "Corefile"
|
||||
path = "Corefile"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
addons/coredns/service-account.tf (new file, 24 lines)

```tf
resource "kubernetes_service_account" "coredns" {
  metadata {
    name      = "coredns"
    namespace = "kube-system"
  }
  automount_service_account_token = false
}

resource "kubernetes_cluster_role_binding" "coredns" {
  metadata {
    name = "system:coredns"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "system:coredns"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "coredns"
    namespace = "kube-system"
  }
}
```
addons/coredns/service.tf (new file, 31 lines)

```tf
resource "kubernetes_service" "coredns" {
  metadata {
    name      = "coredns"
    namespace = "kube-system"
    labels = {
      "k8s-app"            = "coredns"
      "kubernetes.io/name" = "CoreDNS"
    }
    annotations = {
      "prometheus.io/scrape" = "true"
      "prometheus.io/port"   = "9153"
    }
  }
  spec {
    type       = "ClusterIP"
    cluster_ip = var.cluster_dns_service_ip
    selector = {
      k8s-app = "coredns"
    }
    port {
      name     = "dns"
      protocol = "UDP"
      port     = 53
    }
    port {
      name     = "dns-tcp"
      protocol = "TCP"
      port     = 53
    }
  }
}
```
addons/coredns/variables.tf (new file, 15 lines)

```tf
variable "replicas" {
  type        = number
  description = "CoreDNS replica count"
  default     = 2
}

variable "cluster_dns_service_ip" {
  description = "Must be set to `cluster_dns_service_ip` output by cluster"
  default     = "10.3.0.10"
}

variable "cluster_domain_suffix" {
  description = "Must be set to `cluster_domain_suffix` output by cluster"
  default     = "cluster.local"
}
```
addons/coredns/versions.tf (new file, 9 lines)

```tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.8"
    }
  }
}
```
addons/flannel/cluster-role-binding.tf (new file, 18 lines)

```tf
resource "kubernetes_cluster_role_binding" "flannel" {
  metadata {
    name = "flannel"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "flannel"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "flannel"
    namespace = "kube-system"
  }
}
```
addons/flannel/cluster-role.tf (new file, 24 lines)

```tf
resource "kubernetes_cluster_role" "flannel" {
  metadata {
    name = "flannel"
  }

  rule {
    api_groups = [""]
    resources  = ["pods"]
    verbs      = ["get"]
  }

  rule {
    api_groups = [""]
    resources  = ["nodes"]
    verbs      = ["list", "watch"]
  }

  rule {
    api_groups = [""]
    resources  = ["nodes/status"]
    verbs      = ["patch"]
  }
}
```
addons/flannel/config.tf (new file, 44 lines)

```tf
resource "kubernetes_config_map" "config" {
  metadata {
    name      = "flannel-config"
    namespace = "kube-system"
    labels = {
      k8s-app = "flannel"
      tier    = "node"
    }
  }

  data = {
    "cni-conf.json" = <<-EOF
      {
        "name": "cbr0",
        "cniVersion": "0.3.1",
        "plugins": [
          {
            "type": "flannel",
            "delegate": {
              "hairpinMode": true,
              "isDefaultGateway": true
            }
          },
          {
            "type": "portmap",
            "capabilities": {
              "portMappings": true
            }
          }
        ]
      }
    EOF
    "net-conf.json" = <<-EOF
      {
        "Network": "${var.pod_cidr}",
        "Backend": {
          "Type": "vxlan",
          "Port": 4789
        }
      }
    EOF
  }
}
```
167
addons/flannel/daemonset.tf
Normal file
167
addons/flannel/daemonset.tf
Normal file
@ -0,0 +1,167 @@
|
||||
resource "kubernetes_daemonset" "flannel" {
|
||||
metadata {
|
||||
name = "flannel"
|
||||
namespace = "kube-system"
|
||||
labels = {
|
||||
k8s-app = "flannel"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
strategy {
|
||||
type = "RollingUpdate"
|
||||
rolling_update {
|
||||
max_unavailable = "1"
|
||||
}
|
||||
}
|
||||
selector {
|
||||
match_labels = {
|
||||
k8s-app = "flannel"
|
||||
}
|
||||
}
|
||||
template {
|
||||
metadata {
|
||||
labels = {
|
||||
k8s-app = "flannel"
|
||||
}
|
||||
}
|
||||
spec {
|
||||
host_network = true
|
||||
priority_class_name = "system-node-critical"
|
||||
service_account_name = "flannel"
|
||||
security_context {
|
||||
seccomp_profile {
|
||||
type = "RuntimeDefault"
|
||||
}
|
||||
}
|
||||
toleration {
|
||||
key = "node-role.kubernetes.io/controller"
|
||||
operator = "Exists"
|
||||
}
|
||||
toleration {
|
||||
key = "node.kubernetes.io/not-ready"
|
||||
operator = "Exists"
|
||||
}
|
||||
dynamic "toleration" {
|
||||
for_each = var.daemonset_tolerations
|
||||
content {
|
||||
key = toleration.value
|
||||
operator = "Exists"
|
||||
}
|
||||
}
|
||||
init_container {
|
||||
name = "install-cni"
|
||||
image = "quay.io/poseidon/flannel-cni:v0.4.2"
|
||||
command = ["/install-cni.sh"]
|
||||
env {
|
||||
name = "CNI_NETWORK_CONFIG"
|
||||
value_from {
|
||||
config_map_key_ref {
|
||||
name = "flannel-config"
|
||||
key = "cni-conf.json"
|
||||
}
|
||||
}
|
||||
}
|
||||
volume_mount {
|
||||
name = "cni-bin-dir"
|
||||
mount_path = "/host/opt/cni/bin/"
|
||||
}
|
||||
volume_mount {
|
||||
name = "cni-conf-dir"
|
||||
mount_path = "/host/etc/cni/net.d"
|
||||
}
|
||||
}
|
||||
|
||||
container {
|
||||
name = "flannel"
|
||||
image = "docker.io/flannel/flannel:v0.25.1"
|
||||
command = [
|
||||
"/opt/bin/flanneld",
|
||||
"--ip-masq",
|
||||
"--kube-subnet-mgr",
|
||||
"--iface=$(POD_IP)"
|
||||
]
|
||||
env {
|
||||
name = "POD_NAME"
|
||||
value_from {
|
||||
field_ref {
|
||||
field_path = "metadata.name"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "POD_NAMESPACE"
|
||||
value_from {
|
||||
field_ref {
|
||||
field_path = "metadata.namespace"
|
||||
}
|
||||
}
|
||||
}
|
||||
env {
|
||||
name = "POD_IP"
|
||||
value_from {
|
||||
field_ref {
|
||||
field_path = "status.podIP"
|
||||
}
|
||||
}
|
||||
}
|
||||
security_context {
|
||||
privileged = true
|
||||
}
|
||||
resources {
|
||||
requests = {
|
||||
cpu = "100m"
|
||||
}
|
||||
}
|
||||
volume_mount {
|
||||
name = "flannel-config"
|
||||
mount_path = "/etc/kube-flannel/"
|
||||
}
|
||||
volume_mount {
|
||||
name = "run-flannel"
|
||||
mount_path = "/run/flannel"
|
||||
}
|
||||
volume_mount {
|
||||
name = "xtables-lock"
|
||||
mount_path = "/run/xtables.lock"
|
||||
}
|
||||
}
|
||||
|
||||
volume {
|
||||
name = "flannel-config"
|
||||
config_map {
|
||||
name = "flannel-config"
|
||||
}
|
||||
}
|
||||
volume {
|
||||
name = "run-flannel"
|
||||
host_path {
|
||||
path = "/run/flannel"
|
||||
}
|
||||
}
|
||||
# Used by install-cni
|
||||
volume {
|
||||
name = "cni-bin-dir"
|
||||
host_path {
|
||||
path = "/opt/cni/bin"
|
||||
}
|
||||
}
|
||||
volume {
|
||||
name = "cni-conf-dir"
|
||||
host_path {
|
||||
path = "/etc/cni/net.d"
|
||||
type = "DirectoryOrCreate"
|
||||
}
|
||||
}
|
||||
# Access iptables concurrently
|
||||
volume {
|
||||
name = "xtables-lock"
|
||||
host_path {
|
||||
path = "/run/xtables.lock"
|
||||
type = "FileOrCreate"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
addons/flannel/service-account.tf (new file, 7 lines)

```tf
resource "kubernetes_service_account" "flannel" {
  metadata {
    name      = "flannel"
    namespace = "kube-system"
  }
}
```
addons/flannel/variables.tf (new file, 11 lines)

```tf
variable "pod_cidr" {
  type        = string
  description = "CIDR IP range to assign Kubernetes pods"
  default     = "10.2.0.0/16"
}

variable "daemonset_tolerations" {
  type        = list(string)
  description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
  default     = []
}
```
addons/flannel/versions.tf (new file, 8 lines)

```tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.8"
    }
  }
}
```
@@ -59,4 +59,11 @@ rules:
   - get
   - list
   - watch
+- apiGroups:
+    - discovery.k8s.io
+  resources:
+    - "endpointslices"
+  verbs:
+    - get
+    - list
+    - watch
@@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster

 ## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>

-* Kubernetes v1.29.0 (upstream)
+* Kubernetes v1.30.1 (upstream)
 * Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
 * On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
 * Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/fedora-coreos/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
@@ -1,6 +1,6 @@
 # Kubernetes assets (kubeconfig, manifests)
 module "bootstrap" {
-  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
+  source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"

   cluster_name = var.cluster_name
   api_servers  = [format("%s.%s", var.cluster_name, var.dns_zone)]

@@ -13,5 +13,6 @@ module "bootstrap" {
   enable_reporting      = var.enable_reporting
   enable_aggregation    = var.enable_aggregation
   daemonset_tolerations = var.daemonset_tolerations
+  components            = var.components
 }
@@ -12,7 +12,7 @@ systemd:
       Wants=network-online.target
       After=network-online.target
       [Service]
-      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
+      Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
       Type=exec
       ExecStartPre=/bin/mkdir -p /var/lib/etcd
       ExecStartPre=-/usr/bin/podman rm etcd

@@ -57,7 +57,7 @@ systemd:
       After=afterburn.service
       Wants=rpc-statd.service
       [Service]
-      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
+      Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
       EnvironmentFile=/run/metadata/afterburn
       ExecStartPre=/bin/mkdir -p /etc/cni/net.d
       ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests

@@ -116,7 +116,7 @@ systemd:
         --volume /opt/bootstrap/assets:/assets:ro,Z \
         --volume /opt/bootstrap/apply:/apply:ro,Z \
         --entrypoint=/apply \
-        quay.io/poseidon/kubelet:v1.29.0
+        quay.io/poseidon/kubelet:v1.30.1
       ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
       ExecStartPost=-/usr/bin/podman stop bootstrap
 storage:

@@ -163,7 +163,7 @@ storage:
       contents:
         inline: |
           #!/bin/bash -e
-          mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
+          mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
           awk '/#####/ {filename=$2; next} {print > filename}' assets
           mkdir -p /etc/ssl/etcd/etcd
           mkdir -p /etc/kubernetes/pki

@@ -177,8 +177,7 @@ storage:
           mv static-manifests/* /etc/kubernetes/manifests/
           mkdir -p /opt/bootstrap/assets
           mv manifests /opt/bootstrap/assets/manifests
-          mv manifests-networking/* /opt/bootstrap/assets/manifests/
-          rm -rf assets auth static-manifests tls manifests-networking
+          rm -rf assets auth static-manifests tls manifests
           chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
     - path: /opt/bootstrap/apply
       mode: 0544
@@ -92,6 +92,30 @@ resource "aws_security_group_rule" "controller-cilium-health-self" {
   self = true
 }

+resource "aws_security_group_rule" "controller-cilium-metrics" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 9962
+  to_port                  = 9965
+  source_security_group_id = aws_security_group.worker.id
+}
+
+resource "aws_security_group_rule" "controller-cilium-metrics-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.controller.id
+
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 9962
+  to_port   = 9965
+  self      = true
+}
+
 # IANA VXLAN default
 resource "aws_security_group_rule" "controller-vxlan" {
   count = var.networking == "flannel" ? 1 : 0

@@ -379,6 +403,30 @@ resource "aws_security_group_rule" "worker-cilium-health-self" {
   self = true
 }

+resource "aws_security_group_rule" "worker-cilium-metrics" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type                     = "ingress"
+  protocol                 = "tcp"
+  from_port                = 9962
+  to_port                  = 9965
+  source_security_group_id = aws_security_group.controller.id
+}
+
+resource "aws_security_group_rule" "worker-cilium-metrics-self" {
+  count = var.networking == "cilium" ? 1 : 0
+
+  security_group_id = aws_security_group.worker.id
+
+  type      = "ingress"
+  protocol  = "tcp"
+  from_port = 9962
+  to_port   = 9965
+  self      = true
+}
+
 # IANA VXLAN default
 resource "aws_security_group_rule" "worker-vxlan" {
   count = var.networking == "flannel" ? 1 : 0
@ -176,3 +176,19 @@ variable "daemonset_tolerations" {
|
||||
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -29,7 +29,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
|
@ -78,6 +78,11 @@ resource "aws_launch_template" "worker" {
|
||||
# network
|
||||
vpc_security_group_ids = var.security_groups
|
||||
|
||||
# metadata
|
||||
metadata_options {
|
||||
http_tokens = "optional"
|
||||
}
|
||||
|
||||
# spot
|
||||
dynamic "instance_market_options" {
|
||||
for_each = var.spot_price > 0 ? [1] : []
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot](https://typhoon.psdn.io/flatcar-linux/aws/#spot) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
@ -13,5 +13,6 @@ module "bootstrap" {
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
daemonset_tolerations = var.daemonset_tolerations
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -11,7 +11,7 @@ systemd:
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
ExecStartPre=/usr/bin/docker run -d \
|
||||
--name etcd \
|
||||
--network host \
|
||||
@ -58,7 +58,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -109,7 +109,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStart=/usr/bin/docker run \
|
||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||
-v /opt/bootstrap/assets:/assets:ro \
|
||||
@ -162,7 +162,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -177,8 +177,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
contents:
|
||||
|
@ -92,6 +92,30 @@ resource "aws_security_group_rule" "controller-cilium-health-self" {
|
||||
self = true
|
||||
}
|
||||
|
||||
resource "aws_security_group_rule" "controller-cilium-metrics" {
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
security_group_id = aws_security_group.controller.id
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 9962
|
||||
to_port = 9965
|
||||
source_security_group_id = aws_security_group.worker.id
|
||||
}
|
||||
|
||||
resource "aws_security_group_rule" "controller-cilium-metrics-self" {
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
security_group_id = aws_security_group.controller.id
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 9962
|
||||
to_port = 9965
|
||||
self = true
|
||||
}
|
||||
|
||||
# IANA VXLAN default
|
||||
resource "aws_security_group_rule" "controller-vxlan" {
|
||||
count = var.networking == "flannel" ? 1 : 0
|
||||
@ -379,6 +403,30 @@ resource "aws_security_group_rule" "worker-cilium-health-self" {
|
||||
self = true
|
||||
}
|
||||
|
||||
resource "aws_security_group_rule" "worker-cilium-metrics" {
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
security_group_id = aws_security_group.worker.id
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 9962
|
||||
to_port = 9965
|
||||
source_security_group_id = aws_security_group.controller.id
|
||||
}
|
||||
|
||||
resource "aws_security_group_rule" "worker-cilium-metrics-self" {
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
security_group_id = aws_security_group.worker.id
|
||||
|
||||
type = "ingress"
|
||||
protocol = "tcp"
|
||||
from_port = 9962
|
||||
to_port = 9965
|
||||
self = true
|
||||
}
|
||||
|
||||
# IANA VXLAN default
|
||||
resource "aws_security_group_rule" "worker-vxlan" {
|
||||
count = var.networking == "flannel" ? 1 : 0
|
||||
|
@ -176,3 +176,19 @@ variable "daemonset_tolerations" {
|
||||
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -7,7 +7,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.11"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -30,7 +30,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
aws = ">= 2.23, <= 6.0"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.11"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -78,6 +78,11 @@ resource "aws_launch_template" "worker" {
|
||||
# network
|
||||
vpc_security_group_ids = var.security_groups
|
||||
|
||||
# metadata
|
||||
metadata_options {
|
||||
http_tokens = "optional"
|
||||
}
|
||||
|
||||
# spot
|
||||
dynamic "instance_market_options" {
|
||||
for_each = var.spot_price > 0 ? [1] : []
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [spot priority](https://typhoon.psdn.io/fedora-coreos/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,13 +1,12 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
etcd_servers = formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)
|
||||
|
||||
networking = var.networking
|
||||
|
||||
# only effective with Calico networking
|
||||
# we should be able to use 1450 MTU, but in practice, 1410 was needed
|
||||
network_encapsulation = "vxlan"
|
||||
@ -19,5 +18,6 @@ module "bootstrap" {
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
daemonset_tolerations = var.daemonset_tolerations
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -12,7 +12,7 @@ systemd:
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
Type=exec
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/etcd
|
||||
ExecStartPre=-/usr/bin/podman rm etcd
|
||||
@ -54,7 +54,7 @@ systemd:
|
||||
Description=Kubelet (System Container)
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
@ -111,7 +111,7 @@ systemd:
|
||||
--volume /opt/bootstrap/assets:/assets:ro,Z \
|
||||
--volume /opt/bootstrap/apply:/apply:ro,Z \
|
||||
--entrypoint=/apply \
|
||||
quay.io/poseidon/kubelet:v1.29.0
|
||||
quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
|
||||
ExecStartPost=-/usr/bin/podman stop bootstrap
|
||||
storage:
|
||||
@ -158,7 +158,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -172,8 +172,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
|
@ -39,8 +39,19 @@ output "kubeconfig" {
|
||||
|
||||
# Outputs for custom firewalling
|
||||
|
||||
output "controller_security_group_name" {
|
||||
description = "Network Security Group for controller nodes"
|
||||
value = azurerm_network_security_group.controller.name
|
||||
}
|
||||
|
||||
output "worker_security_group_name" {
|
||||
value = azurerm_network_security_group.worker.name
|
||||
description = "Network Security Group for worker nodes"
|
||||
value = azurerm_network_security_group.worker.name
|
||||
}
|
||||
|
||||
output "controller_address_prefixes" {
|
||||
description = "Controller network subnet CIDR addresses (for source/destination)"
|
||||
value = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
output "worker_address_prefixes" {
|
||||
|
@ -121,7 +121,7 @@ resource "azurerm_network_security_rule" "controller-cilium-health" {
|
||||
|
||||
name = "allow-cilium-health"
|
||||
network_security_group_name = azurerm_network_security_group.controller.name
|
||||
priority = "2019"
|
||||
priority = "2018"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
@ -131,6 +131,22 @@ resource "azurerm_network_security_rule" "controller-cilium-health" {
|
||||
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "controller-cilium-metrics" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
name = "allow-cilium-metrics"
|
||||
network_security_group_name = azurerm_network_security_group.controller.name
|
||||
priority = "2019"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
source_port_range = "*"
|
||||
destination_port_range = "9962-9965"
|
||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
||||
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "controller-vxlan" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
|
||||
@ -303,7 +319,7 @@ resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||
|
||||
name = "allow-cilium-health"
|
||||
network_security_group_name = azurerm_network_security_group.worker.name
|
||||
priority = "2014"
|
||||
priority = "2013"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
@ -313,6 +329,22 @@ resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
name = "allow-cilium-metrics"
|
||||
network_security_group_name = azurerm_network_security_group.worker.name
|
||||
priority = "2014"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
source_port_range = "*"
|
||||
destination_port_range = "9962-9965"
|
||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "worker-vxlan" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
|
||||
|
@ -146,3 +146,19 @@ variable "daemonset_tolerations" {
|
||||
description = "List of additional taint keys kube-system DaemonSets should tolerate (e.g. ['custom-role', 'gpu-role'])"
|
||||
default = []
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -26,7 +26,7 @@ systemd:
|
||||
Description=Kubelet (System Container)
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [worker pools](https://typhoon.psdn.io/advanced/worker-pools/), [low-priority](https://typhoon.psdn.io/flatcar-linux/azure/#low-priority) workers, and [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,13 +1,12 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
etcd_servers = formatlist("%s.%s", azurerm_dns_a_record.etcds.*.name, var.dns_zone)
|
||||
|
||||
networking = var.networking
|
||||
|
||||
# only effective with Calico networking
|
||||
# we should be able to use 1450 MTU, but in practice, 1410 was needed
|
||||
network_encapsulation = "vxlan"
|
||||
@ -19,5 +18,6 @@ module "bootstrap" {
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
daemonset_tolerations = var.daemonset_tolerations
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -11,7 +11,7 @@ systemd:
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
ExecStartPre=/usr/bin/docker run -d \
|
||||
--name etcd \
|
||||
--network host \
|
||||
@ -56,7 +56,7 @@ systemd:
|
||||
After=docker.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
@ -105,7 +105,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStart=/usr/bin/docker run \
|
||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||
-v /opt/bootstrap/assets:/assets:ro \
|
||||
@ -158,7 +158,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -173,8 +173,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
contents:
|
||||
|
@ -39,8 +39,19 @@ output "kubeconfig" {
|
||||
|
||||
# Outputs for custom firewalling
|
||||
|
||||
output "controller_security_group_name" {
|
||||
description = "Network Security Group for controller nodes"
|
||||
value = azurerm_network_security_group.controller.name
|
||||
}
|
||||
|
||||
output "worker_security_group_name" {
|
||||
value = azurerm_network_security_group.worker.name
|
||||
description = "Network Security Group for worker nodes"
|
||||
value = azurerm_network_security_group.worker.name
|
||||
}
|
||||
|
||||
output "controller_address_prefixes" {
|
||||
description = "Controller network subnet CIDR addresses (for source/destination)"
|
||||
value = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
output "worker_address_prefixes" {
|
||||
|
@ -121,7 +121,7 @@ resource "azurerm_network_security_rule" "controller-cilium-health" {
|
||||
|
||||
name = "allow-cilium-health"
|
||||
network_security_group_name = azurerm_network_security_group.controller.name
|
||||
priority = "2019"
|
||||
priority = "2018"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
@ -131,6 +131,22 @@ resource "azurerm_network_security_rule" "controller-cilium-health" {
|
||||
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "controller-cilium-metrics" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
name = "allow-cilium-metrics"
|
||||
network_security_group_name = azurerm_network_security_group.controller.name
|
||||
priority = "2019"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
source_port_range = "*"
|
||||
destination_port_range = "9962-9965"
|
||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
||||
destination_address_prefixes = azurerm_subnet.controller.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "controller-vxlan" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
|
||||
@ -303,7 +319,7 @@ resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||
|
||||
name = "allow-cilium-health"
|
||||
network_security_group_name = azurerm_network_security_group.worker.name
|
||||
priority = "2014"
|
||||
priority = "2013"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
@ -313,6 +329,22 @@ resource "azurerm_network_security_rule" "worker-cilium-health" {
|
||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "worker-cilium-metrics" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
count = var.networking == "cilium" ? 1 : 0
|
||||
|
||||
name = "allow-cilium-metrics"
|
||||
network_security_group_name = azurerm_network_security_group.worker.name
|
||||
priority = "2014"
|
||||
access = "Allow"
|
||||
direction = "Inbound"
|
||||
protocol = "Tcp"
|
||||
source_port_range = "*"
|
||||
destination_port_range = "9962-9965"
|
||||
source_address_prefixes = concat(azurerm_subnet.controller.address_prefixes, azurerm_subnet.worker.address_prefixes)
|
||||
destination_address_prefixes = azurerm_subnet.worker.address_prefixes
|
||||
}
|
||||
|
||||
resource "azurerm_network_security_rule" "worker-vxlan" {
|
||||
resource_group_name = azurerm_resource_group.cluster.name
|
||||
|
||||
|
@ -163,3 +163,19 @@ variable "cluster_domain_suffix" {
|
||||
description = "Queries for domains with the suffix will be answered by coredns. Default is cluster.local (e.g. foo.default.svc.cluster.local) "
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -7,7 +7,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.11"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -28,7 +28,7 @@ systemd:
|
||||
After=docker.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
azurerm = ">= 2.8, < 4.0"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.11"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [var.k8s_domain_name]
|
||||
@ -13,6 +13,7 @@ module "bootstrap" {
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
||||
|
@ -12,7 +12,7 @@ systemd:
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
Type=exec
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/etcd
|
||||
ExecStartPre=-/usr/bin/podman rm etcd
|
||||
@ -53,7 +53,7 @@ systemd:
|
||||
Description=Kubelet (System Container)
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
@ -113,7 +113,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=-/usr/bin/podman rm bootstrap
|
||||
ExecStart=/usr/bin/podman run --name bootstrap \
|
||||
--network host \
|
||||
@ -168,7 +168,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -182,8 +182,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
|
@ -159,3 +159,18 @@ variable "cluster_domain_suffix" {
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -25,7 +25,7 @@ systemd:
|
||||
Description=Kubelet (System Container)
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,6 +1,6 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [var.k8s_domain_name]
|
||||
@ -13,5 +13,6 @@ module "bootstrap" {
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -11,7 +11,7 @@ systemd:
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
ExecStartPre=/usr/bin/docker run -d \
|
||||
--name etcd \
|
||||
--network host \
|
||||
@ -64,7 +64,7 @@ systemd:
|
||||
After=docker.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
@ -114,7 +114,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStart=/usr/bin/docker run \
|
||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||
-v /opt/bootstrap/assets:/assets:ro \
|
||||
@ -169,7 +169,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -184,8 +184,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
contents:
|
||||
|
@ -175,3 +175,18 @@ variable "cluster_domain_suffix" {
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.9"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
matchbox = {
|
||||
source = "poseidon/matchbox"
|
||||
|
@ -36,7 +36,7 @@ systemd:
|
||||
After=docker.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
ExecStartPre=/bin/mkdir -p /opt/cni/bin
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.9"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
matchbox = {
|
||||
source = "poseidon/matchbox"
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/), SELinux enforcing
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,13 +1,12 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
etcd_servers = digitalocean_record.etcds.*.fqdn
|
||||
|
||||
networking = var.networking
|
||||
|
||||
# only effective with Calico networking
|
||||
network_encapsulation = "vxlan"
|
||||
network_mtu = "1450"
|
||||
@ -17,5 +16,6 @@ module "bootstrap" {
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -12,7 +12,7 @@ systemd:
|
||||
Wants=network-online.target
|
||||
After=network-online.target
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
Type=exec
|
||||
ExecStartPre=/bin/mkdir -p /var/lib/etcd
|
||||
ExecStartPre=-/usr/bin/podman rm etcd
|
||||
@ -55,7 +55,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -123,7 +123,7 @@ systemd:
|
||||
--volume /opt/bootstrap/assets:/assets:ro,Z \
|
||||
--volume /opt/bootstrap/apply:/apply:ro,Z \
|
||||
--entrypoint=/apply \
|
||||
quay.io/poseidon/kubelet:v1.29.0
|
||||
quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStartPost=/bin/touch /opt/bootstrap/bootstrap.done
|
||||
ExecStartPost=-/usr/bin/podman stop bootstrap
|
||||
storage:
|
||||
@ -165,7 +165,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -179,8 +179,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
chcon -R -u system_u -t container_file_t /etc/kubernetes/pki
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
|
@ -28,7 +28,7 @@ systemd:
|
||||
After=afterburn.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/afterburn
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
|
@ -32,6 +32,13 @@ resource "digitalocean_firewall" "rules" {
|
||||
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
|
||||
}
|
||||
|
||||
# Cilium metrics
|
||||
inbound_rule {
|
||||
protocol = "tcp"
|
||||
port_range = "9962-9965"
|
||||
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
|
||||
}
|
||||
|
||||
# IANA vxlan (flannel, calico)
|
||||
inbound_rule {
|
||||
protocol = "udp"
|
||||
|
@ -106,3 +106,18 @@ variable "cluster_domain_suffix" {
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.9"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
digitalocean = {
|
||||
source = "digitalocean/digitalocean"
|
||||
|
@ -11,7 +11,7 @@ Typhoon distributes upstream Kubernetes, architectural conventions, and cluster
|
||||
|
||||
## Features <a href="https://www.cncf.io/certification/software-conformance/"><img align="right" src="https://storage.googleapis.com/poseidon/certified-kubernetes.png"></a>
|
||||
|
||||
* Kubernetes v1.29.0 (upstream)
|
||||
* Kubernetes v1.30.1 (upstream)
|
||||
* Single or multi-master, [Calico](https://www.projectcalico.org/) or [Cilium](https://github.com/cilium/cilium) or [flannel](https://github.com/coreos/flannel) networking
|
||||
* On-cluster etcd with TLS, [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/)-enabled, [network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
* Advanced features like [snippets](https://typhoon.psdn.io/advanced/customization/#hosts) customization
|
||||
|
@ -1,13 +1,12 @@
|
||||
# Kubernetes assets (kubeconfig, manifests)
|
||||
module "bootstrap" {
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=f0d22ec89517bd7cbb60723d1e6091f278e57bb2"
|
||||
source = "git::https://github.com/poseidon/terraform-render-bootstrap.git?ref=e1b1e0c75e77e042cf369f463f0e656297a201a8"
|
||||
|
||||
cluster_name = var.cluster_name
|
||||
api_servers = [format("%s.%s", var.cluster_name, var.dns_zone)]
|
||||
etcd_servers = digitalocean_record.etcds.*.fqdn
|
||||
|
||||
networking = var.networking
|
||||
|
||||
# only effective with Calico networking
|
||||
network_encapsulation = "vxlan"
|
||||
network_mtu = "1450"
|
||||
@ -17,5 +16,6 @@ module "bootstrap" {
|
||||
cluster_domain_suffix = var.cluster_domain_suffix
|
||||
enable_reporting = var.enable_reporting
|
||||
enable_aggregation = var.enable_aggregation
|
||||
components = var.components
|
||||
}
|
||||
|
||||
|
@ -11,7 +11,7 @@ systemd:
|
||||
Requires=docker.service
|
||||
After=docker.service
|
||||
[Service]
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.10
|
||||
Environment=ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
|
||||
ExecStartPre=/usr/bin/docker run -d \
|
||||
--name etcd \
|
||||
--network host \
|
||||
@ -66,7 +66,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
@ -117,7 +117,7 @@ systemd:
|
||||
Type=oneshot
|
||||
RemainAfterExit=true
|
||||
WorkingDirectory=/opt/bootstrap
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
ExecStart=/usr/bin/docker run \
|
||||
-v /etc/kubernetes/pki:/etc/kubernetes/pki:ro \
|
||||
-v /opt/bootstrap/assets:/assets:ro \
|
||||
@ -167,7 +167,7 @@ storage:
|
||||
contents:
|
||||
inline: |
|
||||
#!/bin/bash -e
|
||||
mkdir -p -- auth tls/etcd tls/k8s static-manifests manifests/coredns manifests-networking
|
||||
mkdir -p -- auth tls/{etcd,k8s} static-manifests manifests/{coredns,kube-proxy,network}
|
||||
awk '/#####/ {filename=$2; next} {print > filename}' assets
|
||||
mkdir -p /etc/ssl/etcd/etcd
|
||||
mkdir -p /etc/kubernetes/pki
|
||||
@ -182,8 +182,7 @@ storage:
|
||||
mv static-manifests/* /etc/kubernetes/manifests/
|
||||
mkdir -p /opt/bootstrap/assets
|
||||
mv manifests /opt/bootstrap/assets/manifests
|
||||
mv manifests-networking/* /opt/bootstrap/assets/manifests/
|
||||
rm -rf assets auth static-manifests tls manifests-networking
|
||||
rm -rf assets auth static-manifests tls manifests
|
||||
- path: /opt/bootstrap/apply
|
||||
mode: 0544
|
||||
contents:
|
||||
|
@ -38,7 +38,7 @@ systemd:
|
||||
After=coreos-metadata.service
|
||||
Wants=rpc-statd.service
|
||||
[Service]
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.29.0
|
||||
Environment=KUBELET_IMAGE=quay.io/poseidon/kubelet:v1.30.1
|
||||
EnvironmentFile=/run/metadata/coreos
|
||||
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
|
||||
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
|
||||
|
@ -32,6 +32,13 @@ resource "digitalocean_firewall" "rules" {
|
||||
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
|
||||
}
|
||||
|
||||
# Cilium metrics
|
||||
inbound_rule {
|
||||
protocol = "tcp"
|
||||
port_range = "9962-9965"
|
||||
source_tags = [digitalocean_tag.controllers.name, digitalocean_tag.workers.name]
|
||||
}
|
||||
|
||||
# IANA vxlan (flannel, calico)
|
||||
inbound_rule {
|
||||
protocol = "udp"
|
||||
|
@ -106,3 +106,18 @@ variable "cluster_domain_suffix" {
|
||||
default = "cluster.local"
|
||||
}
|
||||
|
||||
variable "components" {
|
||||
description = "Configure pre-installed cluster components"
|
||||
# Component configs are passed through to terraform-render-bootstrap,
|
||||
# which handles type enforcement and defines defaults
|
||||
# https://github.com/poseidon/terraform-render-bootstrap/blob/main/variables.tf#L95
|
||||
type = object({
|
||||
enable = optional(bool)
|
||||
coredns = optional(map(any))
|
||||
kube_proxy = optional(map(any))
|
||||
flannel = optional(map(any))
|
||||
calico = optional(map(any))
|
||||
cilium = optional(map(any))
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
@ -6,7 +6,7 @@ terraform {
|
||||
null = ">= 2.1"
|
||||
ct = {
|
||||
source = "poseidon/ct"
|
||||
version = "~> 0.11"
|
||||
version = "~> 0.13"
|
||||
}
|
||||
digitalocean = {
|
||||
source = "digitalocean/digitalocean"
|
||||
|
@ -1,9 +1,131 @@
# Addons
# Components

Typhoon clusters are verified to work well with several post-install addons.
Typhoon's component model allows for managing cluster components independently of the cluster's lifecycle, upgrading them in a rolling or automated fashion, or customizing components in advanced ways.

Typhoon clusters install core components like `CoreDNS`, `kube-proxy`, and a chosen CNI provider (`flannel`, `calico`, or `cilium`) by default. Since v1.30.1, pre-installed components are optional. Other "addon" components like Nginx Ingress, Prometheus, or Grafana may be optionally applied through the component model (after cluster creation).

## Components

Pre-installed by default:

* CoreDNS
* kube-proxy
* CNI provider (set via `var.networking`)
    * flannel
    * Calico
    * Cilium

Addons:

* Nginx [Ingress Controller](ingress.md)
* [Prometheus](prometheus.md)
* [Grafana](grafana.md)
* [fleetlock](fleetlock.md)

## Pre-installed Components

By default, Typhoon clusters install `CoreDNS`, `kube-proxy`, and a chosen CNI provider (`flannel`, `calico`, or `cilium`). Disable any or all of these components using the `components` system.

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

  # Google Cloud
  cluster_name  = "yavin"
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # pre-installed components (defaults shown)
  components = {
    enable = true
    coredns = {
      enable = true
    }
    kube_proxy = {
      enable = true
    }
    # Only the CNI set in var.networking will be installed
    flannel = {
      enable = true
    }
    calico = {
      enable = true
    }
    cilium = {
      enable = true
    }
  }
}
```

!!! warn
    Disabling pre-installed components is for advanced users who intend to manage these components separately. Without a CNI provider, cluster nodes will be NotReady and wait for the CNI provider to be applied.

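For example, a minimal sketch that skips the pre-installed components entirely (this assumes the top-level `enable = false` turns them all off and that you will apply CoreDNS, kube-proxy, and a CNI provider yourself):

```tf
module "yavin" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

  # Google Cloud
  cluster_name  = "yavin"
  region        = "us-central1"
  dns_zone      = "example.com"
  dns_zone_name = "example-zone"

  # configuration
  ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."

  # advanced: don't pre-install any components; CoreDNS, kube-proxy,
  # and the CNI provider must be managed out-of-band
  components = {
    enable = false
  }
}
```
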
## Managing Components

If you choose to manage components yourself, a recommended pattern is to use a separate Terraform workspace per component, like you would for any application.

```
mkdir -p infra/components/{coredns,cilium}

tree components/coredns
components/coredns/
├── backend.tf
├── manifests.tf
└── providers.tf
```

Let's consider managing CoreDNS resources. Configure the `kubernetes` provider to use the kubeconfig credentials of your Typhoon cluster(s) in a `providers.tf` file. Here we show provider blocks for interacting with Typhoon clusters on AWS, Azure, or Google Cloud, assuming each cluster's `kubeconfig-admin` output was written to a local file.

```tf
provider "kubernetes" {
  alias       = "aws"
  config_path = "~/.kube/configs/aws-config"
}

provider "kubernetes" {
  alias       = "google"
  config_path = "~/.kube/configs/google-config"
}

...
```
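
The `backend.tf` in the layout above holds ordinary Terraform state configuration for the component workspace. A minimal sketch using a local backend (the filename and backend choice are illustrative; substitute your own remote backend):

```tf
# backend.tf - where this component workspace keeps its Terraform state
# (local backend shown for illustration; s3/gcs/azurerm work the same way)
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}
```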

Typhoon maintains Terraform modules for most addon components. You can reference `main`, a tagged release, a SHA revision, or a custom module of your own. Define the CoreDNS manifests using the `addons/coredns` module in a `manifests.tf` file.

```tf
# CoreDNS manifests for the aws cluster
module "aws" {
  source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
  providers = {
    kubernetes = kubernetes.aws
  }
}

# CoreDNS manifests for the google cloud cluster
module "google" {
  source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
  providers = {
    kubernetes = kubernetes.google
  }
}
...
```

Plan and apply the CoreDNS Kubernetes resources to cluster(s).

```
terraform plan
terraform apply
...
module.aws.kubernetes_service_account.coredns: Refreshing state... [id=kube-system/coredns]
module.aws.kubernetes_config_map.coredns: Refreshing state... [id=kube-system/coredns]
module.aws.kubernetes_cluster_role.coredns: Refreshing state... [id=system:coredns]
module.aws.kubernetes_cluster_role_binding.coredns: Refreshing state... [id=system:coredns]
module.aws.kubernetes_service.coredns: Refreshing state... [id=kube-system/coredns]
...
```
|
@ -15,7 +15,7 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "gravitas"

@ -40,7 +40,7 @@ Create a cluster on AWS with ARM64 controller and worker nodes. Container worklo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "gravitas"

@ -66,9 +66,9 @@ Verify the cluster has only arm64 (`aarch64`) nodes. For Flatcar Linux, describe

```
$ kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION            CONTAINER-RUNTIME
ip-10-0-21-119   Ready    <none>   77s   v1.29.0   10.0.21.119   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-32-166   Ready    <none>   80s   v1.29.0   10.0.32.166   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-5-79     Ready    <none>   77s   v1.29.0   10.0.5.79     <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-21-119   Ready    <none>   77s   v1.30.1   10.0.21.119   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-32-166   Ready    <none>   80s   v1.30.1   10.0.32.166   <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
ip-10-0-5-79     Ready    <none>   77s   v1.30.1   10.0.5.79     <none>        Fedora CoreOS 35.20211215.3.0   5.15.7-200.fc35.aarch64   containerd://1.5.8
```

## Hybrid

@ -79,7 +79,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "gravitas"

@ -102,7 +102,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "gravitas"

@ -125,7 +125,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# AWS
vpc_id = module.gravitas.vpc_id

@ -149,7 +149,7 @@ Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a [wo

```tf
module "gravitas-arm64" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.1"

# AWS
vpc_id = module.gravitas.vpc_id

@ -174,10 +174,10 @@ Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.

```
$ kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                             KERNEL-VERSION           CONTAINER-RUNTIME
ip-10-0-1-73      Ready    <none>   111m   v1.29.0   10.0.1.73     <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
ip-10-0-22-79...  Ready    <none>   111m   v1.29.0   10.0.22.79    <none>        Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)   5.10.84-flatcar          containerd://1.5.8
ip-10-0-24-130    Ready    <none>   111m   v1.29.0   10.0.24.130   <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
ip-10-0-39-19     Ready    <none>   111m   v1.29.0   10.0.39.19    <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
ip-10-0-1-73      Ready    <none>   111m   v1.30.1   10.0.1.73     <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
ip-10-0-22-79...  Ready    <none>   111m   v1.30.1   10.0.22.79    <none>        Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo)   5.10.84-flatcar          containerd://1.5.8
ip-10-0-24-130    Ready    <none>   111m   v1.30.1   10.0.24.130   <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
ip-10-0-39-19     Ready    <none>   111m   v1.30.1   10.0.39.19    <none>        Fedora CoreOS 35.20211215.3.0                        5.15.7-200.fc35.x86_64   containerd://1.5.8
```
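With both architectures in one cluster, images that are not multi-arch need to be pinned to matching nodes. A minimal sketch using the Terraform `kubernetes` provider and the well-known `kubernetes.io/arch` node label (the provider configuration and the image are assumptions here, not part of this change):

```tf
# Sketch: schedule a Deployment only onto arm64 (aarch64) nodes.
resource "kubernetes_deployment" "arm64-app" {
  metadata {
    name      = "arm64-app"
    namespace = "default"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "arm64-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "arm64-app"
        }
      }

      spec {
        # Standard node label set by the kubelet; "amd64" would pin to x86_64 nodes instead.
        node_selector = {
          "kubernetes.io/arch" = "arm64"
        }

        container {
          name  = "app"
          image = "nginx:alpine" # assumed multi-arch (includes arm64) image
        }
      }
    }
  }
}
```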
## Azure

@ -186,7 +186,7 @@ Create a cluster on Azure with ARM64 controller and worker nodes. Container work

```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.1"

# Azure
cluster_name = "ramius"

@ -36,7 +36,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"

@ -57,7 +57,7 @@ Add custom initial worker node labels to default workers or worker pool nodes to

```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"

@ -89,7 +89,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"

@ -110,7 +110,7 @@ Add custom initial taints on worker pool nodes to indicate a node is unique and

```tf
module "yavin-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"
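Pods only land on labeled or tainted worker pools if they ask to. A minimal sketch of the consuming side with the Terraform `kubernetes` provider, assuming a pool labeled `worker-pool=gpu` and tainted `role=gpu:NoSchedule` (both values are illustrative, not taken from this change):

```tf
# Sketch: a Pod that targets a labeled pool and tolerates its taint.
resource "kubernetes_pod" "gpu-job" {
  metadata {
    name      = "gpu-job"
    namespace = "default"
  }

  spec {
    # Only schedule onto nodes carrying the custom label.
    node_selector = {
      "worker-pool" = "gpu"
    }

    # Tolerate the custom taint applied to that pool.
    toleration {
      key      = "role"
      operator = "Equal"
      value    = "gpu"
      effect   = "NoSchedule"
    }

    container {
      name    = "job"
      image   = "busybox:1.36"
      command = ["sleep", "3600"]
    }
  }
}
```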
@ -19,7 +19,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# AWS
vpc_id = module.tempest.vpc_id

@ -42,7 +42,7 @@ Create a cluster following the AWS [tutorial](../flatcar-linux/aws.md#cluster).

```tf
module "tempest-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.1"

# AWS
vpc_id = module.tempest.vpc_id

@ -111,7 +111,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# Azure
region = module.ramius.region

@ -137,7 +137,7 @@ Create a cluster following the Azure [tutorial](../flatcar-linux/azure.md#cluste

```tf
module "ramius-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.30.1"

# Azure
region = module.ramius.region

@ -207,7 +207,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"

# Google Cloud
region = "europe-west2"

@ -231,7 +231,7 @@ Create a cluster following the Google Cloud [tutorial](../flatcar-linux/google-c

```tf
module "yavin-worker-pool" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.30.1"

# Google Cloud
region = "europe-west2"

@ -262,11 +262,11 @@ Verify a managed instance group of workers joins the cluster within a few minute

```
$ kubectl get nodes
NAME                                           STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal      Ready    6m    v1.29.0
yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.29.0
yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.29.0
yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.29.0
yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.29.0
yavin-controller-0.c.example-com.internal      Ready    6m    v1.30.1
yavin-worker-jrbf.c.example-com.internal       Ready    5m    v1.30.1
yavin-worker-mzdm.c.example-com.internal       Ready    5m    v1.30.1
yavin-16x-worker-jrbf.c.example-com.internal   Ready    3m    v1.30.1
yavin-16x-worker-mzdm.c.example-com.internal   Ready    3m    v1.30.1
```
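The truncated hunks above show only the first arguments of each worker pool; for orientation, a fuller declaration might look roughly like the sketch below. The argument names and module outputs used here (`name`, `kubeconfig`, `ssh_authorized_key`, `worker_count`, `machine_type`, `network_name`) are assumptions, not taken from this change:

```tf
module "yavin-16x-worker-pool" {
  source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"

  # Google Cloud
  region       = "europe-west2"
  network      = module.yavin.network_name # assumed output name
  cluster_name = "yavin"

  # configuration
  name               = "yavin-16x"             # matches the node names listed above
  kubeconfig         = module.yavin.kubeconfig # assumed output name
  ssh_authorized_key = var.ssh_authorized_key

  # optional
  worker_count = 2
  machine_type = "n1-standard-16"
}
```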
### Variables

@ -51,7 +51,7 @@ Add firewall rules to the worker security group.

```tf
resource "azurerm_network_security_rule" "some-app" {
resource_group_name = "${module.ramius.resource_group_name}"
resource_group_name = module.ramius.resource_group_name

name = "some-app"
network_security_group_name = module.ramius.worker_security_group_name
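For context, the rule above is only a fragment; a complete custom rule against the worker security group might look like this sketch (the priority, protocol, and port values are illustrative assumptions, not part of this change):

```tf
resource "azurerm_network_security_rule" "some-app" {
  resource_group_name = module.ramius.resource_group_name

  # Allow inbound traffic to the NodePort range on worker nodes.
  name                        = "some-app"
  network_security_group_name = module.ramius.worker_security_group_name
  priority                    = 3000
  access                      = "Allow"
  direction                   = "Inbound"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "30000-32767"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
}
```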
@ -1,10 +1,10 @@

# AWS

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on AWS with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on AWS with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/fedora-coreos/kubernetes`.

```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "tempest"

@ -145,9 +145,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-3-155   Ready    <none>   10m   v1.29.0
ip-10-0-26-65   Ready    <none>   10m   v1.29.0
ip-10-0-41-21   Ready    <none>   10m   v1.29.0
ip-10-0-3-155   Ready    <none>   10m   v1.30.1
ip-10-0-26-65   Ready    <none>   10m   v1.30.1
ip-10-0-41-21   Ready    <none>   10m   v1.30.1
```
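The `tempest-config` kubeconfig exported above is typically written out from the module's kubeconfig output; a minimal sketch (the `kubeconfig-admin` output name is an assumption here, not shown in this change):

```tf
# Write the generated admin kubeconfig to the path used with KUBECONFIG above.
resource "local_file" "kubeconfig-tempest" {
  content  = module.tempest.kubeconfig-admin # assumed output name
  filename = "/home/user/.kube/configs/tempest-config"
}
```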
List the pods.

@ -1,10 +1,10 @@

# Azure

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on Azure with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on Azure with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -86,7 +86,7 @@ Define a Kubernetes cluster using the module `azure/fedora-coreos/kubernetes`.

```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes?ref=v1.30.1"

# Azure
cluster_name = "ramius"

@ -161,9 +161,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
ramius-controller-0    Ready    <none>   24m   v1.29.0
ramius-worker-000001   Ready    <none>   25m   v1.29.0
ramius-worker-000002   Ready    <none>   24m   v1.29.0
ramius-controller-0    Ready    <none>   24m   v1.30.1
ramius-worker-000001   Ready    <none>   25m   v1.30.1
ramius-worker-000002   Ready    <none>   24m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.29.0 cluster on bare-metal with Fedora CoreOS.
In this tutorial, we'll network boot and provision a Kubernetes v1.30.1 cluster on bare-metal with Fedora CoreOS.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Fedora CoreOS to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/fedora-coreos/kubernete

```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=v1.30.1"

# bare-metal
cluster_name = "mercury"

@ -191,7 +191,7 @@ Workers with similar features can be defined inline using the `workers` field as

```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.30.1"

# bare-metal
cluster_name = "mercury"

@ -313,9 +313,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
node1.example.com   Ready    <none>   10m   v1.29.0
node2.example.com   Ready    <none>   10m   v1.29.0
node3.example.com   Ready    <none>   10m   v1.29.0
node1.example.com   Ready    <none>   10m   v1.30.1
node2.example.com   Ready    <none>   10m   v1.30.1
node3.example.com   Ready    <none>   10m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on DigitalOcean with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on DigitalOcean with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/fedora-coreos/kubern

```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/fedora-coreos/kubernetes?ref=v1.30.1"

# Digital Ocean
cluster_name = "nemo"

@ -155,9 +155,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
10.132.110.130   Ready    <none>   10m   v1.29.0
10.132.115.81    Ready    <none>   10m   v1.29.0
10.132.124.107   Ready    <none>   10m   v1.29.0
10.132.110.130   Ready    <none>   10m   v1.30.1
10.132.115.81    Ready    <none>   10m   v1.30.1
10.132.124.107   Ready    <none>   10m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# Google Cloud

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on Google Compute Engine with Fedora CoreOS.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on Google Compute Engine with Fedora CoreOS.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/fedora-coreos/kuberne

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"

@ -147,9 +147,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES    STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   <none>   Ready    6m    v1.29.0
yavin-worker-jrbf.c.example-com.internal    <none>   Ready    5m    v1.29.0
yavin-worker-mzdm.c.example-com.internal    <none>   Ready    5m    v1.29.0
yavin-controller-0.c.example-com.internal   <none>   Ready    6m    v1.30.1
yavin-worker-jrbf.c.example-com.internal    <none>   Ready    5m    v1.30.1
yavin-worker-mzdm.c.example-com.internal    <none>   Ready    5m    v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# AWS

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on AWS with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on AWS with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a VPC, gateway, subnets, security groups, controller instances, worker auto-scaling group, network load balancer, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -72,7 +72,7 @@ Define a Kubernetes cluster using the module `aws/flatcar-linux/kubernetes`.

```tf
module "tempest" {
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.1"

# AWS
cluster_name = "tempest"

@ -145,9 +145,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/tempest-config
$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-3-155   Ready    <none>   10m   v1.29.0
ip-10-0-26-65   Ready    <none>   10m   v1.29.0
ip-10-0-41-21   Ready    <none>   10m   v1.29.0
ip-10-0-3-155   Ready    <none>   10m   v1.30.1
ip-10-0-26-65   Ready    <none>   10m   v1.30.1
ip-10-0-41-21   Ready    <none>   10m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# Azure

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on Azure with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on Azure with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -75,7 +75,7 @@ Define a Kubernetes cluster using the module `azure/flatcar-linux/kubernetes`.

```tf
module "ramius" {
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.1"

# Azure
cluster_name = "ramius"

@ -149,9 +149,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/ramius-config
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
ramius-controller-0    Ready    <none>   24m   v1.29.0
ramius-worker-000001   Ready    <none>   25m   v1.29.0
ramius-worker-000002   Ready    <none>   24m   v1.29.0
ramius-controller-0    Ready    <none>   24m   v1.30.1
ramius-worker-000001   Ready    <none>   25m   v1.30.1
ramius-worker-000002   Ready    <none>   24m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# Bare-Metal

In this tutorial, we'll network boot and provision a Kubernetes v1.29.0 cluster on bare-metal with Flatcar Linux.
In this tutorial, we'll network boot and provision a Kubernetes v1.30.1 cluster on bare-metal with Flatcar Linux.

First, we'll deploy a [Matchbox](https://github.com/poseidon/matchbox) service and setup a network boot environment. Then, we'll declare a Kubernetes cluster using the Typhoon Terraform module and power on machines. On PXE boot, machines will install Container Linux to disk, reboot into the disk install, and provision themselves as Kubernetes controllers or workers via Ignition.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns` while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns` while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -154,7 +154,7 @@ Define a Kubernetes cluster using the module `bare-metal/flatcar-linux/kubernete

```tf
module "mercury" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=v1.30.1"

# bare-metal
cluster_name = "mercury"

@ -194,7 +194,7 @@ Workers with similar features can be defined inline using the `workers` field as

```tf
module "mercury-node1" {
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes/worker?ref=v1.30.1"

# bare-metal
cluster_name = "mercury"

@ -323,9 +323,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/mercury-config
$ kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
node1.example.com   Ready    <none>   10m   v1.29.0
node2.example.com   Ready    <none>   10m   v1.29.0
node3.example.com   Ready    <none>   10m   v1.29.0
node1.example.com   Ready    <none>   10m   v1.30.1
node2.example.com   Ready    <none>   10m   v1.30.1
node3.example.com   Ready    <none>   10m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# DigitalOcean

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on DigitalOcean with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on DigitalOcean with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -81,7 +81,7 @@ Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubern

```tf
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.30.1"

# Digital Ocean
cluster_name = "nemo"

@ -155,9 +155,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
10.132.110.130   Ready    <none>   10m   v1.29.0
10.132.115.81    Ready    <none>   10m   v1.29.0
10.132.124.107   Ready    <none>   10m   v1.29.0
10.132.110.130   Ready    <none>   10m   v1.30.1
10.132.115.81    Ready    <none>   10m   v1.30.1
10.132.124.107   Ready    <none>   10m   v1.30.1
```

List the pods.
@ -1,10 +1,10 @@

# Google Cloud

In this tutorial, we'll create a Kubernetes v1.29.0 cluster on Google Compute Engine with Flatcar Linux.
In this tutorial, we'll create a Kubernetes v1.30.1 cluster on Google Compute Engine with Flatcar Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a network, firewall rules, health checks, controller instances, worker managed instance group, load balancers, and TLS assets.

Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and (`flannel`, `calico`, or `cilium`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.

## Requirements

@ -73,7 +73,7 @@ Define a Kubernetes cluster using the module `google-cloud/flatcar-linux/kuberne

```tf
module "yavin" {
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.29.0"
source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes?ref=v1.30.1"

# Google Cloud
cluster_name = "yavin"

@ -147,9 +147,9 @@ List nodes in the cluster.

$ export KUBECONFIG=/home/user/.kube/configs/yavin-config
$ kubectl get nodes
NAME                                        ROLES    STATUS   AGE   VERSION
yavin-controller-0.c.example-com.internal   <none>   Ready    6m    v1.29.0
yavin-worker-jrbf.c.example-com.internal    <none>   Ready    5m    v1.29.0
yavin-worker-mzdm.c.example-com.internal    <none>   Ready    5m    v1.29.0
yavin-controller-0.c.example-com.internal   <none>   Ready    6m    v1.30.1
yavin-worker-jrbf.c.example-com.internal    <none>   Ready    5m    v1.30.1
yavin-worker-mzdm.c.example-com.internal    <none>   Ready    5m    v1.30.1
```

List the pods.
Some files were not shown because too many files have changed in this diff.